Category Archives: Technology

A techie’s day out :)

SonarQube Code Quality Platform

As an architect, my core responsibility is to improve the quality of software delivery. One of the core aspects of maintaining quality is code review. I am always inclined toward, and insistent upon, offline manual code reviews. But in distributed teams and large settings, it is very difficult to keep up with the pace of development that happens over the week, or even day by day. Another aspect is that, at the organizational level, the consistency of manual code reviews is difficult to maintain. Also, each code review has a particular focus area: system components written in different programming languages, performance, maintainability, security, unit testing, design reviews, and so on. I have seen many small and mid-sized organizations still struggling with the latter part, institutionalizing delivery quality across projects or products.

My quest for institutionalizing software quality landed me on the SonarQube (also referred to as “Sonar”) platform. It is an open platform to manage code quality. It is developed in Java but fairly easy to install and use. It also works in an environment where the system is made of components written in different programming languages. I have been using SonarQube for the last two years and am pretty happy with the results. It is one of the tools that greatly helps bring the much-needed focus on technical quality and engineering to an entire organization or product.

Without much ado, let’s see how you can get hands-on with SonarQube and help your organization set up a code quality platform. We are going to use SonarQube version 5.0.1.

SonarQube Installation and Configuration

Pre-requisites

1. Download and install Java JDK 1.7. Check whether your operating system is 32-bit or 64-bit and select the appropriate package. This is important since we need to select the SonarQube installation package based on the JDK version.

Url: http://www.oracle.com/technetwork/java/javase/downloads/java-se-JDK-7-download-432154.html

2. Set JAVA_HOME to the JDK installation path under C:\ as shown in the following snapshot. Make sure to set it in the “User variables” section, and also add it to the PATH variable under “System variables”.

[Screenshot: Environment Variables]

Please DO NOT FORGET to restart your system after creating, editing, or modifying any environment variable entries.

Download the following software/libraries:

FxCop 10

http://download-codeplex.sec.s-msft.com/Download/Release?ProjectName=fxcopinstaller&DownloadId=821386&FileTime=130407655516000000&Build=20959

StyleCop 4.7.49.0

http://download-codeplex.sec.s-msft.com/Download/Release?ProjectName=stylecop&DownloadId=323236&FileTime=130408175287730000&Build=20959

[Screenshot: System Properties Window]

OpenCover 4.5.3522

https://github.com/OpenCover/opencover/releases/download/4.5.3522/opencover.4.5.3522.msi

Please note: while installing OpenCover, choose the advanced option and select installation for “all users”. Set the installation path to C:\Program Files (x86)\OpenCover. With the default options, OpenCover installs under c:\users\<your username>\.

SonarQube v.5.0.1

http://dist.sonar.codehaus.org/sonarqube-5.0.1.zip

Extract the zip to c:\sonar

Sonar Runner v. 2.4

http://repo1.maven.org/maven2/org/codehaus/sonar/runner/sonar-runner-dist/2.4/sonar-runner-dist-2.4.zip

Extract the zip to c:\sonarrunner.

Then create a new environment variable under “User variables” named SONAR_RUNNER_HOME and set its value to C:\sonarrunner.

Also, edit the Path variable under “System variables” and append the value ;C:\sonarrunner\bin; as shown in the following snapshot.

Please DO NOT FORGET to restart your system after creating, editing, or modifying any environment variable entries.
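After the restart, a quick sanity check from a fresh command prompt helps catch environment variable mistakes early. These commands only verify that everything resolves; they change nothing:

    java -version
    echo %SONAR_RUNNER_HOME%
    sonar-runner -h

If the Path entry is correct, the last command should print the Sonar Runner help text rather than a “not recognized” error.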

Download Sample Project

https://github.com/SonarSource/sonar-examples/archive/master.zip

This has various sample projects for Sonar which are good for demo purposes. Extract the zip anywhere; since we are only interested in the .NET projects, copy the CSharp folder to the C drive, i.e., your project is at “c:\csharp”.

To run the SonarQube server

1. Go to the “C:\sonar\bin” folder and choose the appropriate subfolder for your OS version. For example, I am running Windows 8.1 64-bit, so I have chosen the “windows-x86-64” folder. If you are working on a 32-bit edition of Windows, choose the “windows-x86-32” folder and open it.

2. From a command prompt, run “StartSonar.bat”. This will keep the SonarQube server running. If everything goes smoothly, you will see a prompt like the following:

[Screenshot: StartSonar.bat output]
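In other words, assuming the 64-bit folder, the whole step from a command prompt boils down to:

    cd C:\sonar\bin\windows-x86-64
    StartSonar.bat

(Adjust the folder name if you are on a 32-bit OS.)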

If you face any error at this stage, please check whether you installed the correct JDK version (32-bit/64-bit). Also verify that all environment variables are correct.

Now you can visit http://localhost:9000 and you will be greeted with the default SonarQube page. Log in there using the “admin/admin” credentials. Keep the command-line SonarQube process running for as long as you want the server up.

Configuring C# Plugins and Mapping Paths

1. Once you log in, go to the Update Center by navigating as follows: on the top right corner, select Settings -> (on the left navigation pane) System -> Update Center -> Available Plugins.

Install the plugins mentioned in the following snapshot. You will see an Install button once you click on the plugin name. For example, the screenshot below shows this for Java.

[Screenshot: Mapping paths for the SonarQube rule engine]

[Screenshot: Java SonarQube plugin installation]

[Screenshot: Sonar dashboard]

2. After installation, we need to set the local paths to the tools the plugins drive (FxCop, StyleCop and OpenCover). Navigate to Settings -> General Settings -> select category C# -> Code Analysis/C#.

3. Set “Path to FxCopCmd.exe” to “C:/Program Files (x86)/Microsoft Fxcop 10.0/FxCopCmd.exe” and save the settings.

4. Now go to the Rules menu and activate the “Sonar way” quality profile. If you don’t see any profile in the dropdown, make sure to set the “Sonar way” profile as the default quality profile under the Quality Profiles tab.

[Screenshot: SonarQube Quality Profile]

How to Run SonarQube

SonarQube works on a server/client model. Each project/solution/codebase has its client-side settings file, called “sonar-project.properties”. It is a convention and best practice to keep this file in the same folder as the Visual Studio solution file.

We need to author this sonar-project.properties file for each Visual Studio solution.
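As an illustration, a minimal sonar-project.properties for a C# solution could look like the sketch below. The project key and name are placeholders, and the exact analyzer-specific properties depend on the C# plugin version you installed, so treat this as a starting point rather than a definitive file:

    # Mandatory project identity
    sonar.projectKey=org.example:csharp-playground
    sonar.projectName=CSharp Playground
    sonar.projectVersion=1.0
    # Where the sources live, relative to this file
    sonar.sources=.
    # Analyze this codebase as C#
    sonar.language=cs
    sonar.sourceEncoding=UTF-8

The sample project under c:\csharp should already ship with a working properties file, so you only need to author one like this for your own solutions.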

Running Code Quality Analysis for the Sample Project

1. Now, from a command prompt, navigate to “C:\csharp” and execute the command “sonar-runner”.

2. This will take a few minutes and give you an execution Success message at the end. If there are errors, you can rerun the process with “sonar-runner -e” to view them.

3. Now browse to http://localhost:9000/ and select “CSharp Playground” to view the dashboard for the sample project. The dashboard should look as shown below.

[Screenshot: Sonar final dashboard]
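To recap, the whole analysis run from a command prompt is only a couple of commands:

    cd C:\csharp
    sonar-runner
    rem Rerun with error details if the first run failed:
    sonar-runner -e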

Please note that, for this post, I have configured a bare-metal SonarQube. As you keep analyzing the same codebase in the future, it will also show you comparisons, trends, and a lot more data than displayed in the screenshot above. There is a great plugin ecosystem that gives you various perspectives on your codebase through SonarQube plugins. For more interesting statistics and a demo, you can also visit http://nemo.sonarqube.org

Things To Do After SonarQube Code Quality Analysis

Please be mindful that SonarQube provides a much-needed platform for code quality, but do not fall into the trap of becoming purely number-obsessed.

Code quality is a journey towards technical excellence, and the SonarQube platform, or the Sonar code quality index, offers one important perspective on it.

Difference between Log Shipping and Database Mirroring

This is a self-study-notes kind of post.

Log shipping

  • The primary server, secondary server, and monitor server are the components in a log shipping setup.
  • The monitor server is optional.
  • Log shipping is a manual failover process.
  • There is no automatic application connection redirection; connections have to be redirected manually.
  • Log shipping can have multiple secondary databases for synchronization.
  • There will be data transfer latency.
  • In log shipping, the secondary database can be used for a reporting solution.
  • Both committed and uncommitted transactions are transferred to the secondary database.
  • Log shipping supports both the bulk-logged recovery model and the full recovery model.

With Log Shipping:

  • Data Transfer: T-logs are backed up and transferred to the secondary server
  • Transactional Consistency: All committed and uncommitted transactions are transferred
  • Server Limitation: Can be applied to multiple standby servers
  • Failover: Manual
  • Failover Duration: Can take more than 30 minutes
  • Role Change: Manual
  • Client Re-direction: Manual changes required

 

Database Mirroring

  • The principal server, mirror server, and witness server are the components involved in a database mirroring setup.
  • The witness server is optional, but it is a must for automatic failover, since the witness is a watchdog instance that checks whether the principal server is working.
  • Database mirroring is an automatic failover process.
  • Application connections can be redirected automatically with proper configuration.
  • Database mirroring does not allow multiple destination databases; a single mirror database synchronizes with the principal database.
  • There will be no data transfer latency.
  • In database mirroring, the mirror database cannot be used for a reporting solution. If the need arises, a database snapshot should be created to set up the reporting solution.
  • Only committed transactions are transferred to the mirror database.
  • Mirroring supports only the full recovery model.

With Database Mirroring:

  • Data Transfer: Individual T-log records are transferred using TCP endpoints
  • Transactional Consistency: Only committed transactions are transferred
  • Server Limitation: Can be applied to only one mirror server
  • Failover: Automatic
  • Failover Duration: Fast, often under 3 seconds and rarely more than 10 seconds
  • Role Change: Fully automatic
  • Client Re-direction: Fully automatic, as ADO.NET (.NET 2.0 onwards) supports it (see the connection string sketch below)
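The automatic client redirection works because ADO.NET, from .NET 2.0 onwards, understands a failover partner in the connection string. A sketch with placeholder server and database names:

    Server=PrincipalSrv;Failover Partner=MirrorSrv;Database=SalesDb;Integrated Security=True;

If the principal is unreachable when the application connects, the client library transparently retries against the mirror.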

 

Please note that the database mirroring feature is deprecated (Microsoft announced its deprecation with SQL Server 2012), and Microsoft recommends the AlwaysOn feature instead of log shipping or database mirroring from that version onwards.

Considerations for PCI-DSS Compliant Solution Development – Part 2

For the earlier nine points, kindly refer to my earlier blog post: Considerations for PCI-DSS Compliant Solution Development – Part 1

  1. Develop applications based on secure coding guidelines. Prevent common coding vulnerabilities in software development processes, including the following:
    a. Documentation of impact: document the impact of a change in code or customization of the software.
    b. Documented change approval by authorized parties.
    c. Functionality testing to verify that the change does not adversely impact the security of the system.
    d. Back-out procedures.
  2. Testing should be done to avoid flaws like SQL injection. Also consider OS command injection, LDAP and XPath injection flaws, buffer overflows, cross-site scripting (XSS) attacks, and cross-site request forgery (CSRF). (A parameterized query sketch follows this list.)
  3. Develop all web applications based on secure coding guidelines, such as the Open Web Application Security Project (OWASP) guidelines. Review custom application code to identify coding vulnerabilities. Cover prevention of common coding vulnerabilities in software development processes, including the following:
    • Unvalidated input
    • Broken access control (for example, malicious use of user IDs)
    • Broken authentication and session management (use of account credentials and session cookies)
    • Cross-site scripting (XSS) attacks
    • Buffer overflows
    • Injection flaws (for example, structured query language (SQL) injection)
    • Improper error handling
    • Insecure storage (cryptographic or otherwise)
    • Denial of service
    • Security Misconfiguration
    • Insecure Direct Object References
    • Cross-Site Request Forgery (CSRF)
    • Failure to Restrict URL Access
    • Insufficient Transport Layer Protection
    • Unvalidated Redirects and Forwards
  4. SSL protects data that is transmitted between a browser and the web server. It is critical that you have SSL enabled on the web server, and this should be among the first steps taken after installation.
    • The web server must be configured to use the SSL v3 or TLS v1 protocols with strong encryption (a 128-bit or longer key is required)
    • Install an SSL certificate issued for the specified web domain.
  5. PCI compliance requires that you use unique user names and secure authentication to access any PCs, servers, and databases with payment applications and/or cardholder data. This means that you should use different user names/passwords:
    a. For your web hosting account administration area (the web hosting account where your online store is hosted)
    b. For FTP access to the web server
    c. For Remote Desktop connections to the web server (if available)
    d. To connect to the MySQL server that contains your store data.
  6. Audit trails
    Audit trails/logs should be automatically enabled with the default installation of the software solution. There should be no option to disable audit logging.
    The following types of activity should be logged:
    a. All actions taken by any individual with root or administrative privileges
    b. Initialization of the audit logs
    c. User sign-in and sign-out
    Individual access to cardholder data is not logged, because cardholder data is not stored before and after authentication. Access to audit trails must be provided at the operating system level. Each log event includes:
    1. Type of event
    2. Date and time of the event
    3. Username and IP address
    4. Success or failure indication
    5. Action which led to the event
    6. Component which led to the event

  7. Wireless communications
    a. If you use wireless networking to access the software, it is your responsibility to ensure your wireless security configuration follows the PCI DSS requirements.
    b. Personal firewall software should be installed on any mobile and employee-owned computers that have direct access to the internet and are also used to access your network.
    c. Change wireless vendor defaults, including but not limited to wired equivalent privacy (WEP) keys, the default service set identifier (SSID), passwords, and SNMP community strings. Disable SSID broadcasts. Enable WiFi protected access (WPA and WPA2) technology for encryption and authentication when WPA-capable.
    d. Encrypt wireless transmissions by using WiFi protected access (WPA or WPA2) technology, IPSEC VPN, or SSL/TLS.
    e. Never rely exclusively on wired equivalent privacy (WEP) to protect confidentiality and access to a wireless LAN. If WEP is used, do the following:
    f. Use a minimum 104-bit encryption key and 24-bit initialization value
    g. Use ONLY in conjunction with WiFi protected access (WPA or WPA2) technology, VPN, or SSL/TLS
    h. Rotate shared WEP keys quarterly (or automatically if the technology permits)
    i. Rotate shared WEP keys whenever there are changes in personnel with access to keys
    j. Restrict access based on media access control (MAC) address.
    k. Install perimeter firewalls between any wireless networks and the cardholder data environment, and configure these firewalls to deny any traffic from the wireless environment, or to control it if it is necessary for business purposes.

  8. Remote access
    The software provides web-based access using two-factor authentication based on one-time PIN codes.
    a. If you enable remote access to your network and the cardholder data environment, you must implement two-factor authentication. Use technologies such as remote authentication dial-in user service (RADIUS) or terminal access controller access control system (TACACS) with tokens, or VPN (based on SSL/TLS or IPSEC) with individual certificates. You should make sure that any remote access software is securely configured, keeping in mind the following:
    b. Change default settings in the remote access software (for example, change default passwords and use unique passwords for each customer)
    c. Allow connections only from specific (known) IP/MAC addresses
    d. Use strong authentication or complex passwords for logins
    e. Enable encrypted data transmission
    f. Enable account lockout after a certain number of failed login attempts
    g. Configure the system so a remote user must establish a Virtual Private Network (“VPN”) connection via a firewall before access is allowed
    h. Enable any logging or auditing functions
    i. Restrict access to customer passwords to authorized reseller/integrator personnel
    j. Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from backup).
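As a concrete illustration of the SQL injection point in item 2 above, the single most effective coding habit is to never concatenate user input into SQL text. A minimal C# sketch; the table, column, and connection string are hypothetical placeholders:

    using System;
    using System.Data.SqlClient;

    class CustomerLookup
    {
        static void Main()
        {
            string email = Console.ReadLine();

            using (var conn = new SqlConnection(
                "Server=.;Database=ShopDb;Integrated Security=True;"))
            using (var cmd = new SqlCommand(
                "SELECT CustomerId FROM Customers WHERE Email = @email", conn))
            {
                // The value travels as a typed parameter, never spliced into
                // the SQL text, so it cannot alter the statement's structure.
                cmd.Parameters.AddWithValue("@email", email);

                conn.Open();
                object id = cmd.ExecuteScalar();
                Console.WriteLine(id ?? "not found");
            }
        }
    }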

Considerations for PCI-DSS Compliant Solution Development – Part 1

The following are considerations for the development and implementation of software solutions in a PCI-DSS compliant environment. They should be treated as functional and/or quality requirements while developing a PCI-DSS compliant solution.

  1. Ensure that all system components and software are protected from known vulnerabilities by having the latest vendor-supplied security patches installed. Install critical security patches within one month of release. This applies to all frameworks, as well as operating systems and other software installed in the production environment.
  2. The PCI-DSS requires that access to all systems in the payment processing environment be protected through the use of unique users and complex passwords. Unique user accounts mean that every account used is associated with an individual user, with no generic group accounts used by more than one user. Additionally, any default accounts provided with operating systems, databases and/or devices should be removed, disabled, or renamed as soon as possible. For example, default administrator accounts such as “administrator” (Windows systems), “sa” (SQL/MSDE), and “root” (UNIX/Linux) should be disabled or removed.
    The PCI-DSS standard requires the following password complexity for compliance (often referred to as using “strong passwords”; a small code sketch of these rules follows this list):

    a. Passwords must be at least 7 characters

    b. Passwords must include both numeric and alphabetic characters

    c. Passwords must be changed at least every 90 days

    d. New passwords can’t be the same as the last 4 passwords

    The PCI-DSS user account requirements beyond uniqueness and password complexity are as follows:

    a. If an incorrect password is provided 6 times, the account should be locked out

    b. Account lock out duration should be at least 30 min. (or until an administrator resets it)

    c. Sessions idle for more than 15 minutes should require re-entry of username and password to reactivate the session.

    d. Do not use group, shared or generic user accounts

     

  3. PCI DSS applies wherever account data is stored, processed, or transmitted. The primary account number (PAN) is the defining factor in the applicability of PCI DSS requirements: they are applicable if a PAN is stored, processed, or transmitted, and do not apply if it is not. The following table illustrates commonly used elements of cardholder and sensitive authentication data, whether storage of each data element is permitted or prohibited, and whether each data element must be protected. The table is not exhaustive, but illustrates the different types of requirements that apply to each data element.

    Data Element — Storage Permitted — Protection Required
    Cardholder data:
      Primary Account Number (PAN) — Yes — Yes (must be rendered unreadable)
      Cardholder Name — Yes — Yes
      Service Code — Yes — Yes
      Expiration Date — Yes — Yes
    Sensitive authentication data:
      Full Magnetic Stripe Data — No — storage prohibited
      CAV2/CVC2/CVV2/CID — No — storage prohibited
      PIN/PIN Block — No — storage prohibited

     

  4. Remove custom application accounts, user IDs, and passwords before applications become active or are released to customers.
  5. Review custom code prior to release to production or customers in order to identify any potential coding vulnerabilities.
  6. Keep separate development/test and production environments.
  7. Reducing the number of personnel with access to the production environment and cardholder data minimizes risk and helps ensure that access is limited to those individuals with a business need to know.
  8. Production data (live PANs) must not be used for testing or development.
  9. Test data and accounts should be removed from production code before the application becomes active.
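For what it’s worth, the password complexity rules from item 2 translate almost mechanically into code. A hedged C# sketch; the class and method names are inventions for illustration, and a real implementation would also enforce expiry, history, and lockout through your identity store:

    using System.Linq;

    static class PciPasswordPolicy
    {
        // Checks only the PCI-DSS complexity rules quoted above:
        // at least 7 characters, containing both letters and digits.
        // The 90-day expiry and last-4 history rules need persisted
        // state and are out of scope for this sketch.
        public static bool MeetsComplexity(string password)
        {
            if (string.IsNullOrEmpty(password) || password.Length < 7)
                return false;

            return password.Any(char.IsLetter) && password.Any(char.IsDigit);
        }
    }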

PCI Compliance Overview

 

PCI DSS version 2.0 must be adopted by all organizations with payment card data by 1 January 2011, and from 1 January 2012 all assessments must be against version 2.0 of the standard.

It specifies twelve requirements for compliance, organized into six logically related groups called “control objectives”.

Control Objectives — PCI DSS Requirements

Build and Maintain a Secure Network
  1. Install and maintain a firewall configuration to protect cardholder data
  2. Do not use vendor-supplied defaults for system passwords and other security parameters

Protect Cardholder Data
  3. Protect stored cardholder data
  4. Encrypt transmission of cardholder data across open, public networks

Maintain a Vulnerability Management Program
  5. Use and regularly update anti-virus software on all systems commonly affected by malware
  6. Develop and maintain secure systems and applications

Implement Strong Access Control Measures
  7. Restrict access to cardholder data by business need-to-know
  8. Assign a unique ID to each person with computer access
  9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
  10. Track and monitor all access to network resources and cardholder data
  11. Regularly test security systems and processes

Maintain an Information Security Policy
  12. Maintain a policy that addresses information security

Eligibility for PA-DSS Validation:

Applications will NOT be considered for PA-DSS validation if ANY of the following points is true:

  1. The application is released as a beta version.
  2. The application handles cardholder data, but the application itself does not facilitate authorization or settlement.
  3. The application facilitates authorization or settlement, but has no access to cardholder data or sensitive authentication data.
  4. The application requires source code customization or significant configuration by the customer (as opposed to being sold and installed “off the shelf”) such that the changes impact one or more PA-DSS requirements.
  5. The application is a back-office system that stores cardholder data but does not facilitate authorization or settlement of credit card transactions. For example:
    • Reporting and CRM
    • Rewards or fraud scoring
  6. The application is developed in-house and only used by the company that developed it.
  7. The application is developed and sold to a single customer for the sole use of that customer.
  8. The application functions as a shared library (such as a DLL) that must be implemented with another software component in order to function, and is not bundled (that is, sold, licensed and/or distributed as a single package) with the supporting software components.
  9. The application is a single module that is not submitted as part of a suite and does not facilitate authorization or settlement on its own.
  10. The application is offered only as software as a service (SaaS) and is not sold, distributed, or licensed to third parties.
  11. The application is an operating system, database, or platform, even one that may store, process, or transmit cardholder data.
  12. The application operates on a consumer electronic handheld device (e.g., smartphone, tablet, or PDA) that is not solely dedicated to payment acceptance for transaction processing.

For custom software development projects, “Requirement 6: Develop and maintain secure systems and applications” is the most applicable section, and it needs to be taken care of during system design and coding.

PCI Compliance Introduction

The Payment Card Industry (PCI) has developed security standards for handling cardholder information, published as the PCI Data Security Standard (PCI-DSS). The security requirements defined in the DSS apply to all members, merchants, and service providers that store, process, or transmit cardholder data.

The PCI-DSS requirements apply to all system components within the payment application environment, which is defined as any network device, host, or application included in, or connected to, a network segment where cardholder data is stored, processed, or transmitted.

The purpose of this document is to help guide software development on projects that require PCI-DSS compliance.

This document also explains the Payment Card Industry (PCI) initiative and the Payment Application Data Security Standard (PA-DSS) guidelines. It then provides specific installation, configuration, and ongoing management best practices for a PA-DSS certified application operating in a PCI-DSS compliant environment.

Difference between PCI-DSS Compliance and PA-DSS Validation:

As a software vendor, our responsibility is to ensure that our solution conforms to industry best practices for handling, managing, and storing payment-related information.

PA-DSS is the standard against which solutions are tested, assessed, and certified.

PCI-DSS compliance is then later obtained by the merchant, and is an assessment of the end client’s actual server (or hosting) environment.

Obtaining “PCI-DSS Compliance” is the responsibility of the merchant and the client’s hosting provider, working together, using a PCI-DSS compliant server architecture with proper hardware and software configurations and access control procedures.

The PA-DSS certification is intended to ensure that a solution will help you achieve and maintain PCI-DSS compliance with respect to how it handles user accounts, passwords, encryption, and other payment-related information.


Combination Therapy for Requirements using BDD and UML

Problem

Generally, we write requirements in a use case structure. The use case structure gives us all the required elements, but by design it is a vague format devised around use case diagrams. Every software professional has his or her own flavor of writing use cases; even the sections of the use case structure vary from person to person. The intrinsic details of the parts of a use case (the use case description, preconditions, post-conditions, etc.) are vaguely defined. Writing use cases in a clear format is also a skill, and it depends upon the person responsible for collecting requirements. Use cases are still useful in mature organizations, but they are not so effective when we do not want big upfront designs. So the dilemma here is how to collect requirements; the new approach should be better than standard use case structures and should be fairly objective.

There is also a second angle to this use case discussion. Requirements have a big impact on the business and cost side of IT projects. Many times, unclear communication between the IT vendor and the client over requirements causes a lot of rework, and hence cost to the client. From the client’s viewpoint, it is difficult to understand use-case-style language, and there is a chance of missing or assuming requirements/scenarios, which becomes a cost (in terms of money or time) for them afterward. The situation gets worse in fixed-cost projects. I have experienced this bitter pill many times.

Solution

So the solution I am proposing here is Behavior Driven Development (BDD)-style requirements collection, even for people who are not practicing BDD. For the past few months, I have been wandering in the TDD/BDD forest. Now I am going ahead with NBehave, SpecFlow, and Gherkin, which have changed a whole lot of things for me on the coding front.

I totally understand that BDD can’t be fully rolled out in some scenarios or organizations [no silver bullet!, lack of expertise, no enthusiasm to learn, the cost of the learning curve and transformation, etc.], but I am sure that we can at least move to the BDD style of requirement writing, which is kind of closer to a silver bullet :). See the following example, which I have taken directly from Dan North’s blog. BTW, Dan is a pioneer of BDD and, I guess, the first to coin the term "BDD". He has also compared the use case format with the BDD style. The interesting angle from my side is to merge the two and take advantage of both styles of requirements collection.

In BDD, requirements are collected in story form, like:

Title (one line describing the story)

Narrative:

As a [role]

I want [feature]

So that [benefit]

Acceptance Criteria: (presented as Scenarios)

Scenario 1: Title

Given [context] And [some more context]…

When [event]

Then [outcome] And [another outcome]…

Scenario 2: …

ACTUAL EXAMPLE:

Story: Account Holder withdraws cash

As an Account Holder

I want to withdraw cash from an ATM

So that I can get money when the bank is closed

Scenario 1: Account has sufficient funds

Given the account balance is $100 And the card is valid And the machine contains enough money

When the Account Holder requests $20

Then the ATM should dispense $20 And the account balance should be $80 And the card should be returned

Scenario 2: Account has insufficient funds

Given the account balance is $10 And the card is valid And the machine contains enough money

When the Account Holder requests $20

Then the ATM should not dispense any money And the ATM should say there are insufficient funds And the account balance should be $10 And the card should be returned

Scenario 3: Card has been disabled

Given the card is disabled

When the Account Holder requests $20

Then the ATM should retain the card

And the ATM should say the card has been retained

Scenario 4: The ATM has insufficient funds
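Since I mentioned SpecFlow and Gherkin above, it is worth noting that these scenarios are not just documentation: each Given/When/Then line can be bound to an executable step definition. Below is a minimal C# sketch for Scenario 1, under the assumption that you use SpecFlow with NUnit; the inlined withdrawal logic is a stand-in for a real domain model:

    using TechTalk.SpecFlow;
    using NUnit.Framework;

    [Binding]
    public class AccountHolderWithdrawsCashSteps
    {
        private int _balance;
        private bool _cardValid;
        private bool _machineHasMoney;
        private int _dispensed;

        [Given(@"the account balance is \$(\d+)")]
        public void GivenTheAccountBalanceIs(int balance)
        {
            _balance = balance;
        }

        [Given(@"the card is valid")]
        public void GivenTheCardIsValid()
        {
            _cardValid = true;
        }

        [Given(@"the machine contains enough money")]
        public void GivenTheMachineContainsEnoughMoney()
        {
            _machineHasMoney = true;
        }

        [When(@"the Account Holder requests \$(\d+)")]
        public void WhenTheAccountHolderRequests(int amount)
        {
            // Domain logic inlined for the sketch; a real project would
            // drive an Account/Atm object from your model instead.
            if (_cardValid && _machineHasMoney && _balance >= amount)
            {
                _dispensed = amount;
                _balance -= amount;
            }
        }

        [Then(@"the ATM should dispense \$(\d+)")]
        public void ThenTheAtmShouldDispense(int amount)
        {
            Assert.AreEqual(amount, _dispensed);
        }

        [Then(@"the account balance should be \$(\d+)")]
        public void ThenTheAccountBalanceShouldBe(int balance)
        {
            Assert.AreEqual(balance, _balance);
        }
    }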

I think that along with the story, if we have a mockup/screenshot (of course with Balsamiq Mockups!) of the proposed screens and controls, plus the data type range chart we used to have with our existing UML format, we can move towards clear requirements.

Benefits

This will benefit us in the following ways:

  1. It will help build a common understanding between the client and the IT vendor about what has been covered as a requirement and what has not.
  2. The client will find it easy to go through the requirements, since they are written in a very structured fashion and in plain English.
  3. Organization-wide, requirements will be captured in the same fashion, and we can estimate based on stories and judge how simple or complex a story is from the number and complexity of the scenarios it has. Hence it becomes really easy to apply statistical process control to estimation.
  4. Change requests are dealt with by adding/removing/updating features/stories/scenarios. My observation here is that usually only the scenarios change; in very rare cases are features or stories changed.
  5. The testing team will reap the greatest benefits here, since they can write these stories as per their understanding and take part in the requirements engineering process in a very active way, for example by finding missing or unclear scenarios.
  6. The testing team does not have to write any other test cases; the stories serve as live requirements as well as test cases.
  7. Unit testing will be easier for developers, since they now clearly know what kinds of scenarios they have to cover, which is a bit difficult and time-consuming with the use case format.

Let me know your thoughts, especially BDD and UML experts…

Many good points about what should be in a story are noted by BDD pioneer Dan North in his post (http://dannorth.net/whats-in-a-story).

It’s really refreshing for me to see the BDD light. I will cover an example of my solution, and more on the requirements collection thought process that I follow, in the next post.

 
