SonarQube Code Quality Platform

As an architect, my core responsibility is to improve the quality of software delivery. One of the core practices for maintaining quality is the code review. I am always inclined towards, and insistent upon, offline manual code reviews. But with distributed teams and in large settings, it is very difficult to keep up with the pace of development that happens over a week, or even day by day. Another aspect is that, at an organizational level, the consistency of manual code reviews is difficult to maintain. Also, each code review has a particular focus area: system components written in different programming languages, performance, maintainability, security, unit testing, design reviews, and so on. I have seen many small and mid-sized organizations still struggling with the latter part, institutionalizing delivery quality across their projects and products.

My quest for institutionalizing software quality landed me on the SonarQube (also referred to as "Sonar") platform. It is an open platform to manage code quality. It is developed in Java but fairly easy to install and use, and it also works in environments where the system is made up of components written in different programming languages. I have been using SonarQube for the last two years and am pretty happy with the results. It is one of the tools that greatly helps bring the much-needed organization-wide or product-wide focus on technical quality and engineering.

Without much ado, let's see how you can get hands-on with SonarQube and help your organization set up a code quality platform. We are going to use SonarQube version 5.0.1.

SonarQube Installation and Configuration

Pre-requisites

1. Download and install Java JDK 1.7. Check whether your operating system is 32-bit or 64-bit and select the appropriate package. This is important because, based on the JDK version, we need to select the matching SonarQube package later.

Url: http://www.oracle.com/technetwork/java/javase/downloads/java-se-JDK-7-download-432154.html

2. Set JAVA_HOME to c:\ as shown in the following snapshot. Make sure to set it in the "User variables" section, and also add it to the PATH variable under "System variables".

Environment Variables

Please DO NOT FORGET to restart your system after creating or modifying any environment variable entries.

Download the following software/libraries

FxCop 10

http://download-codeplex.sec.s-msft.com/Download/Release?ProjectName=fxcopinstaller&DownloadId=821386&FileTime=130407655516000000&Build=20959

StyleCop 4.7.49.0

http://download-codeplex.sec.s-msft.com/Download/Release?ProjectName=stylecop&DownloadId=323236&FileTime=130408175287730000&Build=20959

System Properties Window

OpenCover 4.5.3522

https://github.com/OpenCover/opencover/releases/download/4.5.3522/opencover.4.5.3522.msi

Please note: while installing OpenCover, choose the advanced option and select installation for "all users". Set the installation path to C:\Program Files (x86)\OpenCover. With the default options, OpenCover installs under c:\users\<your username>\.

SonarQube v.5.0.1

http://dist.sonar.codehaus.org/sonarqube-5.0.1.zip

Extract the zip to c:\sonar

Sonar Runner v. 2.4

http://repo1.maven.org/maven2/org/codehaus/sonar/runner/sonar-runner-dist/2.4/sonar-runner-dist-2.4.zip

Extract the zip to c:\sonarrunner.

Then create a new environment variable under "User variables" named "SONAR_RUNNER_HOME" and set its value to "C:\sonarrunner".

Also, edit the PATH variable under "System variables" and append the value ";C:\sonarrunner\bin;" as shown in the following snapshot.

Please DO NOT FORGET to restart your system after creating or modifying any environment variable entries.
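
To quickly verify that the variables were picked up, you can open a new Command Prompt after the restart and check them (a simple sanity check; the values should point to the folders you used above):

echo %JAVA_HOME%
echo %SONAR_RUNNER_HOME%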

Download Sample Project

https://github.com/SonarSource/sonar-examples/archive/master.zip

This archive has various sample projects for Sonar which are good for demo purposes. Extract the zip anywhere; since we are only interested in .NET projects, take the CSharp folder and copy it to the C drive, i.e. your project is at "c:\csharp".

To Run the SonarQube Server

1. Go to the "C:\sonar\bin" folder. Choose the folder appropriate to your OS. For example, I am running the Windows 8.1 64-bit OS, so I have chosen the "windows-x86-64" folder. If you are working on a Windows 32-bit edition, choose the "windows-x86-32" folder and open it.

2. From the Command Prompt, run "StartSonar.bat". This will keep the SonarQube server running. If everything goes smoothly, you will see output like the following.

StartSonar.bat

If you face any error at this stage, please check whether you have installed the correct JDK version (32-bit/64-bit). Also, verify that all environment variables are correct.

Now you can visit http://localhost:9000 and you will be greeted with the default SonarQube page. Log in using the "admin/admin" credentials. Keep the command-line SonarQube process running for as long as you want the server to run.

Configuring C# Plugins and Mapping Paths

1. Once you log in, go to the Update Center by navigating as follows: on the top right corner, select Settings -> (on the left navigation pane) System -> Update Center -> Available Plugins.

Install the plugins mentioned in the following snapshot. You will see the Install button once you click on the plugin name. For example, the screenshot below is for the Java plugin.

Mapping Path for SonarQube Rule Engine

Java SonarQube Plugin Installation

Sonar Dashboard

2. After installation, we need to set the local paths for the plugin tools (FxCop, StyleCop and OpenCover). Navigate to Settings -> General Settings -> select the category C# -> Code Analysis/C#.

3. Set “Path to FxCopCmd.exe” to “C:/Program Files (x86)/Microsoft Fxcop 10.0/FxCopCmd.exe” and save the settings.

4. Now go to the Rules menu, click on "Activate In" and select the "Sonar Way" quality profile. If you don't see any profile in the dropdown, make sure the "Sonar Way" profile is set as the default quality profile under the Quality Profiles tab.

SonarQube Quality Profile

How to Run SonarQube

SonarQube works on the concept of a server and a client. Each project/solution/codebase has a client settings file called "sonar-project.properties". It is a convention and best practice to keep this file in the same folder as the Visual Studio solution file.

We need to author this "sonar-project.properties" file for each Visual Studio solution, for example as shown below.
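
The sample project already ships with such a file. As a minimal sketch of what one might look like (the exact property names beyond the standard projectKey/projectName/projectVersion/sources depend on the C# plugin version you installed, so verify them against the plugin documentation):

# sonar-project.properties (illustrative example)
sonar.projectKey=csharp-playground
sonar.projectName=CSharp Playground
sonar.projectVersion=1.0
sonar.sources=.
sonar.language=cs
sonar.sourceEncoding=UTF-8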

Running Code Quality Analysis for the Sample Project

1. Now, from the command prompt, navigate to "C:\csharp" and execute the command "sonar-runner".

2. This will take a few minutes and give you an EXECUTION SUCCESS message at the end. If there are errors, you can rerun the process with "sonar-runner -e" to view the full error details.

3. Now browse to http://localhost:9000/ and select "CSharp Playground" to view the dashboard for the sample project. The dashboard should look as shown below.

Sonar Final Dashboard

Please note that for this post I have configured a bare-bones SonarQube. As you keep analyzing the same codebase in the future, it will also show you comparisons, trends, and a lot more data than is displayed in the diagram above. There is a great plugin ecosystem that gives you various perspectives on your codebase through SonarQube plugins. For more interesting statistics and a demo, you can also visit http://nemo.sonarqube.org.

Things To Do After SonarQube Code Quality Analysis

Please be mindful that SonarQube gives you a much-needed platform for code quality, but do not fall into the trap of becoming obsessed only with the numbers.

Code quality is a journey towards technical excellence, and the SonarQube platform or the Sonar code quality index gives just one important perspective on it.

Tapari Meetings

This morning I was checking my Facebook account and came across a Harvard Business Review article, "How to Run a Meeting" (https://hbr.org/1976/03/how-to-run-a-meeting). I was overwhelmed by the sheer length of the article. I must admit, I hate those long, unfruitful, agenda-less meetings. I think everybody does, don't they?

I started wondering why HBR needed to write such a long article on such a simple thing that we assume everybody knows. It is just a matter of checking everyone's calendar, sending them a meeting invite with the subject, venue, etc., and you are done. The meeting itself is also straightforward: one person is usually the host and puts forth the agenda, we discuss, and at the end of the meeting we share the minutes and action points. What is so difficult about holding meetings?

The problem is not in holding meetings; it is all about how we look at this tool of communication and how effectively it is used in the corporate world. If you look at the HBR article again, it was written in 1976, yet the advice still applies, and the HBR editors still thought it relevant to today's world. That also means not much has changed in the last 40 years. We all still curse ill-executed meetings. Traditionally executed meetings are too often failures and count as a loss of productivity. Can we do something differently? Can we change how we conduct meetings?

My views on how to conduct meetings have been a bit different. I am a big believer in 37signals' philosophy that "meetings are toxic", but I am also not able to avoid meetings completely. For the last 5-6 years, I have been trying something different for meetings with my teams. After practicing it with various teams and companies of different sizes in India, I think it has made many of our meetings more productive and kept teams engaged. Most importantly, we saved a lot of time that we might otherwise have wasted conducting meetings in the traditional sense.

My method is simple. I call it "Tapari meetings". All the team members go together for tea/coffee outside the office to a nearby tapari (a tea stall on the street or around a street corner in India). If you don't want to go outside the office for some reason, then go to the office pantry or gather around the coffee machine and, without any formal setting, start to discuss the topics everyone wants to discuss. The topics can vary from what we need to do today to anything relevant to the project or work. If the team members are still relatively new to each other, you can start with common non-work topics like sports, bikes, cars, tourist places, or anything at all. The important point is that, as the team proceeds with this daily routine, it develops its own rules; you do not have to worry about setting rules in stone. In my experience, developers and test engineers have nailed difficult bugs just by discussing them in such informal meetings, and even executives have taken big decisions this way.

Advantages of such “Tapari Meetings”:

  1. It keeps your meetings cheerful and people awake, since people walk to the tapari or the coffee machine.
  2. The meeting runs only for as long as people are having their tea/coffee and are interested in the discussion.
  3. If you don't have a set agenda, you don't waste other people's time; at the very least they get their tea or coffee and come back to their desks.
  4. Soft issues can also be resolved, since there can always be "one more cutting de na bhaiyya" (one more tea, please) to sort them out.
  5. People speak only to the point and think on their feet, since they have no PPTs or papers of any kind.
  6. Since it is the only kind of meeting where you have to remember things afterwards, my observation is that people tend to conclude with action points and are not interested in receiving minutes of the meeting later.

I understand that there will be a few people who will always question such drastic changes, like having a meeting outside without any dashboards, PPTs or even paper, but I request you to try this method for a few weeks and you will see the results.

The failure of our (perhaps India-specific) traditional office system is highlighted when we assume work only happens when people are at their desks. Surprisingly, even the President of the USA and the Prime Minister of India can work out their deals over tea and a walk. Do you think your work is more important and more complex than theirs?

Let me know your thoughts.

Difference between Log Shipping and Database Mirroring

This post is a kind of self-study notes.

Log shipping

  • The primary server, secondary server and monitor server are the components in a log shipping setup.
  • The monitor server is optional.
  • Log shipping is a manual failover process.
  • There is no automatic application connection redirection; connections have to be redirected manually.
  • Log shipping can have multiple secondary databases for synchronization.
  • There is data transfer latency.
  • In log shipping, a secondary database can be used as a reporting solution.
  • Both committed and uncommitted transactions are transferred to the secondary database.
  • Log shipping supports both the bulk-logged recovery model and the full recovery model.

With Log Shipping:

  • Data Transfer: T-Logs are backed up and transferred to secondary server
  • Transactional Consistency: All committed and uncommitted transactions are transferred
  • Server Limitation: Can be applied to multiple stand-by servers
  • Failover: Manual
  • Failover Duration: Can take more than 30 minutes
  • Role Change: Role change is manual
  • Client Re-direction: Manual changes required

 

Database Mirroring

  • The principal server, mirror server, and witness server are the components involved in a database mirroring setup.
  • The witness server is optional, but it is a must for setting up automatic failover, since the witness is a watchdog instance that checks whether the principal server is working.
  • Database mirroring is an automatic failover process.
  • Application connections can be redirected automatically with proper configuration.
  • Database mirroring cannot have multiple destination databases for mirroring the principal database; exactly one mirror database synchronizes with the principal database.
  • There is no data transfer latency.
  • In database mirroring, the mirror database cannot be used directly as a reporting solution. If the need arises, a database snapshot should be created on the mirror to support reporting.
  • Only committed transactions are transferred to the mirror database.
  • Mirroring supports only the full recovery model.

With Database Mirroring:

  • Data Transfer: Individual T-Log records are transferred using TCP endpoints
  • Transactional Consistency: Only committed transactions are transferred
  • Server Limitation: Can be applied to only one mirror server
  • Failover: Automatic
  • Failover Duration: Failover is fast, sometimes < 3 seconds but not more than 10 seconds
  • Role Change: Role change is fully automatic
  • Client Re-direction: Automatic, provided the client connection string specifies a failover partner (supported by ADO.NET 2.0 and later); see the sketch below
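
As an illustrative sketch (the server and database names below are made up), automatic client redirection works when the application's connection string names the mirror as a failover partner:

using System;
using System.Data.SqlClient;

class MirroringConnectionDemo
{
    static void Main()
    {
        // "Failover Partner" tells ADO.NET which server to try
        // if the principal (Data Source) is unavailable.
        var connectionString =
            "Data Source=PRINCIPAL01;" +
            "Failover Partner=MIRROR01;" +
            "Initial Catalog=SalesDb;" +
            "Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();   // redirected transparently after a failover
            Console.WriteLine("Connected to: " + connection.DataSource);
        }
    }
}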

 

Please note that the database mirroring feature is deprecated (it has been marked for future removal since SQL Server 2012), and Microsoft recommends the AlwaysOn Availability Groups feature instead of log shipping or database mirroring from that version onwards.

Book Notes: “The Phoenix project: A Novel about IT, DevOps, and helping Your Business Win”

I have not posted anything for many days, since I was almost lost, but now the wait is over. Over the last weekend and this weekend, I managed to read through a really interesting book: "The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win" by Gene Kim, Kevin Behr and George Spafford.

The book falls almost in line with Eliyahu Goldratt's "The Goal", but the twist here is that it is applied to the IT industry. It is written around a fast-paced plot about a sinking organization with chaotic IT and business relationships, and how it overcomes the obstacles and continually improves with the help of the principles (the Three Ways) laid out in the book.

I read "The Goal" way back, before I was really able to digest the material and principles laid out in it, and I always struggled to correlate it with the IT industry, but 'The Phoenix Project' does an excellent job here. For my future reference and to save time, I have put down the notes below, which are taken from the book.

  • When there is chaos, start prioritizing and estimating work. While doing so, you cannot stay away from fighting fires.
  • Knowing is better than not knowing things.
  • Stay focused on the wider goals.
  • WIP (work in progress) is a silent killer. Therefore, one of the most critical mechanisms in the management of any plant is job and materials release. Without it, you can't control WIP.
  • Apparently even undisciplined mobs can get lucky too.
  • Read air traffic control books; highlight the ubiquitous terminology used between air traffic controllers and aircraft pilots, and note the accidents that happen when it breaks down. – Useful for DDD.
  • Whiteboards, paper, physical movement and physical presence *engage* people and increase their *involvement* in their projects, thus increasing the success rate of projects.
  • As a consultant, my goal is to observe and seek to understand.
  • The project is getting delayed, all tasks by the dev team are marked as completed, and QA is still finding twice as many broken features/defects as are getting fixed – a classic situation in a badly run project.
  • Processes are supposed to protect people from distractions and help them deliver their core objectives.
  • "I think this might have happened" or "I think the bugs are because of such and such a thing" – such statements are sure signs of problems going unnoticed. They show that we are flying without a compass (data) and a map (direction).
  • Interesting term! FUBAR = fucked up beyond all recognition.
  • Get a management gut check from my team.
  • There are four types of work in IT Operations: 1. Business projects 2. Internal IT projects 3. Changes and maintenance work 4. Unplanned work
  • Prioritization helps only up to a point; we need to identify our constraint (the real bottleneck in the entire flow of operations), guard it from unscheduled work, and keep it busy with top-priority work.
  • Focus on work centers. A work center is made up of a man, a machine, methods and measures.
  • After the chaos and constraints are figured out, work on the single most important item/project required for survival.
  • Once you get some success, plan for all projects which do not involve the constraint/constrained work centers.
  • Improving daily work is even more important than doing daily work.
  • Ensure that we're continually putting tension into the system, so that we're continually reinforcing habits and improving something. Resilience engineering tells us that we should routinely inject faults into the system, doing so frequently, to make them less painful. This is called the improvement kata.
  • Repetition creates habits, and habits enable mastery.
  • Our goal is to maximize flow.
  • You win when you protect the organization from putting meaningless work into the IT system. You win even more when you can take meaningless work out of the IT system.
  • Avoid scoping errors.
  • Create work centers and lanes of work.
  • Understand upstream and downstream processes.
  • Color coding of cards:
    • Purple cards are for changes supporting one of the top five business projects; otherwise, they are yellow.
    • Green cards are for internal IT improvement projects. [Give 20% of cycle time to these]
    • Pink cards are blocked tasks that need to be reviewed twice a day.

Make sure that there is the right balance of purple and green cards in play.

  • Improving something anywhere not at the constraint is an illusion.
  • How to prioritize projects?
    • Do they increase the flow of project work through the IT organization?
    • Do they increase operational stability or decrease the time required to detect and recover from outages or security breaches?
    • Do they increase specified constraint’s capacity?
  • Projects that decrease your organization's/major project's throughput, swamp the most constrained resource in the organization, or decrease scalability, availability, survivability, sustainability, security or supportability should be given low priority or discarded entirely, if possible.
  • Managing the IT operations production schedule is one of the jobs of IT Operations top management.
  • Wait time = % of time the resource is busy / % of time the resource is idle
  • Wait time depends upon resource utilization, i.e. if a resource is 90% busy then the wait time is 90% / 10% = 9 units of time, e.g. 9 hours.
  • Create constant feedback loops from IT operations back to development, designing quality into the product at the early stages.
  • You might have deployed an amazing technology (virtualization/cloud), but because you haven’t changed the way you work, you haven’t actually diminished the limitation.
  • The flow of work goes in one direction only: forward.
  • Takt time = the cycle time needed in order to keep up with customer demand. If any operation in the flow of work takes longer than the takt time, you will not be able to keep up with customer demand. So in IT, if your deployment time or environment setup time is greater than the takt time, you will have a problem.
  • Dev and Ops are more and more important, and their unified goal is to serve the business. So, instead of fighting with each other, they need to be more collaborative.
  • Read book: Continuous Delivery by Jez Humble and Dave Farley.
  • Business agility is not just about raw speed. It's about how good you are at detecting and responding to changes in the market and being able to take larger and more calculated risks. It's about continual experimentation.
  • Read about Scott Cook's experiments at Intuit.
  • The way to beat the competition is to out-experiment them.
  • Features are always a gamble; only ten percent will deliver the desired benefits, so the faster you can get those features to market and test them, the better. Incidentally, you also pay back the business faster for the use of capital, which means the business starts making money faster.
  • For the above reason, you need to target ten or more deploys per day to the production environment.
  • Value stream mapping is quite a useful tool for discovering the activities that add value and those which are waste.
  • BIGGEST LEARNING FOR ME: DESIGN YOUR SYSTEMS FOR IT OPERATIONS!! Build as many feature knobs and controls as possible with which we can switch features on and off. Learn about dark launches and canary releases as soon as possible.
  • To routinely improve things, inject large faults into the system. This has been followed in Apple Mac OS development and at Netflix as well; these projects are called the "Simian Army" / "Chaos Monkey". Read more on these experiments and improvements. This creates a culture that reinforces the value of taking risks and learning from failures, and the need for repetition and practice to create mastery.
  • IT is not merely a department, it is pervasive like electricity.
  • In order to survive, the business and IT can’t make decisions exclusive of each other.

This has been my best read in many months, probably years. I look forward to reading further through a number of books such as Toyota Production System and the DevOps Cookbook, along with all the lean literature, and to practicing improvement katas.

Super Easy SPA with Durandal

I have not been very active on this blog for almost one and a half years. A lot of learning and unlearning has happened in this duration. To revive my blog, I have invited my good friend Akhlesh Tiwari to share his JavaScript expertise with all of us. He happily agreed, and here is the first article on his favorite subject, i.e. Single Page Applications. Please do encourage the new author and share your feedback so that we all can improve.

Durandal is an open-source JavaScript library for SPAs. It is built on libraries like jQuery, Knockout and RequireJS, so if you are already familiar with these libraries then you can very easily start making amazing single page apps.

Durandal is a technology-agnostic SPA framework, so you can use it with any backend technology or build a pure HTML/JavaScript app. To get started with Durandal, all you need is to get the JavaScript libraries and modules and follow the folder structure. For this tutorial we take ASP.NET MVC4 as the backend.

Getting Started with Durandal in .NET

In .NET we can download the Durandal template as a VSIX file. Durandal is also available through NuGet, or you can install the Durandal starter kit with the command Install-Package Durandal.StarterKit, but I will use a manual setup for Durandal for a better understanding of the framework. For the manual setup, just download all the startup files. The startup project has basic Durandal examples and a navigation setup that can be modified as per your requirements. After downloading the startup project, we will go step by step to create a Durandal app.

Step-1 Create MVC Project

First we will create an MVC4 internet application in Visual Studio named DurandalApp.

New Project

You can also take an empty MVC project, but then you have to write your own controller and starting cshtml page.

Step-2 Folder Structure

Durandal follows a folder structure convention to create the application, so here is the application organization:

Durandal Folder Structure

I recommend creating an "App" folder within the project as shown above. Durandal applications are built as a collection of AMD modules; in fact, Durandal itself is just a set of modules. Here is what each folder is used for:

"viewmodels" folder – contains the application-specific code (js files).

"views" folder – contains the application-specific views (html).

"main.js" – contains all of the JavaScript startup code for your app, including the route configuration, module configuration, etc. Your app execution always starts with main.js, which is referenced by the RequireJS script tag in the index.html file (.cshtml for .NET).

After setting up this folder structure, copy the Durandal library under the Scripts folder of your MVC project (you can keep the Durandal library anywhere as per your requirements; I keep it in the Scripts folder because it is just like a third-party JavaScript library). The Durandal library has all the core modules.

Durandal library folder structure

We also need RequireJS and Knockout (jQuery is optional), so I have added two more folders under the lib folder. These folders contain the respective JS libraries.

Step-3 Index.cshtml

A Durandal app is a single page app. When you navigate between pages, you are not navigating to new pages on the server; instead, you are loading new virtual pages into the one-and-only server-side view (Index.cshtml). For this sample I have created an MVC HomeController.

namespace DurandalApp.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }
}

So it is just typical MVC code. I created the following server-side MVC view; this is the one-and-only server-side view used by the application.

Index.cshtml

<div id="applicationHost">
    <div>
        <div>
            Sample App Durandal
        </div>
        <i></i>
    </div>
</div>

<script type="text/javascript" src="../Scripts/lib/require/require.js" data-main="/App/main"></script>

The “applicationHost” is where your app’s views will live. Below that is the script tag that references RequireJS. It points to our application’s entry point, declared in the data-main attribute. At runtime, this resolves to the main.js file.

In layout.cshtml we will set up the CSS and JS libraries. @RenderBody() has nothing special; it just renders the cshtml view.

<!DOCTYPE html>
<html lang="en">
<head>
    <link rel="stylesheet" href="Content/bootstrap/css/bootstrap.css" />
    <link rel="stylesheet" href="Content/bootstrap/css/bootstrap-responsive.css" />
    <link rel="stylesheet" href="Content/font-awesome/css/font-awesome.css" />
    <link rel="stylesheet" href="Scripts/lib/durandal/css/durandal.css" />
    <link rel="stylesheet" href="Content/site.css" />
</head>
<body>
    @RenderBody()
</body>
</html>

You can see that I have put all the style sheets under the Content folder and the JavaScript under the Scripts folder. In a Durandal app, durandal.css is used to render the message box and dialog box; the other CSS files are optional.

Step-4 main.js

In the fourth step we will look at the entry point of the app, that is, main.js. It is the first code that gets executed; here you configure Durandal settings and tell the framework to start the application.

main.js

requirejs.config({
    paths: {
        'text': '../Scripts/lib/require/text',
        'durandal': '../Scripts/lib/durandal/js',
        'plugins': '../Scripts/lib/durandal/js/plugins',
        'transitions': '../Scripts/lib/durandal/js/transitions',
        'knockout': '../Scripts/lib/knockout/knockout-2.3.0',
        'bootstrap': '../Scripts/lib/bootstrap/js/bootstrap',
        'jquery': '../Scripts/lib/jquery/jquery-1.9.1'
    }
});

define(['durandal/system', 'durandal/app', 'durandal/viewLocator'], function (system, app, viewLocator) {
    system.debug(true);

    app.title = 'Durandal Starter Kit';

    app.configurePlugins({
        router: true,
        dialog: true,
        widget: true
    });

    app.start().then(function () {
        viewLocator.useConvention();
        app.setRoot('viewmodels/shell', 'entrance');
    });
});

This code shows the main functionality, but it can differ according to the application. Here is the list of the main tasks in main.js:

  1. RequireJS configuration: the first few lines of main.js are used for configuring the Durandal module paths.
  2. Debugging: Durandal has debugging functionality which can be turned on via system.debug(true).
  3. Title: you can set your application title.
  4. configurePlugins: here you tell Durandal which plugins will be available to your app.
  5. app.start(): this actually kicks off the application; it returns a promise which is resolved when the DOM is ready and the framework is prepared for configuration.
  6. useConvention: here we set up our viewLocator with basic conventions.
  7. app.setRoot(): this is what actually causes the DOM to be composed with your application. It points to your main view model (or view). When this is called, Durandal's composition infrastructure is invoked, causing RequireJS to require your root view model, use the viewLocator to locate its view, data-bind them together and inject them into the applicationHost element. Additionally, the 'entrance' transition animation is used to animate the app in.

Step-5 Shell

The shell is just like the master page of a Durandal app, where you put all the static content which you want to remain constant throughout the app. So this is a great place to have the navigation, header, footer, etc. First we will create shell.js in App/viewmodels and shell.html in App/views.

shell.js

define(['plugins/router', 'durandal/app'], function (router, app) {
    return {
        router: router,
        activate: function () {
            router.map([
                { route: '', title: 'home', moduleId: 'viewmodels/home', nav: true }
            ]).buildNavigationModel();

            return router.activate();
        }
    };
});

Shell.html

<div>
    <div>
        <div>
            <ul data-bind="foreach: router.navigationModel">
                <li data-bind="css: { active: isActive }">
                    <a data-bind="attr: { href: hash }, html: title"></a>
                </li>
            </ul>
        </div>
    </div>
    <div data-bind="router: { transition: 'entrance' }"></div>
</div>

When you call setRoot, Durandal requires both the module and the HTML and uses Knockout to data-bind them together; it then injects them into the DOM's applicationHost. As we see in shell.js, the router plugin is used for registering the routes; then we have used the router's navigationModel to dynamically generate the navigation links in shell.html.

Here in shell.html, Knockout is used for view composition, i.e. the router keeps track of the current route, and when the route changes, a new view is composed according to the new route. Here is exactly how it happens:

  1. A route is triggered and the router finds the corresponding module and sets it as active.
  2. The router binding detects that the active module has changed. It examines the value and uses that to find the appropriate view (using the viewLocator).
  3. The module and the located view are data-bound together.
  4. The bound view is inserted into the DOM at the location of the router binding.
  5. If the router binding specifies an animation, it is used to smoothly show the new view.

Step-6 Views and Viewmodels (Final Step)

Each page in the application is composed of a view and a viewmodel. Once you've set up the application as described above, you can extend it by adding a new view in the views folder and a viewmodel in the viewmodels folder. Then you just register it with the router in shell.js, and when you go to that route, the router will locate the viewmodel and compose the view (insert the view into the DOM). For example, I am adding a new page, home.html.

So first create home.html under the views folder and, with the same name, create home.js under the viewmodels folder.

home.js

define(function (require) {
    var app = require('durandal/app');

    return {
        displayName: 'home Page',
        showMessage: function () {
            app.showMessage('This is my first home page!');
        }
    };
});

home.html

<div>
    <h2 data-bind="html: displayName"></h2>
    <button data-bind="click: showMessage">Click Me</button>
</div>

Finally, go to the shell.js module and update the router's mappings whenever you add a new page. In our application we have already added the router mapping for the home page.

define(['plugins/router', 'durandal/app'], function (router, app) {
    return {
        router: router,
        activate: function () {
            router.map([
                { route: '', title: 'home', moduleId: 'viewmodels/home', nav: true }
            ]).buildNavigationModel();

            return router.activate();
        }
    };
});

Note: when you add a new page, update the route mapping in shell.js to register your route. The route property will have a different value for each route.

E.g. if you add two new pages, about and contact, then router.map will have:

router.map([
    { route: '', title: 'home', moduleId: 'viewmodels/home', nav: true },
    { route: 'about', title: 'about', moduleId: 'viewmodels/about', nav: true },
    { route: 'contact', title: 'contact', moduleId: 'viewmodels/contact', nav: true }
]).buildNavigationModel();

Now, run the application (make sure your browser isn't caching resources) and you should see a new navigation option for each page you have added. Click on it and you will navigate to your new page. It's that simple. I hope you like this blog post and will now start your journey with SPAs – Single Page Applications.

Considerations for PCI-DSS Compliant Solution Development – Part 2

For the earlier 9 points, kindly refer to my earlier blog post, Considerations for PCI-DSS Compliant Solution Development – Part 1.

  1. Develop applications based on secure coding guidelines and prevent common coding vulnerabilities in software development processes. Change control should include the following:

    a. Documentation of impact: document the impact of a change in code or customization of the software.

    b. Documented change approval by authorized parties.

    c. Functionality testing to verify that the change does not adversely impact the security of the system.

    d. Back-out procedures.
  2. Testing should be done to catch flaws like SQL injection. Also consider OS command injection, LDAP and XPath injection flaws, buffer overflows, cross-site scripting attacks and cross-site request forgery (CSRF).
  3. Develop all web applications based on secure coding guidelines such as the Open Web Application Security Project guidelines. Review custom application code to identify coding vulnerabilities. Cover prevention of common coding vulnerabilities in software development processes, to include the following:
    • Un-validated input
    • Broken access control (for example, malicious use of user IDs)
    • Broken authentication and session management (use of account credentials and session cookies)
    • Cross-site scripting (XSS) attacks
    • Buffer overflows
    • Injection flaws (for example, structured query language (SQL) injection)
    • Improper error handling
    • Insecure storage (cryptographic or otherwise)
    • Denial of service
    • Security Misconfiguration
    • Insecure Direct Object References
    • Cross-Site Request Forgery (CSRF)
    • Failure to Restrict URL Access
    • Insufficient Transport Layer Protection
    • Unvalidated Redirects and Forwards
  4. SSL protects data that is transmitted between a browser and web server. It is critical that you have SSL enabled on the web server, and this should be among the first steps taken after installation.
    • Web server must be configured to use SSL v3 or TLS v1 protocols with strong encryption (128-bit or longer key is required)
    • Install SSL certificate issued for specified web domain.
  5. PCI compliance requires that you use unique user names and secure authentication to access any PCs, servers, and databases with payment applications and/or cardholder data. This means that you should use different user names/passwords:

    a. For your web hosting account administration area (the web hosting account where your online store is hosted)

    b. For FTP access to the web server

    c. For Remote Desktop Connection to the web server (if available)

    d. To connect to the MySQL server that contains your store data.
  6. Audit trails
    Audit trails/logs should be automatically enabled with the default installation of the software solution. There should be no option to disable audit logging.
    The following types of activity should be logged:

    a. All actions taken by any individual with root or administrative privileges

    b. Initialization of the audit logs

    c. User sign-in and sign-out

    Individual access to cardholder data is not logged, because cardholder data is not stored before and after authentication. Access to audit trails must be provided at the operating system level. Each log event includes (a minimal log-entry sketch in code follows this list):

    1. Type of event

    2. Date and time of event

    3. Username and IP address

    4. Success or failure indication

    5. Action which led to the event

    6. Component which led to the event

  7. Wireless communications

    a. If you use wireless networking to access the software, it is your responsibility to ensure your wireless security configuration follows the PCI DSS requirements.

    b. Personal firewall software should be installed on any mobile and employee-owned computers that have direct access to the internet and are also used to access your network.

    c. Change wireless vendor defaults, including but not limited to, wired equivalent privacy (WEP) keys, the default service set identifier (SSID), passwords and SNMP community strings. Disable SSID broadcasts. Enable WiFi Protected Access (WPA and WPA2) technology for encryption and authentication when WPA-capable.

    d. Encrypt wireless transmissions by using WiFi Protected Access (WPA or WPA2) technology, IPSEC VPN, or SSL/TLS.

    e. Never rely exclusively on wired equivalent privacy (WEP) to protect confidentiality and access to a wireless LAN. If WEP is used, do the following:

    f. Use a minimum 104-bit encryption key and 24-bit initialization value

    g. Use ONLY in conjunction with WiFi protected access (WPA or WPA2) technology, VPN, or SSL/TLS

    h. Rotate shared WEP keys quarterly (or automatically if the technology permits)

    i. Rotate shared WEP keys whenever there are changes in personnel with access to keys

    j. Restrict access based on media access code (MAC) address.

    k. Install perimeter firewalls between any wireless networks and the cardholder data environment, and configure these firewalls to deny any traffic from the wireless environment or to control any traffic if it is necessary for business purposes.

  8. Remote access
    The software provides web-based access using two-factor authentication based on one-time PIN codes.

    a. If you enable remote access to your network and the cardholder data environment, you must implement two-factor authentication. Use technologies such as remote authentication and dial-in service (RADIUS) or terminal access controller access control system (TACACS) with tokens, or VPN (based on SSL/TLS or IPSEC) with individual certificates. You should make sure that any remote access software is securely configured by keeping in mind the following:

    b. Change the default settings in the remote access software (for example, change default passwords and use unique passwords for each customer)

    c. Allow connections only from specific (known) IP/MAC addresses

    d. Use strong authentication or complex passwords for logins

    e. Enable encrypted data transmission

    f. Enable account lockout after a certain number of failed login attempts

    g. Configure the system so a remote user must establish a Virtual Private Network (“VPN”) connection via a firewall before access is allowed

    h. Enable any logging or auditing functions

    i. Restrict access to customer passwords to authorized reseller/integrator personnel

    j. Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from backup).
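
As an illustration of the audit-trail fields listed in point 6 above, here is a minimal, hypothetical C# sketch of a log entry; the class and property names are my own for illustration and are not mandated by PCI DSS.

using System;

// Hypothetical audit log entry capturing the fields PCI DSS expects
// for each logged event: type, date/time, user and origin, outcome,
// the action and the component which led to the event.
public class AuditLogEntry
{
    public string EventType { get; set; }        // e.g. "AdminAction", "SignIn"
    public DateTime OccurredAtUtc { get; set; }  // date and time of the event
    public string UserName { get; set; }         // who triggered it
    public string IpAddress { get; set; }        // where it came from
    public bool Success { get; set; }            // success or failure indication
    public string Action { get; set; }           // action which led to the event
    public string Component { get; set; }        // component which led to the event

    public override string ToString()
    {
        return string.Format(
            "{0:o} | {1} | {2} | {3} | {4} | {5} | {6}",
            OccurredAtUtc, EventType, UserName, IpAddress,
            Success ? "SUCCESS" : "FAILURE", Action, Component);
    }
}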

Considerations for PCI-DSS Compliant Solution Development – Part 1

The following are considerations for the development and implementation of software solutions in a PCI-DSS compliant environment. These should be treated as functional and/or quality requirements while developing a PCI-DSS compliant solution.

  1. Ensure that all system components and software are protected from known vulnerabilities by having the latest vendor-supplied security patches installed. Install critical security patches within one month of release. This applies to all frameworks as well as the operating systems and other software installed in the production environment.
  2. The PCI-DSS requires that access to all systems in the payment processing environment be protected through the use of unique users and complex passwords. Unique user accounts mean that every account used is associated with an individual user, with no use of generic group accounts shared by more than one user. Additionally, any default accounts provided with operating systems, databases and/or devices should be removed/disabled/renamed as soon as possible. E.g. default administrator accounts such as "administrator" (Windows systems), "sa" (SQL/MSDE), and "root" (UNIX/Linux) should be disabled or removed. The PCI-DSS standard requires the following password complexity for compliance (often referred to as using "strong passwords"; a minimal validation sketch in code appears after this list):

    a. Passwords must be at least 7 characters

    b. Passwords must include both numeric and alphabetic characters

    c. Passwords must be changed at least every 90 days

    d. New passwords can’t be the same as the last 4 passwords

    The PCI-DSS user account requirements beyond uniqueness and password complexity are as follows:

    a. If an incorrect password is provided 6 times the account should be locked out

    b. Account lock out duration should be at least 30 min. (or until an administrator resets it)

    c. Sessions idle for more than 15 minutes should require re-entry of username and password to reactivate the session.

    d. Do not use group, shared or generic user accounts

     

  3. PCI DSS applies wherever account data is stored, processed or transmitted. The primary account number (PAN) is the defining factor in the applicability of PCI DSS requirements: the requirements are applicable if a PAN is stored, processed, or transmitted; if it is not, PCI DSS requirements do not apply. The PCI DSS documentation includes a table illustrating commonly used elements of cardholder and sensitive authentication data, whether storage of each data element is permitted or prohibited, and whether each data element must be protected. That table is not exhaustive, but illustrates the different types of requirements that apply to each data element.

     

  4. Removal of custom application accounts, user IDs, and passwords before applications become active or are released to customers.
  5. Review of custom code prior to release to production or customers in order to identify any potential coding vulnerability.
  6. There should be separate development/test and production environments.
  7. Reduce the number of personnel with access to the production environment and cardholder data; this minimizes risk and helps ensure that access is limited to those individuals with a business need to know.
  8. Production data (live PANs) should not be used for testing or development.
  9. Test data and accounts should be removed from production code before the application becomes active.
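
To make the password complexity rules in point 2 concrete, here is a minimal, hypothetical C# sketch of a check for the "at least 7 characters, both numeric and alphabetic" rule; it is illustrative only and does not cover the history, expiry or lockout requirements.

using System;
using System.Linq;

public static class PasswordPolicy
{
    // Checks only the PCI-DSS complexity rule: minimum 7 characters,
    // containing both alphabetic and numeric characters.
    public static bool MeetsComplexity(string password)
    {
        if (string.IsNullOrEmpty(password) || password.Length < 7)
            return false;

        bool hasLetter = password.Any(char.IsLetter);
        bool hasDigit = password.Any(char.IsDigit);
        return hasLetter && hasDigit;
    }
}

// Example usage:
// PasswordPolicy.MeetsComplexity("abc1234") -> true
// PasswordPolicy.MeetsComplexity("abcdefg") -> false (no digit)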

PCI Compliance Overview

 

PCI DSS version 2.0 must be adopted by all organizations with payment card data by 1 January 2011, and from 1 January 2012 all assessments must be against version 2.0 of the standard.

It specifies the 12 requirements for compliance, organized into six logically-related groups, which are called “control objectives”.

Control Objectives and their PCI DSS Requirements:

  • Build and Maintain a Secure Network
    1. Install and maintain a firewall configuration to protect cardholder data
    2. Do not use vendor-supplied defaults for system passwords and other security parameters
  • Protect Cardholder Data
    3. Protect stored cardholder data
    4. Encrypt transmission of cardholder data across open, public networks
  • Maintain a Vulnerability Management Program
    5. Use and regularly update anti-virus software on all systems commonly affected by malware
    6. Develop and maintain secure systems and applications
  • Implement Strong Access Control Measures
    7. Restrict access to cardholder data by business need-to-know
    8. Assign a unique ID to each person with computer access
    9. Restrict physical access to cardholder data
  • Regularly Monitor and Test Networks
    10. Track and monitor all access to network resources and cardholder data
    11. Regularly test security systems and processes
  • Maintain an Information Security Policy
    12. Maintain a policy that addresses information security

Eligibility for PA-DSS Validation:

Applications won't be considered for PA-DSS validation if ANY of the following points is TRUE:

  1. The application is released in a beta version.
  2. The application handles cardholder data, but the application itself does not facilitate authorization or settlement.
  3. The application facilitates authorization or settlement, but has no access to cardholder data or sensitive authentication data.
  4. The application requires source code customization or significant configuration by the customer (as opposed to being sold and installed "off the shelf") such that the changes impact one or more PA-DSS requirements.
  5. The application is a back-office system that stores cardholder data but does not facilitate authorization or settlement of credit card transactions. For example:
    • Reporting and CRM
    • Rewards or fraud scoring
  6. The application is developed in-house and only used by the company that developed it.
  7. The application is developed and sold to a single customer for the sole use of that customer.
  8. The application functions as a shared library (such as a DLL) that must be implemented with another software component in order to function, but is not bundled (that is, sold, licensed and/or distributed as a single package) with the supporting software components.
  9. The application is a single module that is not submitted as part of a suite, and does not facilitate authorization or settlement on its own.
  10. The application is offered only as software as a service (SaaS) and is not sold, distributed, or licensed to third parties.
  11. The application is an operating system, database or platform, even one that may store, process, or transmit cardholder data.
  12. The application operates on a consumer electronic handheld device (e.g., smartphone, tablet or PDA) that is not solely dedicated to payment acceptance for transaction processing.

For custom software development projects, the "Requirement 6: Develop and maintain secure systems and applications" section is the most applicable and needs to be taken care of during system design and coding.

PCI Compliance Introduction

The Payment Card Industry (PCI) has developed security standards for handling cardholder information in a published standard called the PCI Data Security Standard (PCI-DSS). The security requirements defined in the DSS apply to all members, merchants, and service providers that store, process or transmit cardholder data.

The PCI-DSS requirements apply to all system components within the payment application environment which is defined as any network device, host, or application included in, or connected to, a network segment where cardholder data is stored, processed or transmitted.

The purpose of this document is to help guide software development for projects which require PCI-DSS compliance.

This document also explains the Payment Card Industry (PCI) initiative and the Payment Application Data Security Standard (PA-DSS) guidelines. The document then provides specific installation, configuration, and ongoing management best practices for a PA-DSS certified application operating in a PCI-DSS compliant environment.

Difference between PCI-DSS Compliance and PA-DSS Validation:

As a software vendor, our responsibility is to ensure that our solution conforms to industry best practices when handling, managing and storing payment-related information.

PA-DSS is the standard against which the solution has been tested, assessed, and certified.

PCI-DSS compliance is then obtained later by the merchant, and is an assessment of the end client's actual server (or hosting) environment.

Obtaining "PCI-DSS Compliance" is the responsibility of the merchant and the client's hosting provider, working together, using a PCI-DSS compliant server architecture with proper hardware and software configurations and access control procedures.

The PA-DSS certification is intended to ensure that the solution will help you achieve and maintain PCI-DSS compliance with respect to how it handles user accounts, passwords, encryption, and other payment-related data.

PCI Security Standards Council Reference Documents:

The following documents provide additional detail surrounding the PCI SSC and the related security programs (PA-DSS, PCI-DSS).

Introduction To Specification Pattern

The Specification pattern emerged out of the "Domain Driven Design" movement. It was first identified and articulated by Eric Evans and Martin Fowler. The Specification pattern is based on the method chaining technique and makes beautiful use of fluent interfaces. In method chaining, we get the desired results by operating on an object through a series of methods, each of which returns the same type of object. One example of method chaining is given below:

var Television = televisionFactory.New()
    .SetColor("blue")
    .SetHeight(1)
    .SetLength(2);

With the help of the Specification pattern, business logic and business rule management inside the application can be simplified, the code becomes more readable, and you can remove those ugly "if else" ladders. Specifications immensely help in refactoring existing code so as to consolidate the business rules of the system. A further benefit of specifications is that you can change a business rule either in terms of its hard values or in terms of the business rule composition itself.

Let's take the following code sample inside an Order class, which is a pretty common use case:

public bool HasDiscountWithoutSpecification()
{
	if (SKUs >= 10000 && ModeOfPayment == PaymentMode.CashOnDelivery && ShippingAddress.Country == "USA" && OrderAmount >= 50000 && IsTaxApplicable == false)
	{
		AllowDiscount = true;
	}

	return AllowDiscount;
}

With the Specification pattern we can turn the above code into something like:

public bool HasDiscountWithSpecification()
{
    var spec = new DiscountSpecification();
    return spec.IsSatisfiedBy(this);
}

With the Specification pattern, while we are dealing with the flow of information, the business rule complexity is abstracted away and the code looks pretty simple.

If you explore the code provided with this post further (there is a source code pointer at the end of the post), you will see the beauty of the construction of the specification named "DiscountSpecification". DiscountSpecification has a method called IsSatisfiedBy which utilizes other specifications and builds the business rule for whether a discount is applicable to the specified order. Since DiscountSpecification is built on other specifications, it is a "composite specification"; sometimes this terminology is used to indicate how specifications are constructed.

Here is the code for IsSatisfiedBy method.

public override bool IsSatisfiedBy(Order candidate)
{
	return domesticOrderSpec
		.And(highValueSpec)
		.And(highStockSpec)
		.And(taxableSpec.Not())
		.IsSatisfiedBy(candidate);
}

The beauty of the above code is that, even in the future, if the business rules for discount applicability change, we just need to change the IsSatisfiedBy method in DiscountSpecification and we are done. For example, if the business removes the stock requirement of having 10000 SKUs, then we just need to remove the highStockSpec "And" operation. So the code after the high stock rule is removed will simply be:

public override bool IsSatisfiedBy(Order candidate)
{
	return domesticOrderSpec
		.And(highValueSpec)
		//.And(highStockSpec)
		.And(taxableSpec.Not())
		.IsSatisfiedBy(candidate);
}

And this business rule change will be reflected across the application. Now, consider that the business changes the rule so that the discount will only be given to orders having an order total of more than 50000 USD. Then we just need to reflect this change inside our HighValueSpecification class.

In a similar fashion, we can also add new specifications to satisfy new business rules, as sketched below.
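
To make the mechanics concrete, here is a minimal sketch of what such a composite specification framework might look like. The class names below (Specification, AndSpecification, NotSpecification, HighValueSpecification) are assumptions for illustration and not necessarily the ones used in the sample code on GitHub.

// A minimal composite specification sketch (illustrative only).
public abstract class Specification<T>
{
    public abstract bool IsSatisfiedBy(T candidate);

    public Specification<T> And(Specification<T> other)
    {
        return new AndSpecification<T>(this, other);
    }

    public Specification<T> Not()
    {
        return new NotSpecification<T>(this);
    }
}

public class AndSpecification<T> : Specification<T>
{
    private readonly Specification<T> left;
    private readonly Specification<T> right;

    public AndSpecification(Specification<T> left, Specification<T> right)
    {
        this.left = left;
        this.right = right;
    }

    public override bool IsSatisfiedBy(T candidate)
    {
        return left.IsSatisfiedBy(candidate) && right.IsSatisfiedBy(candidate);
    }
}

public class NotSpecification<T> : Specification<T>
{
    private readonly Specification<T> inner;

    public NotSpecification(Specification<T> inner)
    {
        this.inner = inner;
    }

    public override bool IsSatisfiedBy(T candidate)
    {
        return !inner.IsSatisfiedBy(candidate);
    }
}

// Adding a new business rule is then just another small class, e.g.:
public class HighValueSpecification : Specification<Order>
{
    public override bool IsSatisfiedBy(Order candidate)
    {
        return candidate.OrderAmount >= 50000;
    }
}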

The useful scenarios where the Specification pattern can be used inside an application are as follows:

  1. While Applying Filtering/Search Criteria
  2. Extraction of Business Rules from code
  3. Handling Error logs
  4. Carrying out Unit Testing
  5. Component/specific object selection
  6. Building out some complex Parsing logic

You can find the code sample for this post on GitHub here. I have not included tests with this sample. The code sample also contains a small specification framework that you can reuse. Please note that this is not production code; it is posted on GitHub for explanatory purposes.

(This post was originally posted at http://www.e-zest.net/blog)