PowerShell-based HDInsight blogs from Elastacloud

You can use Hive parameters with HDInsight through PowerShell: http://andyelastacloud.azurewebsites.net/?p=1532

You can customise even transient HDInsight clusters with PowerShell: http://andyelastacloud.azurewebsites.net/?p=1482


Andy at WindowsAzureConf


Speaking at Windows Azure Conf

Check out http://www.windowsazureconf.com!  I (Andy) will be speaking at this great conference on a topic that’s very close to my heart – the integration of devices and the openness of Windows Azure. It’ll be a fun session that I’m giving from Redmond, so hopefully the jetlag will be bearable! 🙂

What is WindowsAzureConf?

On November 14, 2012, Microsoft will be hosting Windows AzureConf, a free event for the Windows Azure community. This event will feature a keynote presentation by Scott Guthrie, along with numerous sessions executed by Windows Azure community members. Streamed live for an online audience on Channel 9, the event will allow you to see how developers just like you are using Windows Azure to develop applications on the best cloud platform in the industry. Community members from all over the world will join Scott in the Channel 9 studios to present their own inventions and experiences. Whether you’re just learning Windows Azure or you’ve already achieved success on the platform, you won’t want to miss this special event.

How do I sign up?

Find out more and sign up here http://windowsazureconf.net.

Agile pathways into the Azure Universe – Configuring the Azure Emulator to work as part of our specification fixtures

Reader Notes

This article is pitched at a highly technical audience who are already working with Azure, StoryQ and potentially Selenium/WebDriver. It primarily builds on a previous article we wrote in this series, in which we explain all the frameworks listed above. If they are unfamiliar to you, we suggest reading through that article first:

Agile pathways into the Azure Universe – Access Control Service [ACS] [http://blog.elastacloud.com/2012/09/23/agile-path-ways-into-the-azure-universe-access-control-service-acs/]

For those who are completely new to test-driven concepts, we also suggest reading the following article as an overview of some of the concepts presented in this series.

Step By Step Guides – Getting started with White Box Behaviour Driven Development [http://blog.elastacloud.com/2012/08/21/step-by-step-guides-getting-started-with-specification-driven-development-sdd/]


In this article we will focus on building a base class which allows the consumer to produce atomic, repeatable Microsoft Azure-based tests which can be run on the local machine.

The proposition is that, given a correctly installed machine with the right set of user permissions and configuration, we can check out fresh source from our source control repository and execute a set of tests to ensure the source is healthy. We can then add to these tests and drive out further functionality safely in our local environment prior to publication to Microsoft Azure.

The one caveat to this proposition is that, due to the nature of Microsoft Azure's cloud-based services, there is only so much we can do in our local environment before we need to provide our tests with a connection to Azure assets (such as ACS [Access Control Service] and Service Bus). It should be noted that various design patterns outside the scope of this article can substitute for some of these elements and provide some fidelity with the live environment. The decision on which route to take on these issues is project specific and will be covered in further articles in coming months.


Our own development setup is as follows:

  • Windows 8 http://windows.microsoft.com/en-US/windows-8/release-preview
  • Visual Studio 2012 http://www.microsoft.com/visualstudio/11/en-us
  • ReSharper 7 http://www.jetbrains.com/resharper/whatsnew/index.html
  • NUnit http://nuget.org/packages/nunit
  • Fluent Assertions http://fluentassertions.codeplex.com/
  • Azure SDK [currently 1.7] http://www.windowsazure.com/en-us/develop/net/

We find that the above combination of software packages makes for an exceptional development environment. Windows 8 is by far the most productive operating system we have used on any hardware stack. JetBrains ReSharper has become an indispensable tool, without which Visual Studio feels highly limited. NUnit is our preferred testing framework; however, you could use MbUnit or xUnit. For those who must stick with a pure Microsoft ALM experience, you could also use MSTest.

Azure Emulator

The Microsoft Azure toolset includes the Azure Emulator. This tool attempts to offer a semi-faithful local experience of a deployed application scenario on Windows Azure, achieved by emulating Storage and Compute on the local system. Unfortunately, due to the connected nature of the Azure platform, in particular the Service Bus element, the Emulator's ability is somewhat limited. In the test-driven world a number of these limitations can be worked around by running in a semi-connected mode (where your tests still have a dependency on Microsoft Azure and a requirement to be connected to the internet) for the essentials that cannot be emulated locally.

With forward thinking, good design and mocking/faking frameworks it is possible to simulate the behaviour of the Microsoft Azure connected elements. In this scenario every decision is a compromise and there is no right or wrong answer, just the right answer at that time for that project and that team.

Even with the above limitations the Emulator is a powerful development tool. It can work with either a local install of IIS or IIS Express. In the following example we will pair the emulator with IIS Express. We firmly believe in reducing the number of statically configured elements a developer needs in order to run a fresh checkout of a given code base from source control.

Task 1 – Configure the emulator arguments in the app.config file

The first task is to set up some configuration entries to allow the framework to run the emulator in a process. These arguments define:

  • Path to the Azure Emulator
  • Path to the application package directory
  • Path to the application service configuration file

The first step is to add an app.config file to our test project.


Note – a relative root can be configured for the CSX and the service configuration file; in this example, to keep things explicit, we have not done this.


The first argument we need to configure is the path to the emulator. On our machine, using SDK 1.7, this is configured as follows:



The second argument we need to configure is the path to the package, for our solution this looks like:



The third argument we need to set up is the path to the services configuration file. For our solution this looks like this:


[Use IISExpress switch]

The final argument we need is to tell the emulator to use IIS Express as its web server:


The final configuration group:
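As a concrete sketch, the finished appSettings group might look something like the following. The key names and paths here are our own illustrative choices, not anything mandated by the SDK; your SDK install path and package output location will differ:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- path to the csrun.exe tool that drives the compute emulator (SDK 1.7 default location) -->
    <add key="EmulatorPath" value="C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe" />
    <!-- path to the CSX package directory produced by the Azure project build -->
    <add key="PackagePath" value="C:\Source\MySolution\MyAzureProject\csx\Debug" />
    <!-- path to the service configuration file -->
    <add key="ServiceConfigPath" value="C:\Source\MySolution\MyAzureProject\ServiceConfiguration.cscfg" />
    <!-- switch telling the emulator to host web roles in IIS Express -->
    <add key="UseIisExpressSwitch" value="/useiisexpress" />
  </appSettings>
</configuration>
```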


Task 2 – We build the process command line

Now that we have the argument information captured in the application configuration file, we need to build it into an argument string. We have done this in our Test/Specification base:


In the TestFixtureSetup [called by child test classes]

  • Declare the arguments as variables
  • Assign the values from the application's configuration file to our new variables
  • Build the argument string and assign it to _computeArgs
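The steps above can be sketched roughly as follows. This is a minimal illustration, assuming the hypothetical appSettings key names from Task 1 and NUnit's TestFixtureSetUp attribute; it is not the exact code from our base class:

```csharp
using System.Configuration;
using NUnit.Framework;

public abstract class EmulatorSpecificationBase
{
    protected string _emulatorPath;
    protected string _computeArgs;

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // pull the statically configured paths out of app.config
        _emulatorPath = ConfigurationManager.AppSettings["EmulatorPath"];
        var packagePath = ConfigurationManager.AppSettings["PackagePath"];
        var serviceConfigPath = ConfigurationManager.AppSettings["ServiceConfigPath"];
        var useIisExpress = ConfigurationManager.AppSettings["UseIisExpressSwitch"];

        // csrun expects: <package directory> <configuration file> [switches]
        // paths are quoted in case they contain spaces
        _computeArgs = string.Format("\"{0}\" \"{1}\" {2}",
            packagePath, serviceConfigPath, useIisExpress);
    }
}
```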

Task 3 – We set up the emulator and execute it in a process

Now we have all the information we need to pass to our process to execute the emulator. The next stage is to start a process and host the emulator using the arguments we have just defined.


The code is relatively trivial:

  • Spin up a process inside a using block
  • Pass in the emulator arguments
  • Wait for the process to finish
  • Report on the output
  • The Should().Be() is making use of Fluent Assertions http://fluentassertions.codeplex.com/
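A minimal sketch of that process wrapper is below. It assumes csrun-style behaviour (exit code 0 on a successful deployment) and uses the FluentAssertions Should().Be() mentioned in the bullet list; treat it as an outline rather than our exact fixture code:

```csharp
using System;
using System.Diagnostics;
using FluentAssertions;

public static class EmulatorProcessRunner
{
    public static void Run(string emulatorPath, string computeArgs)
    {
        // spin up csrun.exe inside a using block so the handle is always released
        var startInfo = new ProcessStartInfo(emulatorPath, computeArgs)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };

        using (var process = Process.Start(startInfo))
        {
            // read the deployment output before waiting, to avoid a full-buffer deadlock
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();

            // report on the output and fail fast if the deployment did not succeed
            Console.WriteLine(output);
            process.ExitCode.Should().Be(0);
        }
    }
}
```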

Task 4 – Add code to our roles to make them publish an Azure package when they build

Since Azure SDK 1.4, role projects have not automatically created package contents for Azure. We need this to happen, so we add a small code fragment to the Azure project file.
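One way of achieving this (our fragment may differ in detail) is an extra MSBuild target in the Azure project (.ccproj) file that calls the SDK's CorePublish target after each build, so the CSX package contents are refreshed on every compile:

```xml
<!-- added near the bottom of the .ccproj file, after the CloudService targets import -->
<Target Name="PublishOnBuild" AfterTargets="Build">
  <!-- CorePublish is the SDK target that produces the csx package contents -->
  <CallTarget Targets="CorePublish" />
</Target>
```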



Task 5 – We now execute our tests

Now, with the emulator and IIS Express running, we are free to execute our tests/specifications from our child test/specification fixtures.

Step 1 the Emulator starts



Step 2 IISExpress hosts the site




Step 3 [Optional] Selenium + web driver open browser and run tests




Task 6 – Shutting down the emulator

The emulator and the service can be quite slow to shut down. It is possible to use the arguments described in the following article to remove the package and close the emulator down; however, we have encountered issues with this, so instead we prefer to kill the process. We suggest you find out which of these approaches works best for your situation.

Reference article: http://msdn.microsoft.com/en-us/library/windowsazure/gg433001.aspx

Our code:
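Our kill-the-process approach amounts to something like the following. The process names are the ones we observed for SDK 1.7; treat them as assumptions and confirm them in Task Manager on your own machine:

```csharp
using System.Diagnostics;

public static class EmulatorShutdown
{
    // names we observed for the emulator service host, csrun and IIS Express - verify locally
    public static readonly string[] EmulatorProcessNames = { "DFService", "csrun", "iisexpress" };

    public static void KillEmulator()
    {
        foreach (var name in EmulatorProcessNames)
        {
            foreach (var process in Process.GetProcessesByName(name))
            {
                // the emulator can hang on a graceful shutdown, so end it forcibly
                process.Kill();
                process.WaitForExit();
            }
        }
    }
}
```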



Points of note

  • You must configure Azure to use IIS Express at a project level.
  • The emulator does not like being shut down manually halfway through a test run. If you have to do this, make sure you end the service process as well.
  • This is an advanced technique with a number of moving parts, so we advise practising it on a piece of spike code before using it on a production project.
  • This technique is limited by the constraints of the SDK and the Emulator.

Recap – What have we just done

  • We configured the information required by the process and the emulator in an application configuration file
  • We ran up a process which starts the emulator and IIS Express
  • We configured publishing for the emulator in the Azure project file
  • We ran our tests/specifications against the Azure emulator
  • We shut down both IIS Express and the emulator
  • We are now green and complete


This has been quite an advanced topic; defining these steps has been a journey of research and experimentation that we hope to save the reader.

The purpose of this technique is to uphold the principle that a developer should be able to simply check out source on a standard configured development machine and the tests should just run. They should be atomic, include their own setup and teardown logic, and when they are finished they should leave no footprint. There is no reason we should break this principle just because we are coding against the cloud.

It is also important that tests are repeatable and have no side effects. Again, the use of techniques such as those demonstrated in this short article helps to uphold this principle.

Happy Test driving


Twitter  @martindotnet

Elastacloud Community Team

Linq to Azure using Fluent Management

It’s been a while since I’ve even looked at Fluent Management, though I’ve wanted to add some things to it over the last couple of months. HPC, Big Data and general consulting and product development have kept me too busy, as has the continual release cycle of the Azure product teams, who are putting out some incredible things and keeping me learning on a daily basis!

Anyway, a couple of things I’ve wanted to add to Fluent Management since June are OPC package deconstruction, so that the reliance on msbuild can be phased out, and clean addition of SSL certificates and VM size updates without a rebuild. Obviously, like many things and many of you, I’m halfway through this. It will hopefully get finished in time for the next Azure SDK release which is, I hope, not too far away.

In the interim I decided to write a Linq provider, given that we’re beginning to use more queries against Azure in our own applications, and this is a much better mechanism for querying data than the Fluent API. At the moment it only supports queries to storage services, but I’ll do a drop which extends this to Cloud Services.

This weekend I’m opening up the source and pushing to Github. It’s been on Bitbucket since its inception, but not a lot of people use Mercurial anymore, so I thought we’d move with the times. It would be good to get some community involvement with the development of Fluent Management anyway. Some people in our UK community have offered their support, which will be most welcome!

Here is a small snippet of code which will allow you to pull back a named storage account with its keys. This week I’ll do another drop to add storage account metadata to the storage structure, so that everything you need to know returns from a single filtered query.

var inputs = new LinqToAzureInputs
{
    ManagementCertificateThumbprint = TestConstants.LwaugThumbprint,
    SubscriptionId = TestConstants.LwaugSubscriptionId
};
var queryableStorage = new LinqToAzureOrderedQueryable<StorageAccount>(inputs);

var query = from account in queryableStorage
            where account.Name == "<my storage account>"
            select account;

Assert.AreEqual(1, query.Count());
var storageAccount = query.FirstOrDefault();

Happy trails. I’ll publish the repo url in the next couple of days for anyone interested.

A Windows Azure Service Management Client in Java

Recently I worked on a Service Management problem with my friend and colleague Wenming Ye from Microsoft. We looked at how easy it was to construct a Java Service Management API example. The criteria for us were fairly simple.

  • Extract a Certificate from a .publishsettings file
  • Use the certificate in a temporary call or context (session) and then delete when the call is finished
  • Extract the relevant subscription information from .publishsettings file too
  • Make a call to the WASM API to check whether a cloud service name is available

Simple right? Wrong!

Although I’ve been a Microsoft developer for all of my career, I have written a few projects in Java in the past, so am fairly familiar with Java Enterprise Edition. I even started a masters in Internet Technology (University of Greenwich) which revolved around Java 1.0 in 1996/97! In fact I’m applying my Java experience to Hadoop now, which is quite refreshing!

I used to do a lot of crypto work and was very familiar with the excellent Java provider model established in the JCE. As such I thought that the certificate management would be fairly trivial. It is, but there are a few gotchas. We’ll go over them now so that we can hit the problem and resolution before we hit the code.

The Sun crypto provider has a problem loading a PKCS#12 structure which contains the private key and associated certificate. In C#, System.Security.Cryptography is specifically built for this extraction and there is a fairly easy and fluent way of importing the private key and certificate into the Personal store. Java has a keystore file which can act like the certificate store, so in theory the PKCS#12/PFX (not exactly the same, but for the purposes of this article they are) resident in the .publishsettings can be imported into the keystore.

In practice the Sun provider doesn’t support passwordless imports, so this will always fail. If you have read my blog posts in the past you will know that I am a big fan of BouncyCastle (BC) and have used it with Java (its original incarnation) before. Swapping the BC provider in place of the Sun one fixes this problem.

Let’s look at the code. To begin we import the following:

import java.io.*;
import java.net.URL;

import javax.net.ssl.*;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.util.encoders.Base64;
import org.w3c.dom.*;
import org.xml.sax.SAXException;

import java.security.*;

The BC imports are necessary, along with the XML classes used to parse the .publishsettings file.

We need to declare the following variables to hold details of the Service Management call and the keystore file details:

// holds the name of the store which will be used to build the output
private String outStore;
// holds the name of the publishSettingsFile
private String publishSettingsFile;
// The value of the subscription id that is being used
private String subscriptionId;
// the name of the cloud service to check for
private String name;

We’ll start by looking at the on-the-fly creation of the Java keystore. Here we take a Base64-encoded certificate and, after adding the BC provider and getting an instance of a PKCS#12 keystore, we set up an empty store. When this is done we can decode the PKCS#12 structure into a byte input stream, load it into the store (with an empty password) and write the store out, again with an empty password, to a keystore file.

/* Used to create the PKCS#12 store - important to note that the store is created on the fly so is in fact passwordless - 
* the JSSE fails with masqueraded exceptions so the BC provider is used instead - since the PKCS#12 import structure does 
* not have a password it has to be done this way; otherwise BC can be used to load the cert into a keystore in advance 
* and password-protect it */
private KeyStore createKeyStorePKCS12(String base64Certificate) throws Exception	{
	Security.addProvider(new BouncyCastleProvider());
	KeyStore store = KeyStore.getInstance("PKCS12", BouncyCastleProvider.PROVIDER_NAME);
	store.load(null, null);

	// read in the value of the base 64 cert without a password (PBE can be applied afterwards if this is needed
	InputStream sslInputStream = new ByteArrayInputStream(Base64.decode(base64Certificate));
	store.load(sslInputStream, "".toCharArray());

	// we need to create a physical keystore as well here
	OutputStream out = new FileOutputStream(getOutStore());
	store.store(out, "".toCharArray());
	return store;
}

Of course, in Java, you have to do more work to set the connection up in the first place. Remember the private key is used to sign messages. Your Windows Azure subscription has a copy of the certificate so can verify each request. This is done at the Transport level and TLS handles the loading of the client certificate so when you set up a connection on the fly you have to attach the keystore to the SSL connection as below.

/* Used to get an SSL factory from the keystore on the fly - this is then used in the
* request to the service management which will match the .publishsettings imported
* certificate */
private SSLSocketFactory getFactory(String base64Certificate) throws Exception {
	KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("SunX509");
	KeyStore keyStore = createKeyStorePKCS12(base64Certificate);

	// gets the TLS context so that it can use the client certs attached to the keystore
	SSLContext context = SSLContext.getInstance("TLS");
	keyManagerFactory.init(keyStore, "".toCharArray());
	context.init(keyManagerFactory.getKeyManagers(), null, null);

	return context.getSocketFactory();
}

The main method looks like this. It should be fairly familiar to those of you that have been working with the WASM API for a while: we load and parse the XML, add the required headers to the request, send it and parse the response.

ServiceManager manager = new ServiceManager();
	try {
		// parse the command line arguments to populate the request details
		manager.parseArgs(args);
		// Step 1: Read in the .publishsettings file 
		File file = new File(manager.getPublishSettingsFile());
		DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
		DocumentBuilder db = dbf.newDocumentBuilder();
		Document doc = db.parse(file);
		// Step 2: Get the PublishProfile 
		NodeList ndPublishProfile = doc.getElementsByTagName("PublishProfile");
		Element publishProfileElement = (Element) ndPublishProfile.item(0);
		// Step 3: Get the ManagementCertificate attribute 
		String certificate = publishProfileElement.getAttribute("ManagementCertificate");
		System.out.println("Base 64 cert value: " + certificate);
		// Step 4: Load certificate into keystore 
		SSLSocketFactory factory = manager.getFactory(certificate);
		// Step 5: Make HTTP request - https://management.core.windows.net/[subscriptionid]/services/hostedservices/operations/isavailable/javacloudservicetest 
		URL url = new URL("https://management.core.windows.net/" + manager.getSubscriptionId() + "/services/hostedservices/operations/isavailable/" + manager.getName());
		System.out.println("Service Management request: " + url.toString());
		HttpsURLConnection connection = (HttpsURLConnection)url.openConnection();
		// Step 6: Add certificate to request 
		connection.setSSLSocketFactory(factory);
		// Step 7: Generate response 
		connection.setRequestProperty("x-ms-version", "2012-03-01");
		int responseCode = connection.getResponseCode();
		// response code should be a 200 OK - other likely code is a 403 forbidden if the certificate has not been added to the subscription for any reason 
		InputStream responseStream = null;
		if(responseCode == 200) {
			responseStream = connection.getInputStream();
		} else {
			responseStream = connection.getErrorStream();
		}
		BufferedReader buffer = new BufferedReader(new InputStreamReader(responseStream));
		// response will come back on a single line
		String inputLine = buffer.readLine();
		// get the availability flag
		boolean availability = manager.parseAvailablilityResponse(inputLine);
		System.out.println("The name " + manager.getName() + " is available: " + availability);
	} catch(Exception ex) {
		ex.printStackTrace();
	} finally {
		// remove the temporary keystore now the call has completed
		manager.deleteOutStoreFile();
	}
For completeness, in case anybody wants to try this sample out, here is the rest.

/* <AvailabilityResponse xmlns="http://schemas.microsoft.com/windowsazure"
* xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
* <Result>true</Result>
* </AvailabilityResponse>
* Parses the value of the result from the returning XML*/
private boolean parseAvailablilityResponse(String response) throws ParserConfigurationException, SAXException, IOException {
	DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
	DocumentBuilder db = dbf.newDocumentBuilder();

	// read this into an input stream first and then load into xml document
	StringBufferInputStream stream = new StringBufferInputStream(response);
	Document doc = db.parse(stream);
	// pull the value from the Result and get the text content
	NodeList nodeResult = doc.getElementsByTagName("Result");
	Element elementResult = (Element) nodeResult.item(0);
	// use the text value to return a boolean value
	return Boolean.parseBoolean(elementResult.getTextContent());
}

// Parses the string arguments into the class to set the details for the request
private void parseArgs(String args[]) throws Exception {
	String usage = "Usage: ServiceManager -ps [.publishsettings file] -store [out file store] -subscription [subscription id] -name [name]";
	if(args.length != 8)
		throw new Exception("Invalid number of arguments:\n" + usage);
	for(int i = 0; i < args.length; i++) {
		switch(args[i]) {
			case "-store":
				setOutStore(args[++i]);
				break;
			case "-ps":
				setPublishSettingsFile(args[++i]);
				break;
			case "-subscription":
				setSubscriptionId(args[++i]);
				break;
			case "-name":
				setName(args[++i]);
				break;
		}
	}
	// make sure that all of the details are present before we begin the request
	if(getOutStore() == null || getPublishSettingsFile() == null || getSubscriptionId() == null || getName() == null)
		throw new Exception("Missing values\n" + usage);
}

// gets the name of the java keystore
public String getOutStore() {
	return outStore;
}

// sets the name of the java keystore
public void setOutStore(String outStore) {
	this.outStore = outStore;
}

// gets the name of the publishsettings file
public String getPublishSettingsFile() {
	return publishSettingsFile;
}

// sets the name of the publishsettings file
public void setPublishSettingsFile(String publishSettingsFile) {
	this.publishSettingsFile = publishSettingsFile;
}

// gets the value of the subscription id
public String getSubscriptionId() {
	return subscriptionId;
}

// sets the value of the subscription id
public void setSubscriptionId(String subscriptionId) {
	this.subscriptionId = subscriptionId;
}

// gets the name of the cloud service to check for
public String getName() {
	return name;
}

// sets the name of the cloud service to check for
public void setName(String name) {
	this.name = name;
}

// deletes the outstore keystore when it has finished with it
private void deleteOutStoreFile() {
	// the file will exist if we reach this point
	try {
		java.io.File file = new java.io.File(getOutStore());
		file.delete();
	} catch(Exception ex) {
		// best-effort cleanup; ignore failures
	}
}

One last thing: head to the BouncyCastle website to download the package and provider. In this implementation Wenming and I called the class ServiceManager.

Looking to the future, when I’ve got some time I may look at porting Fluent Management to Java using this technique. As it stands I feel this is a better technique than using the certificate store to manage the keys and underlying collection, in that you don’t need elevated privileges to interact with it; that requirement has proved a little difficult to work with when I’ve been using locked-down clients in the workplace or Windows 8.

I’ve been working on both Windows Azure Active Directory recently and the new OPC package format with Fluent Management so expect some more blog posts shortly. Happy trails etc.

Agile pathways into the Azure Universe – Access Control Service [ACS]

Preparation Tasks and Concepts

The main content of this article depends upon an Azure service called the Access Control Service. To make use of this service you’ll need to sign up for an Azure account. At the time of writing there is a 90-day free trial available.


You can find the portal entry page and sign-up page at this web address: [ http://www.windowsazure.com/en-us/ ]

The wizard will take you through the sign-up process, including signing up for a Live account if you do not have one currently.

What is Azure?

Azure is Microsoft’s cloud offering. In very simple terms, Microsoft provides huge containers full of hardware across the world, and from these hardware clusters Microsoft offers you the ability to:

  • Upload and consume virtual machines
  • Upload your own websites, applications and services
  • Consume specialist services, for example the Access Control Service [ACS]
  • Distribute your content worldwide via the Content Delivery Network [CDN]
  • Make use of object-based data storage using a variant of the NoSQL concept called Table Storage
  • Use online SQL Server instances, known as Azure SQL
  • Perform distributed messaging and workflow via a powerful, custom Service Bus offering

In simple terms, Azure gives you the power to focus on writing world-class software whilst it handles the hardware and costing concerns. All this power comes with a price tag, but when you take into account the cost of physical data centres, Azure is more than competitive. In this article we will only utilise a tiny fraction of the power of Azure, but we would encourage you to explore it in more depth if you haven’t already.

What Is the Access Control Service [ ACS ] ?

Web Address – [ http://msdn.microsoft.com/en-us/library/windowsazure/gg429786.aspx ]

The Access Control Service, offered by Microsoft via the Azure platform, is a broker for single sign-on solutions. In simple terms, it provides the capability for users to authenticate in order to use your application. This authentication uses commercial and trusted identity providers such as:

  • Google
  • Windows Live ID
  • A custom identity provider or a corporate Active Directory

Once a user has authenticated they are issued with a token. We can receive and use this token as a unique identifier for the user inside the target software system.

This is as far as we will go in explaining ACS; there is a lot of high-quality material available on the internet that covers the basics of getting up and running with it. We have included a number of in-depth links below, but the rest of this article will focus on test driving ACS.

Adding Internet Identity Providers like Facebook, Google, LiveID and Yahoo to your MVC web application using Windows Azure AppFabric Access Control Service and jQuery in 3 steps


How to Authenticate Web Users with Windows Azure Access Control Service


Re-Introducing the Windows Azure Access Control Service


Behaviour Driven Development and black box testing – Concepts

Behaviour Driven Development, or BDD, has been in common use for a number of years, and there are several flavours available in a number of frameworks. The basis of BDD is that we execute living requirements (or in Agile speak, stories) which have been written in an English-readable representation of the required functionality.

Some frameworks take a Domain Specific Language (DSL) approach to defining the requirements that the BDD tests will execute; a common standard is called Gherkin.

DSL  : [ http://www.martinfowler.com/bliki/BusinessReadableDSL.html ]

Gherkin : [ http://www.ryanlanciaux.com/2011/08/14/gherkin-style-bdd-testing-in-net/ ]

The framework we will be using in this article is called StoryQ ([ http://storyq.codeplex.com/ ]). We have chosen this framework for its simplistic approach to specification design and, more importantly, its support for coded specifications. To be specific: we write the specifications in code, and they therefore become part of our living code base.

Driving the Browser

Although StoryQ provides a nice step-by-step format and structure for our stories, it still leaves the problem that the requirements are written from a high-level perspective.

As developers, we could implement a white-box style approach whereby we do not respect the outer borders of the application. Our tests would then be allowed to interact with the code, and even substitute parts of it with testing objects such as mocks. We speak about this approach in detail in this article:

[ http://blog.elastacloud.com/2012/08/21/step-by-step-guides-getting-started-with-specification-driven-development-sdd/ ] .

In this case we have elected to respect the boundaries of the application. We will treat the application as if it is within a black box, into which we cannot see or interfere, but with whose public interface (in this case a web page) we can interact.

Having made this decision we now need to find a framework to work with StoryQ. We need to allow our tests to drive a browser and cause it to replicate a user’s interaction. There are a number of frameworks available; in this case we select the excellent Selenium + WebDriver packages from NuGet.

[ http://www.nuget.org/packages/Selenium.WebDriver ].

The tools we are using

Our own development setup is as follows:

  • Windows 8 http://windows.microsoft.com/en-US/windows-8/release-preview
  • Visual Studio 2012 http://www.microsoft.com/visualstudio/11/en-us
  • ReSharper 7 http://www.jetbrains.com/resharper/whatsnew/index.html
  • NUnit http://nuget.org/packages/nunit
  • Fluent Assertions http://fluentassertions.codeplex.com/
  • Selenium + WebDriver http://www.nuget.org/packages/Selenium.WebDriver

We find that the above combination of software packages makes for an exceptional development environment. Windows 8 is by far the most productive operating system we have used on any hardware stack. JetBrains ReSharper has become an indispensable tool, without which Visual Studio feels highly limited. NUnit is our preferred testing framework; however, you could use MbUnit or xUnit. For those who must stick with a pure Microsoft ALM experience, you could also use MSTest.

What are we trying to achieve?

In the rest of this article we will demonstrate using Selenium + WebDriver to empower our StoryQ tests. We will show how to represent a user logging into Azure ACS prior to accessing our site.

We will show illustrations of the following

  • Web Driver
  • StoryQ
  • Azure Access Control Service
  • Emergent Design
  • NUnit
  • Unit Testing a controller action
  • Selenium Selectors
  • Identity and Access Tool

Test Driven

As Test Driven Developers we start with a requirement and a test –

As A User who is not logged in

When I try to access the site

Then I expect to be taken to a login screen

Step 1 : Add a Test Assembly – this is just a standard class library project.


Step 2 : Add NuGet references for –

    • NUnit
    • StoryQ
    • Fluent Assertions


Step 3 : Write a Story –


Note on Story Planning

The Story above reflects the spirit of the original requirement, but actually attacks it in a very pragmatic way. This will not appeal to purist BDD folks, but it is an approach we have found to be very effective when analysing requirements from the business. We often find that some of the set-up and Administrator stories have been missed from the planning, and that these are required to empower the pure business Stories.

We can also see here that the motivation for the ACS integration has changed to the Administrator. This would typically have come out of a conversation with a Business Owner over who this Story benefits and which role would be motivated by the security of the system.

The business would typically readjust their Story palette to include Stories relevant to User profile and data security that arise from discussion with the business owner. This allows us to refine the Stories provided to the developers and better reflect the business's intentions. The above situation illustrates the cooperative requirement management process within the Agile space. It is a good example of iterative requirement planning as a requirement passes through different stages of planning.

If we execute a dry run of this story –


we see the following output –

Story is We are forced to login to ACS when trying to access the site
In order to keep the web site secure => (#001)
As a Administrator
I want users to have to authenticate via ACS prior to entry to the web site

With scenario Happy Path

Step 4 : Write the scenario –
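The original screenshot is missing from this post, but reconstructed from the dry-run output above, the story and scenario in StoryQ's fluent API might look something like the sketch below. The step method names are our assumptions, taken from the step headings later in this article:

```csharp
using System.Reflection;
using NUnit.Framework;
using StoryQ;

[TestFixture]
public class AcsLoginStory
{
    [Test]
    public void WeAreForcedToLoginToAcs()
    {
        new Story("We are forced to login to ACS when trying to access the site")
            .InOrderTo("keep the web site secure")
            .AsA("Administrator")
            .IWant("users to have to authenticate via ACS prior to entry to the web site")
            .WithScenario("Happy Path")
                .Given(ThatIAmNotLoggedIn)
                .When(ITryToGoToTheHomePage)
                .Then(IAmTakenToTheAcsProviderChooserPage)
                    .And(IHaveToPickAIdentityProvider)
                    .And(IHaveToLoginToMyIdentityProvider)
                    .And(IAmTakenToTheSite)
            .ExecuteWithReport(MethodBase.GetCurrentMethod());
    }

    // Step stubs - StoryQ generates these for us in Step 5.
    private void ThatIAmNotLoggedIn() { }
    private void ITryToGoToTheHomePage() { }
    private void IAmTakenToTheAcsProviderChooserPage() { }
    private void IHaveToPickAIdentityProvider() { }
    private void IHaveToLoginToMyIdentityProvider() { }
    private void IAmTakenToTheSite() { }
}
```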


Step 5 :Generate Step Stubs


Recap – What we have done so far

We have taken the following steps –

  • Set up an environment
  • Built a class library project
  • Set up a story
  • Set up a scenario
  • Stubbed out the steps required by the scenario

If we dry run the story now we see the following output –


Let's take a closer look


We can immediately see what we need to do to make this test pass, and in a very human readable output. This is one of the amazing and empowering features of StoryQ: its ability to bridge the gap between technical and business, via its clear and precise output formats.

Note – this is still a bit of a technical story; terms such as ACS would probably need to be refactored or added to a product definition dictionary. For the sake of this article we have kept this wording in.

Implementing the functionality and making the story pass

Next we start to use the test steps defined in the Story to drive out our functional footprint. In the following sections we will start to make formative steps to construct the application. The ACS bridge being built through the Azure side of the configuration will not be covered, as this has been covered in depth by the links supplied in earlier sections of this article.

Steps illustration

Now let's get started with the implementation.

Step 1 : That I am Not Logged In

This step forces us to use NuGet to bring in the WebDriver and Selenium implementation we referred to earlier. The main reason we are driven to do this now is that we want to be sure we have no ACS cookies registered with the browser. To delete any cookies that are present we need to make a call to the browser. This also gives us our perfect motivation to bring down a browser automation toolkit.


With the WebDriver in place we can now instantiate it and make sure we have cleared down any cookies and are working with a fresh profile. There are drivers available for Firefox, Internet Explorer and Chrome. In our case we have used Firefox.


With the Driver in place and instantiated we can now make sure it is clear of cookies.


We are being explicit here despite the fact we have just created a new Profile. This is because the step cannot be dependent upon the setup code, the driver or the profile remaining unchanged. We therefore include a deliberate step to delete all cookies and make sure we are working with a clean browser.
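A minimal sketch of the driver set-up and cookie clear-down using Selenium's .NET API (the Firefox driver choice mirrors the article; the rest is our own arrangement):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

// Instantiate the driver against a fresh profile.
IWebDriver driver = new FirefoxDriver(new FirefoxProfile());

// Be explicit: delete any cookies rather than relying on the new profile.
driver.Manage().Cookies.DeleteAllCookies();
```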

Step 2 : I Try To Go To The Home Page

In this step we will see a browser window open up and try to browse to the site. To enable this we are driven to add the following elements –

  • App.Config
  • ASP MVC Application
  • Home Controller
  • Index Action

When we can see that the browser launched by the framework automatically navigates to the home page then we know this step is complete.
The code to drive the browser –
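The missing listing might be sketched as follows; the "HomeUrl" appSettings key is a hypothetical name of our own:

```csharp
using System.Configuration;

// "HomeUrl" is an assumed appSettings key - the article keeps the
// runner settings in app.config for convenience.
var homeUrl = ConfigurationManager.AppSettings["HomeUrl"];
driver.Navigate().GoToUrl(homeUrl);
```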


The settings for the test runner are held in an app.config file for convenience.


We now add an ASP MVC 4 project


When we run the story we can see Firefox open and try to navigate to the Home URL, which fails.


We will now add a Unit test to drive out the Home controller and index action.
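A test of this shape (the names are our own) is enough to drive out the controller and the Index action:

```csharp
using System.Web.Mvc;
using FluentAssertions;
using NUnit.Framework;

[TestFixture]
public class HomeControllerTests
{
    [Test]
    public void Index_ShouldReturnAView()
    {
        var controller = new HomeController();

        var result = controller.Index() as ViewResult;

        result.Should().NotBeNull();
    }
}
```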


From this test we drive out the view and the controller; we have added them below for completeness.





The controller and view are skeletal objects; we only implement what we are driven to add by our tests.

We now rerun our test and find that we are green. We are ready to continue our journey.


With the new controller and view in place we will re-run our Step and see if we can reach the site. Unfortunately we find we still fail – this is because the default is for Visual Studio to use the development server. We can work with this, but when working on the types of sites that we will be running via the Azure Emulator, we prefer to work directly with IIS or IIS Express. Eventually, as mentioned above, we would more than likely be driven by our non-functional requirements to implement the Azure Emulator. However, as we have not been driven there yet, we will configure Visual Studio to run this site via IIS.

The links below will explain how to configure your choice of web server for your project to run under.

[ http://msdn.microsoft.com/en-us/library/ms178108%28v=vs.100%29.aspx ]

[ http://ukchill.com/technology/setting-up-a-web-project-environment-in-visual-studio-2010-to-allow-debugging-using-both-iis7-and-the-development-web-server/ ]

With IIS configured we have proved the potential for this step to succeed. We can see below that without ACS implemented we are successfully able to Test Drive to the Home action of the site.


StoryQ guides our efforts by showing a textual map of what we have done and what we have left to achieve.


Step 3 : I Am Taken To The ACS Provider Chooser Page

To achieve this step we need to add some acceptance criteria and then we need to configure ACS. First let us set our expectations for this step.


In the code above we have a search to retrieve all div elements on the page, followed by a query to make sure the ACS sign-on text can be found in the collection. Note that there are many elements that we could have checked on the page, including a fragment of the URL. As there was a choice, we have chosen to select just enough to get the job done, and to fix the query and criteria if they prove to be problematic.
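The search and query described might be sketched as below; the "Sign in" text is an assumption, as the exact wording depends on your ACS namespace:

```csharp
using System.Linq;
using FluentAssertions;
using OpenQA.Selenium;

// Retrieve all div elements, then query for the ACS sign-on text.
var divs = driver.FindElements(By.TagName("div"));
divs.Any(div => div.Text.Contains("Sign in")).Should().BeTrue();
```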

Note: the selection criteria above will vary dependent upon the Identity Providers you have configured. The above criteria work when we have selected multiple identity providers, in this case Google and Windows Live.

The next step is to configure the pathways to ACS. For this to succeed you need to have configured an ACS namespace as per the directions supplied on the links earlier in this article.

Identity And Access Tool





The Identity and Access Tool can be used to bypass a lot of manual configuration and pain. It is a substantial improvement on the WIF toolkit. Below is a link which provides details of how to configure the tool.



Note: We have, on occasions, found that we have needed to restart Visual Studio 2012 multiple times when installing this tool, before the Identity and Access option has been available on the context menu.

Note: We have found with our ASP MVC projects when using this tool, that we then need to add some namespaces as reference. You can check what these are by taking a quick peek at the web.config file.


We found we needed to add System.IdentityModel.


If you want to check your configuration, set the ASP MVC project as the start-up project and press F5 to run the project. Depending on the identity providers you configured, you should be presented with a challenge page.


Once you have logged in, you should be presented with the Home page for your ASP MVC Application.


With the ACS bridge in place and properly configured, we will now head back to our Story and the step we were completing. We need to rerun the test to check the expectation.



After running the test Firefox starts up and goes to the ACS page. The text inside the div is found and we have another passing step in our story as well as a configured ACS bridge.

Next we will pick an identity provider. We will then login to the ACS page using the WebDriver and authenticate.

Step 4 : I Have To Pick A Identity Provider

We will use the WebDriver to login to Windows Live as the Identity Provider.
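ACS renders one link per configured identity provider on the chooser page, so a sketch of this step can be very small (the link text is an assumption based on our configuration):

```csharp
using OpenQA.Selenium;

// Click the Windows Live entry on the ACS provider chooser page.
driver.FindElement(By.PartialLinkText("Windows Live")).Click();
```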


We can now execute this step


Success –  The test now executes the step to take the driver to the Windows Live login page

Step 5 : I Have To Login To My Identity Provider

To provide this functionality we have written a few extension methods. There is no magic here; the secret of this technique is in its simplicity. We need only do the following:

  • Find the relevant elements on the page
  • Enter the expected Text
  • Click a button
  • Capture a confirmation dialog and accept it

Step implementation


Extension Methods
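The extension methods might look like the sketch below; the names and locators are our own assumptions, but the bodies follow the four bullet points above:

```csharp
using OpenQA.Selenium;

public static class WebDriverExtensions
{
    // Find the relevant element and enter the expected text.
    public static void EnterText(this IWebDriver driver, By locator, string text)
    {
        var element = driver.FindElement(locator);
        element.Clear();
        element.SendKeys(text);
    }

    // Click a button, then capture the confirmation dialog and accept it.
    public static void ClickAndAcceptConfirmation(this IWebDriver driver, By locator)
    {
        driver.FindElement(locator).Click();
        driver.SwitchTo().Alert().Accept();
    }
}
```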


The ACS provider after the Email address and Password have been filled in


Success – The Home screen, post click of submit button and acceptance of Confirmation Dialog


Test output showing passing step


Step 6 : I am Taken To The Site

The final step asserts that we are in fact on the home page, and we are done.
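An assertion of this shape is all the step needs (again, "HomeUrl" is our assumed app.config key):

```csharp
using System.Configuration;
using FluentAssertions;

// We should have landed back on the home page after authenticating.
driver.Url.Should().StartWith(ConfigurationManager.AppSettings["HomeUrl"]);
```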


Let us now execute this test and hopefully we should have a green passing story.



Recap – What have we just done

Let us just take a moment to take a breath and look back at what we have achieved.

  • We now have an ASP MVC 4 application shell
  • The ASP MVC 4 application shell is integrated with ACS
  • We have security test coverage across the homepage. We can add other scenarios to our Stories if we want to make them implement ACS security.
  • We have introduced the concepts of Black Box Browser based BDD
  • We have had a short discussion about the need to reinterpret and reframe requirements


In this short article we have introduced a lot of new material, and supplied a number of links to in-depth discussions and tutorials on the relevant subject areas. It is our hope that this acts as a springboard to deeper learning and fun with Azure, ACS and BDD.

Author Beth

Follow @Martindotnet

Step By Step Guides – Inversion of Control and associated principles


In the following text we will use a number of terms and diagrams that perhaps introduce some complexity. You may not have come across these terms before, so I will take a moment to try to explain them.

  • Composition : Aggregation / Aggregated Object – When we talk in terms of object composition, what we are really saying is that the final object graph is made up of many objects coming from one root object. To put this another way, an object can contain one or more objects, and these make up the fully composed object. This, in its simplest terms, is what we refer to as object composition.

However there is a subtle variant of object composition called object aggregation, although these are quite often used interchangeably, they are not strictly the same.

Composition is generally used when the object creates the instance of the object being made part of its composed child model. Thus when the root composing object's lifeline is terminated (that is, when the root object is destroyed), the associated child objects which make up the rest of the object graph are also destroyed.

Aggregation differs from object composition in that an object which is aggregated generally has an independent lifeline, and we find quite often that it is shared between many objects which aggregate it by reference as part of their object graph. We can therefore say that an aggregated object is generally not explicitly destroyed when a class using that object is destroyed.
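The distinction can be sketched as follows (the class names are purely illustrative):

```csharp
// Composition: the Engine is created and owned by the Car,
// so it is destroyed along with it.
public class Engine { }

public class Car
{
    private readonly Engine engine = new Engine();
}

// Aggregation: the Logger has an independent lifeline and is
// shared by reference; it can outlive any single OrderService.
public class Logger { }

public class OrderService
{
    private readonly Logger logger;

    public OrderService(Logger logger)
    {
        this.logger = logger;
    }
}
```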

  • Patterns: Separation Of Concerns and Single Responsibility Principle – Separation Of Concerns, otherwise referred to as [SOC], is a core principle by which we simply state that we should separate our concerns within the application. Traditionally this has involved a horizontal style of separation, or more correctly separating along technical boundaries. If we try to define the MVC pattern in these terms we could say that only presentation logic would be placed in a View. Consequently we could also say that data validation should be placed in the Model, and finally we could say that the only place decision logic should be found is in the Controller. [SOC] has a sister pattern known as the Single Responsibility Principle or [SRP]; when this is applied we say that one class or one operation should only do one thing. This can be scaled to also say one assembly should have a single responsibility, although this is a little abstract and contentious.

Since the emergence of modern architecture principles, [SOC] and [SRP] have been used to identify vertical separation of concern. Taking this into consideration, while applying a Service Oriented Architecture [SOA], we can say that each service has a distinct responsibility. For instance, a ‘Security Service’ or an ‘Order Service’.

The tools we are using

Our own development setup is as follows:

· Windows 8 http://windows.microsoft.com/en-US/windows-8/release-preview

· Visual Studio 2012 http://www.microsoft.com/visualstudio/11/en-us

· Resharper 7 http://www.jetbrains.com/resharper/whatsnew/index.html

· NUnit http://nuget.org/packages/nunit

· NUnit Fluent Extensions http://fluentassertions.codeplex.com/

· NSubstitute http://nsubstitute.github.com/

We find that the above combination of software packages makes for an exceptional development environment. Windows 8 is by far the most productive Operating System we have used across any hardware stack. JetBrains ReSharper has become an indispensable tool, without which Visual Studio feels highly limited. NUnit is our preferred testing framework; however, you could use MbUnit or xUnit. For those who must stick with a pure Microsoft ALM experience you could also use MSTest.

We also use a mocking library; in this case we have used NSubstitute due to its power and simplicity. However there are other options available with the same functionality, such as Moq and Rhino Mocks. We feel libraries like TypeMock should be avoided due to their lack of respect for the black box model; this lack of respect allows developers to take short cuts which invalidate good practices.

What Is Inversion of Control [IOC]?

IOC is the practice of inverting the control of a given object. Typically, in a scenario that does not use IOC, an object is constructed and used by an aggregating object. The diagram below demonstrates the aggregated relationship.


One of the main aspects of the IOC scenario is that we invert the construction of the object. In real terms this means that the object is constructed externally to the aggregating object. Inverting the creation of an object by constructing it prior to its use by the consuming / aggregating object has a number of advantages, some of which are listed below:

  • Object initialisation can be controlled and the object's state can be more carefully provided for.
  • The object can be registered with an external collection.
  • We can keep a reference or hook externally to the constructed object without violation of black box principles.
  • The constructed object's events can be wired up to an external object. This is especially useful for the implementation of various patterns, such as the Visitor and Observer patterns.

We can see below a classic representation of composition by object instantiation –


“That ‘Object A’ creates an instance of ‘Object B’”

What is wrong with the shape above?

The shape above, with Object A instantiating then subsequently aggregating and encapsulating Object B, leads to a scenario where Object B is hidden from the outside world. This leads to a number of issues, and we will examine these in a moment. First let us just note that there are two other solutions to this issue, which we list below. These fit outside the IOC / DI palette, and this article will not attempt to address or discuss them as they are outside its scope.

  • Service Location Pattern
  • Invalidation of black box hiding principles by publicly exposing the object

It is worthy of note that there is a huge debate raging about the value of the Service Location Pattern, and it is one we have no intention of entering into currently.

Both the solutions above have a tendency to lead to bad practices. The danger when these techniques are used incorrectly is that they can lead to the invalidation of two other very important principles.

  • SOC : Separation Of Concerns
  • SRP : Single Responsibility Principle

Dynamic operation invocation dependent on runtime conditions

One of the issues IOC / DI helps to address is the need to decide which object or block of functionality we should invoke dependent on runtime conditions.

This is a huge issue with direct instantiation of aggregated objects. Let us think about a situation where we have to make a different operation call dependent on some condition; let us for a moment assume it is a condition that is only available to us at run time. We can see this expressed in the code fragment below.


The code above highlights the situation where a class has to make a decision about which service to call dependent upon a value that is only available at run time. As we can see the contract signatures of the two classes are the same, so effectively the aggregating class is having to instantiate two objects where it only needs one, and making identical calls to one or the other based upon logic which is outside its SOC responsibilities.
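The situation described can be sketched as follows (the service names are our own invention):

```csharp
public class Report { }

public class CloudDataService { public Report LoadReport() { return new Report(); } }
public class LocalDataService { public Report LoadReport() { return new Report(); } }

public class ManagerService
{
    // Two objects instantiated where only one is ever needed.
    private readonly CloudDataService cloudData = new CloudDataService();
    private readonly LocalDataService localData = new LocalDataService();

    public Report BuildReport(bool runningInCloud)
    {
        // Identical calls, selected by logic that sits outside
        // this class's SOC responsibilities.
        return runningInCloud ? cloudData.LoadReport() : localData.LoadReport();
    }
}
```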


The principal difference between the two code examples is that creational control of the DataService ultimately used by the ManagerService (in the context of this article, the aggregating class) has been inverted and moved outside the ManagerService. The effect of this is that the class maintains clean, simple code which is in line with the SOC principle when applied in this context.

How do we apply Inversion of control using Dependency Injection?

Working with inversion of control we have to consider how we pass an object, which we have inverted the construction of, into the intended aggregating object. In this section we will consider Dependency Injection and how this resolves and satisfies this requirement.

We can see in the code sample in the last section that we injected the Object to be used via a parameter on the constructor. This is often referred to as constructor injection. There are in fact two common dependency injection implementations:

  • Constructor based injection – involves declaring the dependency by its interface on the constructor parameter list; we show a small example below.


In the example above, we can see that an object which corresponds to the IDataService interface needs to be injected at runtime into the constructor of the ManagerService. When injected into the constructor at object construction, the service is then stored for later use in an immutable class level field.

This is a classic example of Constructor based Dependency Injection.
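A minimal sketch of the pattern described, reusing the IDataService name from the text (the member names are our assumptions):

```csharp
public class Report { }

public interface IDataService
{
    Report LoadReport();
}

public class ManagerService
{
    // Stored for later use in an immutable class level field.
    private readonly IDataService dataService;

    public ManagerService(IDataService dataService)
    {
        this.dataService = dataService;
    }

    public Report BuildReport()
    {
        return dataService.LoadReport();
    }
}
```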

  • Setter based injection – this methodology has fallen out of favour in modern codebases because the injection is, by default, non-mandatory. This is opposed to constructor based injection, which insists that its parameters are satisfied at the point of construction. However setter based injection is still a valid method and therefore we shall show an example of it below.


The above code shows an exposed setter which has been set up as a Dependency Injection target. In this case we have used Microsoft Unity's Dependency Attribute to mark the property as a setter injection target; we will talk about injection containers in more depth shortly. In this scenario we can see that the outside system can inject a dependency via a setter – depending on the injection container or composition system being used we can enforce a dependency or mark it as optional.
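A minimal version of the setter target described, using Unity's DependencyAttribute (the IDataService name is reused from the earlier examples):

```csharp
using Microsoft.Practices.Unity;

public class ManagerService
{
    // Marked as a setter injection target; Unity satisfies this
    // property after construction rather than through the constructor.
    [Dependency]
    public IDataService DataService { get; set; }
}
```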

Setter based injection has quite often been utilized for setting up defaults across an object; the issue with this approach is that it quite often hides bad design choices. We would suggest that setter based injection should only be considered if constructor based injection is not appropriate, or if you find yourself in a specialised situation such as utilising MEF [ http://mef.codeplex.com/ ] for View Model / Model injection.

In this short section we have taken a quick tour of dependency injection, and the two methods commonly used to expose dependencies as injection targets. We shall now move on to the real world use of DI and IOC and start to talk about containers, and how they empower these principles and provide immense power and flexibility to our codebases.

IOC and DI in the Real World

When designing a modern application using concepts such as Emergent Architecture and some form of test driven development, it becomes a natural process to utilise design principles such as SOC and SRP. This will often mean that we end up with large object graphs which would be inconvenient to manually inject into objects.

Modern development teams utilize IOC Containers such as those listed below

Microsoft Unity supports both Setter Injection and Constructor Injection. We have discussed some of the issues with Setter Injection and due to these issues, the rest of this article will focus on Constructor Injection using the Unity Container.

Let us now Test Drive a simple example using ASP MVC. We will produce an application which makes use of Unity and some community extensions which provide the binding bridge between ASP MVC and the Unity IOC container.

ASP MVC Example

We will now build a trivial application which utilizes

  • The concepts expressed so far in this article
  • The Unity Framework
  • The ASP MVC Framework
  • Visual Studio 2012
  • Resharper 7
  • NUnit
  • NUnit Fluent Extensions

The application will demonstrate a Specification Driven ASP MVC controller. We will utilize SOC by separating our business functionality from our presentation framework. Our final deliverable application will wire in a Unity container with any needed frameworks to provide a working application.

The purpose of the application will be to Base64 encode a text fragment.

We start with a test

As always with modern development in an emergent setting we start with a test.


Writing the test above generated an ASP MVC project, within which the following artefacts are of interest to our discussion.

The controller shown below has a dependency on the IEncoderService; in our case we are passing in a Base64 Encoder service. We could use the same controller to encode to anything we please just by changing what we pass into the constructor. Below, for your reference, are the Base64 encoding service and model.
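The missing listings might be sketched as follows; the member names are our assumptions, but the shape follows the description above:

```csharp
using System;
using System.Text;
using System.Web.Mvc;

public interface IEncoderService
{
    string Encode(string input);
}

public class EncoderServiceBase64 : IEncoderService
{
    public string Encode(string input)
    {
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(input));
    }
}

public class HomeController : Controller
{
    private readonly IEncoderService encoderService;

    // The concrete encoder is injected; the controller never news it up.
    public HomeController(IEncoderService encoderService)
    {
        this.encoderService = encoderService;
    }
}
```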





Once the test has been completed and is passing:


Then it is simply a matter of running the website to see what we need to do next.


We receive an exception page when we ask the ASP MVC framework to load the view; this is because the framework is trying to create the controller. However, because we are making use of Dependency Injection and injecting the EncoderService into the controller, the framework can only find a parameterised constructor and is therefore unable to create an instance of the controller. Thus we need to give the framework a little help.

Fixing the Application

Step 1: We now need to bring in an ASP MVC bridge to Unity. We could hand-code this, but NuGet already has a useful package [Unity.Mvc3] in its repository which supplies the functionality we need.


Step 2: With the Unity.Mvc3 package in place, we now need to tell the application how to use it. Fortunately we find it has added a Readme file to our solution, within which we find the following text.

“To get started, just add a call to Bootstrapper.Initialise() in the Application_Start method of Global.asax.cs and the MVC framework will then use the Unity.Mvc3 DependencyResolver to resolve your components.”

As directed above, we will now insert a call to the Bootstrapper into the Global.asax.
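A sketch of Global.asax.cs after the change, assuming the default MVC project template:

```csharp
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // ...the template's usual area, filter and route registration...

        // Delegate controller construction to the Unity container.
        Bootstrapper.Initialise();
    }
}
```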


Step 3: With the Bootstrapper registered, construction is now delegated via the UnityDependencyResolver. This uses the Unity Container, in which we have not yet registered the IEncoderService interface against a concrete implementation, thus we now see a new error when we try to run the site.


Step 4: Follow the guidance in the error message and add an implementation for the interface (IEncoderService), which is expressed as a dependency on the controller we are trying to construct. Let's do that right now.


We have now registered the IEncoderService with the container and told it that the EncoderServiceBase64 is to be injected into any object requiring an implementation of the IEncoderService. In addition we have told the container to use a ContainerControlledLifetimeManager, which means that we will share the same instance of the EncoderServiceBase64 service across all instances that require it; this is in effect a Singleton Pattern implementation [ http://en.wikipedia.org/wiki/Singleton_pattern ].
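In Unity.Mvc3 this registration lives in the Bootstrapper's container set-up method; a sketch, assuming the package's generated Bootstrapper class:

```csharp
using Microsoft.Practices.Unity;

private static IUnityContainer BuildUnityContainer()
{
    var container = new UnityContainer();

    // One shared EncoderServiceBase64 instance for every consumer
    // of IEncoderService - in effect a Singleton.
    container.RegisterType<IEncoderService, EncoderServiceBase64>(
        new ContainerControlledLifetimeManager());

    return container;
}
```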

Finally let us now rerun the application and see if we are exception free.


The View source is below


Let us now enter some text to Base64 Encode


If we submit this query we can see that we hit the breakpoint we have set in the injected Encoder Service. This proves that Unity has composed the Controller for us as expected.


Let us continue with the application execution and hopefully see the end result of the encoded operation.


The View source is below



This article has covered a lot of ground and introduced a lot of new concepts. Ultimately the point of development is producing good quality software that can be expanded and maintained with relative ease; while IOC and DI are not the complete answer, they do form part of the solution. IOC and DI should be in every professional developer's toolbox, and we hope this article has demonstrated both the theory and the real world power of Inversion of Control and Dependency Injection.



follow @martinsharp