Fluent Management update and Wiki

It’s been a long time coming but I’ve finally added a Fluent Management Wiki online. Thankfully Azure Websites makes it very easy to add WordPress instances 🙂

Anyway, the changelist from version 0.4 is immense, but the most notable addition is IaaS support. There are also some bug fixes and quite a few breaking interface changes.

My hope is that the Wiki will generate enough feedback for us to understand how people are using it, what they want to see and what their issues are, so please feel free to be brutal. It’s only going to turn into a great library if you are!

The Wiki can be viewed @ http://fluentmanagement.elastacloud.com


Step By Step Guides – Getting started with White Box Behaviour Driven Development

What is White Box BDD?

White Box BDD is built around the concept of writing meaningful unit tests. With White Box Behaviour Driven Development, sometimes referred to as SDD or Specification Driven Development, we are trying to provide a behaviour statement around an object, or a tree of objects, which supplies a given set of functionality.

White Box BDD attempts to provide a way to describe the behaviour within a system, unlike black box BDD, which classically attempts to describe the behaviour of a whole vertical slice of a system.

The tools we are using

My own development setup is as follows:

· Windows 8 http://windows.microsoft.com/en-US/windows-8/release-preview

· Visual Studio 2012 http://www.microsoft.com/visualstudio/11/en-us

· ReSharper 7 http://www.jetbrains.com/resharper/whatsnew/index.html

· NUnit http://nuget.org/packages/nunit

· NUnit fluent extensions (Fluent Assertions) http://fluentassertions.codeplex.com/

I find the above combination of software packages to be absolutely stunning as a development environment, and Windows 8 is the most productive operating system I have ever used across any hardware stack. As always, without JetBrains ReSharper, Visual Studio feels almost crippled. NUnit is my go-to testing framework, but of course you have MbUnit, xUnit and, for those who must stick with a pure Microsoft ALM-style experience, MSTest. Overlaying a fluent extensions library allows me to write more fluent and semantic assertions, which falls in line with the concepts of White Box BDD. For most of the frameworks above there are similar sets of extensions available; MSTest is, to my knowledge, the exception to the rule.

The http://fluentassertions.codeplex.com/ framework is particularly powerful as it allows you to chain assertions, as shown by some of the examples taken from their site below.

var recipe = new RecipeBuilder()
    .With(new IngredientBuilder().For("Milk").WithQuantity(200, Unit.Milliliters))
    .Build();

Action action = () => recipe.AddIngredient("Milk", 100, Unit.Spoon);

action.ShouldThrow<RuleViolationException>()
    .WithMessage("change the unit of an existing ingredient", ComparisonMode.Substring);

string actual = "ABCDEFGHI";
actual.Should().StartWith("AB").And.EndWith("HI").And.Contain("EF").And.HaveLength(9);


There are other choices, and the techniques contained in this article can be used with earlier versions of the Microsoft development environments; indeed, in most modern languages you can find tooling support, including but not limited to:

· Java – IntelliJ IDEA, Eclipse + JUnit

· PHP

· Ruby on Rails – RubyMine

In fact the techniques can be universally applied to any modern development environment, and I have yet to come across a project that cannot be tested where it has been correctly conceptualised and developed in line with SOLID http://en.wikipedia.org/wiki/Solid_(object-oriented_design) principles.

Let’s look at some code

Step 1 – Add a Specification Project

The first step is to create a specification library. Because we are going to use our specifications to drive our package and project choices, we do not start with a production code project such as an ASP.NET MVC project; at this stage we only need to add our specification library.

This introduces an important principle which drives White Box BDD, called Emergent Design. This principle says that the natural act of coding will force the correct design for the application to emerge; therefore design up front, with the exception of big block conceptualisation, is invalid and we should instead allow our architecture to emerge naturally.

So with this principle in mind, let us open the new project dialog and add a class library.

Looking at the image above you can see that we have used the .Specifications extension to the project name, which will naturally be reflected in the project namespace. This is my personal preference and, for me, implies the intent of the project.

Step 2 – Write a test based on your requirements

For the scope of this demonstration I am going to keep the requirements very simple. I will outline them below using Gherkin-esque syntax.

Given that I have a set of text

When I pass the text into the Encoder

Then I expect to be given back a correctly encoded base 64 fragment of text
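Since the screenshots have not survived, here is a sketch of how the finished specification might look. The fixture and test names are my assumptions; EncoderService and Encode() are the names used later in the post, and the expected value comes from an online base64 encoder. The implementation stub is included only so the sketch compiles on its own.

```csharp
using System;
using System.Text;
using NUnit.Framework;
using FluentAssertions;

[TestFixture]
public class EncoderServiceSpecification
{
    [Test]
    public void Should_return_a_correctly_encoded_base64_fragment_for_a_given_set_of_text()
    {
        // Given that I have a set of text
        const string sourceText = "Hello World";
        // expected value taken from an online base64 encoder
        const string expectedEncodedText = "SGVsbG8gV29ybGQ=";

        // When I pass the text into the Encoder
        var encoder = new EncoderService();
        string encodedText = encoder.Encode(sourceText);

        // Then I expect to be given back a correctly encoded base64 fragment
        encodedText.Should().Be(expectedEncodedText);
    }
}

// the implementation we eventually drive out - included here so the sketch compiles
public class EncoderService
{
    public string Encode(string sourceText)
    {
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(sourceText));
    }
}
```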

Next we write a test definition; you can see that the TestFixture and Test attributes have not been resolved.

This is because we will not add frameworks until we need them, so we will add them next.

Next we add the packages via NuGet.

The next step is to add the acceptance criteria. It is crucial that this is where we start.


What you will notice is that ReSharper has marked the things it doesn’t know about in RED. We now follow the RED: clicking on the RED and using ReSharper’s Alt+Enter command helps us to do this.

When we have finished following the RED we find we have driven out a basic shell for the encoder

While I was getting the expectedEncodedText from an online base 64 encoder site I realised that Encode() needed a parameter passed in for the source text, so let’s add this. It’s worth noting we didn’t forget about this, but rather we waited until we were driven to implement it. This is fundamental to this style of coding.

With a little typing to add the parameter sourceText in the test, which of course went RED, then a few Alt+Enters, we see that the parameter is pushed down into our EncoderService class.

Next we run the test and we see that we are RED and we hit The NotImplementedException
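The driven-out shell looks something like the sketch below; this is the stub ReSharper generates, which is why the test goes RED with a NotImplementedException (the shape is an assumption since the screenshot is missing).

```csharp
using System;

public class EncoderService
{
    public string Encode(string sourceText)
    {
        // ReSharper generates this stub for us - running the test fails RED here
        throw new NotImplementedException();
    }
}
```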


Step 3 – Make the Test Go Green

The next step is to make the test pass, or go green. This will give us a guarantee of the behaviour, or specification, as expressed by the assertion.

First of all I add the base64 encoding code.
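The encoding code itself is a one-liner; a sketch of what goes into the EncoderService (the class and method names come from the text above):

```csharp
using System;
using System.Text;

public class EncoderService
{
    public string Encode(string sourceText)
    {
        // take the UTF-8 bytes of the source text and base64 encode them
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(sourceText));
    }
}
```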


Let’s run the test again.


We now find that we are Green.


This is a rather trivial example, but what have we done here, in easy to understand points:

· One, we described a behaviour, or rather a specification for a behaviour, using the test name and the Assert.

· Two, we drove out the code directly from the test using ReSharper magic.

· Three, our end point was a passing specification.

I hope this example has been useful if basic.



Building a virus scanning gateway in Windows Azure with Endpoint Protection

I remember being on a project some 9 years ago and having to build one of these. Building a real-time gateway is not as easy as you would think. In my project there were accountants uploading invoices of various types and formats that we had to translate into text using an OCR software package. We built a workflow using a TIBCO workflow designer solution (which I wouldn’t hesitate now to replace with WF!)

At a certain point people from outside the organisation had the ability to upload a file, and this file had to be intercepted by a gateway before being persisted and operated on through the workflow. You would think that this was an easy and common solution to implement. However, at the time it wasn’t. We used a Symantec gateway product and its C API, which allowed us to use the ICAP protocol and thus do real-time scanning.

Begin everything with a web role


For the last 6 months I’ve wanted to talk about Microsoft Endpoint Protection (http://www.microsoft.com/en-us/download/details.aspx?id=29209), which is still in CTP as I write this. It’s a lesser-known plugin which exists for Windows Azure. For anybody that receives uploaded content, this should be a commonplace part of the design. In this piece I want to look at a pattern for rolling your own gateway with Endpoint Protection. It’s not ideal, because it literally is a virus scanner, enabling real time protection and certain other aspects, but it uses Diagnostics to surface issues that have taken place.

The files which are part of the endpoint protection plugin


So initially we’ll enable the imports:

<Import moduleName="Diagnostics" />
<Import moduleName="Antimalware" />
<Import moduleName="RemoteAccess" />
<Import moduleName="RemoteForwarder" />

You can see the addition of Antimalware here.

Correspondingly, our service configuration gives us the following new settings:

<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="<my connection string>" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation" value="North Europe" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware" value="true" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection" value="true" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans" value="false" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans" value="7" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans" value="120" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions" value="txt|rtf|jpg" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths" value="" />
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses" value="" />

The settings above enable Endpoint Protection for real time protection and scheduled scans. It’s obviously highly configurable, like most virus scanners, and in the background it will update all malware definitions securely from a Microsoft source.

Endpoint protection installed on our webrole


First thing we’ll do is download a free virus test file from http://www.eicar.org/85-0-Download.html. Eicar has ensured that this signature is picked up by most common virus scanners, so Endpoint Protection should recognise it immediately. I’ve tested this with the .zip file but any of them are fine.

The first port of call is setting up diagnostics to proliferate the event log entries. We can do this within our RoleEntryPoint.OnStart method for our web role.

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
//exclude informational and verbose event log entries
config.WindowsEventLog.DataSources.Add("System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3 or Level=4)]]");
//write to persisted storage every 1 minute
config.WindowsEventLog.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1.0);
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
Diagnostics info in Azure Management Studio


Okay, so in testing it looks like the whole process of cutting and pasting the file onto the desktop or another location takes about 10 seconds for Endpoint Protection to pick it up and quarantine the file. Given this we’ll set the bar at 20 seconds.

Endpoint protection discovers malware


I created a very simple ASP.NET web forms application with a file upload control. There are two ways to detect whether the file has been flagged as malware:

  1. Check to see whether the file is still around or has been removed and placed in quarantine
  2. Check the eventlog entry to see whether this has been flagged as malware.

We’re going to focus on No.2 so I’ve created a simple button click event which will persist the file. Endpoint protection will kick in within the short period so we’ll write the file to disk and then pause for 20 seconds. After our wait we’ll then check the eventlog and in the message string we’ll have a wealth of information about the file which has been quarantined.

bool hasFile = fuEndpointProtection.HasFile;
if (hasFile)
{
	string path = Path.Combine(Server.MapPath("."), fuEndpointProtection.FileName);
	fuEndpointProtection.SaveAs(path);
	// block here until we check endpoint protection to see whether the file has been delivered okay!
	Thread.Sleep(TimeSpan.FromSeconds(20));
	var log = new EventLog("System", Environment.MachineName, "Microsoft Antimalware");
	Label1.Text = path;
	foreach (EventLogEntry entry in log.Entries)
	{
		if (entry.InstanceId == 1116 && entry.TimeWritten > DateTime.Now.Subtract(new TimeSpan(0, 2, 0))
			&& entry.Message.ToLower().Contains(fuEndpointProtection.FileName.ToLower()))
		{
			Label1.Text = "File has been found to be malware and quarantined!";
		}
	}
}
When I upload a normal file


When I upload the Eicar test file


The event log entry should look like this; it contains details on the affected process, the fact that it is a virus, and some indication of where to get more information via a threat URL.




   The operation completed successfully.

   No additional actions required

   AV: 1.131.1864.0, AS: 1.131.1864.0, NIS:
   AM: 1.1.8601.0, NIS:

Okay, so this is a very tame example but it does prove the concept. In the real world you may even want a proper gateway which acts as a proxy and forwards the file onto a "checked" store if it succeeds. We looked at the two ways you can check whether the file has been treated as malware. The first, checking to see whether the file has been deleted from its location, is too non-deterministic: although "real time" means real time, we don't want to block, wait and time out on this. The second is better because we will get a report if it's detected. This being the case, a more hardened version of this example would entail building a class which treats the file write as a task and asynchronously pings back the user if the file has been treated as malware. Something like this could be written as an HttpModule or ISAPI filter that pursues the check and either continues with the request, or ends the request and returns an HTTP error code to the user with a description of the problems with the file.
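As a sketch of that last idea: the module below ends the request with an error if the upload was quarantined. The type name and the MalwareCheck helper are my assumptions, not a tested implementation; the helper would perform the same event log check (event id 1116) as the button click handler above.

```csharp
using System.Web;

// Sketch: an HttpModule that rejects requests whose uploaded file
// has been flagged by Endpoint Protection
public class MalwareGatewayModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.EndRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            // hypothetical helper wrapping the "Microsoft Antimalware"
            // event log check shown earlier
            if (MalwareCheck.WasQuarantined(context))
            {
                context.Response.Clear();
                context.Response.StatusCode = 403;
                context.Response.Write("The uploaded file was flagged as malware and quarantined.");
            }
        };
    }

    public void Dispose() { }
}

public static class MalwareCheck
{
    // placeholder for the event log check described in the post
    public static bool WasQuarantined(HttpContext context)
    {
        return false; // sketch only
    }
}
```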

Happy trails etc.

The issues you may have with the Azure co-located cache 1.7 and other caching gems

On Tuesday 10th July I did a talk for the UKWAUG on using the new co-located cache. Since we’ve been working caching into our HPC implementation successfully, I’ve become a dab hand at the dos and don’ts, which I wanted to publish.

In my previous post I spoke about the problem with AppFabric Server, but there are a few more problems which rear their heads, and rather than let you sit there scratching your heads for days I thought I’d write down the gotchas I presented to help you on your merry way. So if you want to read about the AppFabric Server issues, go to the earlier post here.

One thing that has stupidly caught me out is an issue which I created myself. It took me hours to realise what I was doing wrong, and it happened because I am a hack! When the cache starts up it leaves four marker files in the storage account you set up in your configuration at compile time. The files have names like f83f8073469549e1ba858d719238700f__Elastaweb__ConfigBlob, and all four being present means that cache configuration is complete and the cache has an awareness of where the nodes in the cluster are. These blobs live in a well-known storage container called cacheclusterconfig. The cache will reference the latest files in this container, which are time-ordered, so if the settings are overwritten by something else then don’t expect the health of your cache to be good! Each node has Base64 config data associated with it in this file, so the cluster won’t effectively exist if you’re using the same account for multiple cache clusters from different deployments. Be aware: don’t reuse an account in this way across staging, production and, as in my case, the devfabric!

A small factor you should be aware of is the time it takes to install the cache. I’ve run several hundred tests now and generally it takes an additional 9 minutes to install the cache services when you include the plugin. Be aware of this, because if you have time-dependent deployments it will impact them.

Using diagnostics with the cache is not that clearcut. In the spirit of including some code in this blogpost, here is my RoleEntryPoint.OnStart method.
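If you want to sanity-check what's sitting in that container, a quick sketch using the 1.7-era storage client might look like this (the connection string is a placeholder you'd fill in with the account configured for the cache):

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class CacheConfigCheck
{
    static void Main()
    {
        // the storage account configured for the cache at compile time
        var account = CloudStorageAccount.Parse("<your cache storage connection string>");
        var client = account.CreateCloudBlobClient();
        // the well-known container the co-located cache writes its config to
        var container = client.GetContainerReference("cacheclusterconfig");
        foreach (var item in container.ListBlobs())
        {
            var blob = (CloudBlob)item;
            // four marker blobs present means the cluster configuration is complete
            Console.WriteLine("{0} (modified {1})", blob.Name, blob.Properties.LastModifiedUtc);
        }
    }
}
```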

        public override bool OnStart()
        {
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                {
                    // Provide the configSetter with the initial value
                    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));

                    RoleEnvironment.Changed += (sender, arg) =>
                        {
                            if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any((change) =>
                                                                                                    (change.ConfigurationSettingName == configName)))
                            {
                                // The corresponding configuration setting has changed, so propagate the value
                                if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                                {
                                    // In this case, the change to the storage account credentials in the
                                    // service configuration is significant enough that the role needs to be
                                    // recycled in order to use the latest settings (for example, the 
                                    // endpoint may have changed)
                                    RoleEnvironment.RequestRecycle();
                                }
                            }
                        };
                });

            // tracing for the caching provider
            DiagnosticMonitorConfiguration diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
            diagConfig = CacheDiagnostics.ConfigureDiagnostics(diagConfig);
            //// tracing for asp.net caching diagnosticssink
            diagConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
            diagConfig.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(2);
            // performance counters for caching
            diagConfig.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(2);
            diagConfig.PerformanceCounters.BufferQuotaInMB = 100;
            TimeSpan perfSampleRate = TimeSpan.FromSeconds(30);
            diagConfig.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\AppFabric Caching:Cache(azurecoder)\Total Object Count",
                SampleRate = perfSampleRate
            });
            diagConfig.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\AppFabric Caching:Cache(default)\Total Object Count",
                SampleRate = perfSampleRate
            });
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagConfig);

            return base.OnStart();
        }

The first important line here is this:

diagConfig = CacheDiagnostics.ConfigureDiagnostics(diagConfig);
You’ll want to configure diagnostics for the cache to find out what’s going on. Unfortunately, with the default settings in place this will fail with a quota exception. There is a default size of 4000 MB set on the log limit and this call violates it by 200 MB. The quota property can only be decreased, not increased, so the fix is to change the default setting in the ServiceDefinition.csdef from:

      <LocalStorage name="Microsoft.WindowsAzure.Plugins.Caching.FileStore" sizeInMB="1000" cleanOnRoleRecycle="false" />

to:
      <LocalStorage name="Microsoft.WindowsAzure.Plugins.Caching.FileStore" sizeInMB="10000" cleanOnRoleRecycle="false" />

Notice also the addition of the performance counters. There are three sets, but I’ve illustrated the most basic and also most important metrics for us, which show the numbers of cache objects. The counters are well documented on MSDN. Anyway, hope you found this helpful. I’ll be publishing something else on memcache interop shortly, based on testing I’ve done in Java and .NET.

Tricks with IaaS and SQL: Part 2 – Scripting simple powershell activities and consuming the Service Management API in C# with Fluent Management

In the last blogpost we looked at how we could use powershell to build an IaaS deployment for SQL Server 2012. The usage was pretty seamless and it really lends itself well to scripted and unattended deployments of VMs. The process we went through showed itself wanting a little in that we had to build in some unwanted manual tasks to get a connection to the SQL Server. We looked at the provision of firewall rules, moving from Windows Authentication to Mixed Mode authentication and then adding a database user in an admin role.

The unfortunate fact is that this process can never be seamless (unlike PaaS) with the default gallery images since you cannot control the running of a startup script (nor would you want to). So to dive in we’ll look into building a powershell script that can do all of the above which can just be copied via remote desktop and executed.

The first part of the script will update the registry key so that we can test our SQL Server connection locally.

#Check and set the LoginMode reg key to 2 so that we can have mixed authentication
 Set-Location HKLM:\
 $registry_key = "SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQLServer"
 $item = (Get-ItemProperty -path $registry_key -name LoginMode).LoginMode
 If ($item -eq 1) {
     # This is Windows Authentication we need to update
     Set-ItemProperty -path $registry_key -name "LoginMode" -value 2
 }

When this is done we’ll want to open up the firewall port on the machine. Whether our goal is to use Windows Authentication or Mixed Mode, or only to expose the SQL Server to a Windows network that we create as part of an application (so only available internally), we’ll still need to open up that firewall port. We do this through the use of a COM object which allows us to set various parameters such as the port number, range and protocol.

# Add a new firewall rule - courtesy Tom Hollander
 $fw = New-Object -ComObject hnetcfg.fwpolicy2
 $rule = New-Object -ComObject HNetCfg.FWRule
 $rule.Name = "SQL Server Inbound Rule"
 $rule.Protocol = 6 #NET_FW_IP_PROTOCOL_TCP
 $rule.LocalPorts = 1433
 $rule.Enabled = $true
 $rule.Grouping = "@firewallapi.dll,-23255"
 $rule.Profiles = 7 # all
 $rule.Action = 1 # NET_FW_ACTION_ALLOW
 $rule.EdgeTraversal = $false
 # add the rule to the local firewall policy
 $fw.Rules.Add($rule)

Lastly, we will need to add a user that we can test our SQL Server with. This is done through SQL statements and stored procedures. You can see the use of sqlcmd here; this is by far the easiest way, although we could have used SMO to do the same thing.

# add the new database user
 sqlcmd -d 'master' -Q "CREATE LOGIN richard1 WITH PASSWORD='icanconnect900'"
 sqlcmd -d 'master' -Q "EXEC sys.sp_addsrvrolemember @loginame = N'richard1', @rolename = N'sysadmin'"

Take all of this and wrap it into a powershell file “.ps1”.

The point of this second post was to show that you could do exactly what we did in the first post programmatically as well. This is what we’ve done through a branch of our Fluent Management library, which will now support IaaS. One of the reasons we’ve been very keen to integrate IaaS programmatically is that we feel the hybrid scenarios of PaaS and IaaS are a great mix, so being able to make this mixture transactional in the same way is a good goal for us.

var manager = new SubscriptionManager(TestConstants.InsidersSubscriptionId);

So in one line of code we now have the equivalent of the powershell script in the first part. Note that this is a blocking call. Initially a 202 Accepted response is returned, and then we continue to poll in the background using the x-ms-request-id header as we previously did with PaaS deployments. On success Fluent Management will unblock and return.

From the code there are key messages to take away.

  1. We continue to use our management certificate with the subscription activity
  2. We need to provide a storage account for the VHD data disks
  3. We can control the size of the VM, which is a new thing for us to be able to do in code (normally the VmSize is set in the .csdef, but in this case we don’t have one, or a package)
  4. We have to have an already-existing cloud service to add the deployment to

In many of the previous posts on this blog we’ve looked at the Service Management API in the context of our wrapper, Fluent Management. The new rich set of APIs that has been released for Virtual Machines opens up good possibilities for doing everything that is easy within the CLI and powershell right now, enabled within an application.

Happy 4th July to all of our US friends (for yesterday!)

Copying Azure Blobs from one subscription to another with API 1.7.1

This is a great feature!

I read Gaurav Mantri’s excellent blog post on copying blobs from S3 to Azure Storage and realised that this was the feature we’d been looking for ourselves for a long time. The new 1.7.1 API enables copying from one subscription to another, intra-storage account, even inter-data centre, OR, as Gaurav has shown, between Azure and another storage repo accessible via HTTP. Before this was enabled, the alternative was to write a tool to read the blob into a local store and then upload it to another account, generating ingress/egress charges and adding a third unwanted wheel. You need to hit Azure on GitHub – as yet it’s not released as a NuGet package – so go here, clone the repo and compile the source.

More on code later but for now let’s consider how this is being done.

Imagine we have two storage accounts, elastaaccount1 and elastaaccount2, in different subscriptions, and we need to copy a package from one subscription (elastaaccount1) to another (elastaaccount2) using the new method described above.

Initially the API uses an HTTP PUT method with the new HTTP header x-ms-copy-source, which allows elastaaccount2 to specify an endpoint to copy the blob from. In this instance we’re assuming that there is no innate security on the blob and the ACL is opened up to the public. If that isn’t the case, a Shared Access Signature should be used, which can be generated fairly easily in code from the source account and appended to the URL to allow the copy to ensue on a non-publicly accessible blob.
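Generating that SAS with the 1.7.x storage client might look like the sketch below; the account, key and blob names are taken from the example above, and the 30-minute expiry is an arbitrary choice.

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SasForCopySource
{
    static void Main()
    {
        // the source account which owns the non-public blob
        var account = new CloudStorageAccount(
            new StorageCredentialsAccountAndKey("elastaaccount1", "<my shared key>"), false);
        var blob = account.CreateCloudBlobClient()
            .GetBlockBlobReference("vanilla/mypackage.zip");
        // a short-lived read-only shared access signature on the source blob
        string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30)
        });
        // append the SAS to the URI that goes in the x-ms-copy-source header
        Console.WriteLine(blob.Uri.AbsoluteUri + sas);
    }
}
```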

PUT http://elastaaccount2.blob.core.windows.net/vanilla/mypackage.zip?timeout=90 HTTP/1.1
x-ms-version: 2012-02-12
User-Agent: WA-Storage/1.7.1
x-ms-copy-source: http://elastaaccount1.blob.core.windows.net/vanilla/mypackage.zip
x-ms-date: Wed, 04 Jul 2012 16:39:19 GMT
Authorization: SharedKey elastaaccount2:<my shared key>
Host: elastaaccount2.blob.core.windows.net

This operation returns a 202 Accepted. The API will then poll asynchronously, since the copy is queued, using the HEAD method to determine the status. The product team state on their blog that there is currently no SLA, so the copy can sit in a queue without an acknowledgement from Microsoft, BUT in all our tests it is very, very quick within the same data centre.

HEAD http://elastaaccount2.blob.core.windows.net/vanilla/mypackage.zip?timeout=90 HTTP/1.1
x-ms-version: 2012-02-12
User-Agent: WA-Storage/1.7.1
x-ms-date: Wed, 04 Jul 2012 16:32:16 GMT
Authorization: SharedKey elastaaccount2:<mysharedkey>
Host: elastaaccount2.blob.core.windows.net

Bless the Fabric – this is an incredibly useful feature. Here is a class that might help save you some time now.

/// <summary>
/// Used to define the properties of a blob which should be copied to or from
/// </summary>
public class BlobEndpoint
{
	/// <summary>
	/// The storage account name
	/// </summary>
	private readonly string _storageAccountName = null;
	/// <summary>
	/// The container name
	/// </summary>
	private readonly string _containerName = null;
	/// <summary>
	/// The storage key which is used to access the account
	/// </summary>
	private readonly string _storageKey = null;

	/// <summary>
	/// Used to construct a blob endpoint
	/// </summary>
	public BlobEndpoint(string storageAccountName, string containerName = null, string storageKey = null)
	{
		_storageAccountName = storageAccountName;
		_containerName = containerName;
		_storageKey = storageKey;
	}

	/// <summary>
	/// Used to a copy a blob to a particular blob destination endpoint - this is a blocking call
	/// </summary>
	public int CopyBlobTo(string blobName, BlobEndpoint destinationEndpoint)
	{
		var now = DateTime.Now;
		// get all of the details for the source blob
		var sourceBlob = GetCloudBlob(blobName, this);
		// get all of the details for the destination blob
		var destinationBlob = GetCloudBlob(blobName, destinationEndpoint);
		// copy to the destination blob pulling the blob
		destinationBlob.CopyFromBlob(sourceBlob);
		// make this call block so that we can check the time it takes to pull back the blob
		// this is a regional copy so should be very quick even though it's queued, but still make this defensive
		const int seconds = 120;
		int count = 0;
		while (count < (seconds * 2))
		{
			// refresh the attributes so that we get the latest copy state
			destinationBlob.FetchAttributes();
			// if we succeed we want to drop out of this straight away
			if (destinationBlob.CopyState.Status == CopyStatus.Success)
				break;
			Thread.Sleep(500);
			count++;
		}
		//calculate the time taken and return
		return (int)DateTime.Now.Subtract(now).TotalSeconds;
	}

	/// <summary>
	/// Used to determine whether the blob exists or not
	/// </summary>
	public bool BlobExists(string blobName)
	{
		// get the cloud blob
		var cloudBlob = GetCloudBlob(blobName, this);
		try
		{
			// this is the only way to test
			cloudBlob.FetchAttributes();
		}
		catch (Exception)
		{
			// we should check for a variant of this exception but chances are it will be okay otherwise - that's defensive programming for you!
			return false;
		}
		return true;
	}

	/// <summary>
	/// The storage account name
	/// </summary>
	public string StorageAccountName
	{
		get { return _storageAccountName; }
	}

	/// <summary>
	/// The name of the container the blob is in
	/// </summary>
	public string ContainerName
	{
		get { return _containerName; }
	}

	/// <summary>
	/// The key used to access the storage account
	/// </summary>
	public string StorageKey
	{
		get { return _storageKey; }
	}

	/// <summary>
	/// Used to pull back the cloud blob that should be copied from or to
	/// </summary>
	private static CloudBlob GetCloudBlob(string blobName, BlobEndpoint endpoint)
	{
		string blobClientConnectString = String.Format("http://{0}.blob.core.windows.net", endpoint.StorageAccountName);
		CloudBlobClient blobClient = null;
		if (endpoint.StorageKey == null)
		{
			blobClient = new CloudBlobClient(blobClientConnectString);
		}
		else
		{
			var account = new CloudStorageAccount(new StorageCredentialsAccountAndKey(endpoint.StorageAccountName, endpoint.StorageKey), false);
			blobClient = account.CreateCloudBlobClient();
		}
		return blobClient.GetBlockBlobReference(String.Format("{0}/{1}", endpoint.ContainerName, blobName));
	}
}
The class itself should be fairly self-explanatory and should only be used with public ACLs, although the modification to generate a time-dependent SAS is trivial.

A simple test would be as follows:

  var copyToEndpoint = new BlobEndpoint("elastaaccount2", ContainerName, "<secret primary key>");
  var endpoint = new BlobEndpoint("elastaaccount1", ContainerName);
  bool existsInSource = endpoint.BlobExists(BlobName);
  bool existsFalse = copyToEndpoint.BlobExists(BlobName);
  endpoint.CopyBlobTo(BlobName, copyToEndpoint);
  bool existsTrue = copyToEndpoint.BlobExists(BlobName);

There you go. Happy trails etc.

Tricks with IaaS and SQL: Part 1 – Installing SQL Server 2012 VMs using Powershell

This blog post has been a long time coming. I’ve sat on IaaS research since the morning of the 8th June. Truth is I love it. Forget the comparisons with EC2 and the maturity of Windows Azure’s offering. IaaS changes everything. PaaS is cool, we’ve built a stable HPC cluster management tool using web and worker roles and plugins – we’ve really explored everything PaaS has to offer in terms of services over the last four years. What IaaS does is change the nature of the cloud.

  1. With IaaS you can build your own networks in the cloud easily
  2. You can virtualise your desktop environment and personally benefit from the raw compute power of the cloud
  3. You can hybridise your networks by renting cloud ready resources to extend your network through secure virtual networking
  4. Most importantly – you can make use of combinations of PaaS and IaaS deployments

The last point is an important one because the coupling between the two means well-groomed PaaS applications can make use of services that aren't provided out-of-the-box by PaaS. For example, there is a lot of buzz about using IaaS to host Sharepoint or SQL Server as part of an extended domain infrastructure.

This three part blog entry will look at the benefits of hosting SQL Server 2012 using the gallery template provided by Microsoft. I'll draw on our open source library Azure Fluent Management and powershell to show how easy it is to deploy SQL Server, and look at how you can tweak SQL should you need to set up mixed mode authentication, which isn't the default.

So the first question is: why use SQL Server when you have SQL Azure, which already provides all of the resilience and scalability you would need in a production application? Well … SQL Azure is a fantastic and very economical use of SQL Server, providing a logical model that Microsoft manages, but it's not without its problems.

  • Firstly, it's a shared resource, so you can end up competing for resources. Microsoft's predictive capability hasn't proved that accurate with SQL Azure, so it does sometimes have latency issues, which are the kinds of things that developers and DBAs go to great pains to avoid in a production application.
  • Secondly, being on contended shared infrastructure leads to transient faults, which can be more prolific than you would like, so you have to think about transient fault handling (ToPAZ, an Enterprise Application Block library, works pretty much out-of-the-box with your existing .NET System.Data codebase).
  • Thirdly, and most importantly, SQL Azure is a subset of SQL Server and doesn't provide an all-encompassing set of services. For example, there is no XML support in SQL Azure, certain system stored procedures and synonyms are missing, and you can't have linked servers, so you have to use multiple schemas to compensate. These are just a few things to worry about. In fact, whilst most databases can be moved across, some can't without a significant migration effort and a lot of thought and planning.

IaaS offers an alternative now. In one fell swoop we can create a SQL Server instance, load some data and connect to it. We'll do exactly that here and in the second part, using our fluent management library. The third part will describe how we can provide failover by configuring an Active-Passive cluster between two SQL Server nodes.

Let’s start with some powershell now.

If you haven't already, download the WAPP powershell CmdLets. The best reference on this is Michael Washam's blog. If you were at our conference in London on June 22nd you would have seen Michael speak about this and will have some idea of how to use powershell and IaaS. Follow the setup instructions and then import your subscription settings from a .publishsettings file that you've previously downloaded. The CmdLet to do the import is as below:

> Import-AzurePublishSettingsFile

This will import the certificate from the file and associate its context with the powershell session and each of the subscriptions.

If you haven’t downloaded a .publishsettings file for any reason use:

> Get-AzurePublishSettingsFile

and enter your live id online. Save the file in a well-known location and behind the scenes the fabric has associated the management certificate in the file to the subscriptions that your live id is associated with (billing, service or co-admin).

A full view of the CmdLets is available here:


In my day-to-day work I tend to use the Cerebrata CmdLets over WAPP, so listing CmdLets gives me an extended list. The Microsoft WAPP CmdLets load themselves in as a module whereas Cerebrata's are a snap-in. To cut out the noise and get a list of only the WAPP CmdLets, enter the following:

> Get-Command -Module Microsoft.WindowsAzure.Management

Since most of my work with WAPP is test only and I use Cerebrata for production I tend to remove most of the subscriptions I don’t need. You can do this via:

> Remove-AzureSubscription -SubscriptionName

If there is no default subscription then you may need to select a subscription before starting:

> Select-Subscription -SubscriptionName

Next we'll have to set some variables which will enable us to deploy the image and set up a VM. From top to bottom, let's describe the parameters. $img contains the name of the gallery image which will be copied across to our storage account. Remember this is an OS image 30GB in size; when the VM is created we'll end up with C: and D: drives as a result. C: is durable and D: volatile. It's important to note that a 30GB page blob is copied to your storage account for your OS disk and locked – an infinite blob lease locks the drive so that only your VM image can write to it with the requisite lease id. $machinename (non-mandatory), $hostedservice (cloud service), $size and $password (your default Windows password) should be self-explanatory. $medialink is the storage location of the OS disk blob.

 > $img = "MSFT__Sql-Server-11EVAL-11.0.2215.0-05152012-en-us-30GB.vhd"
 > $machinename = "ELASTASQLVHD"
 > $hostedservice = "elastavhd"
 > $size = "Small"
 > $password = "Password900"
 > $medialink = "http://elastacacheweb.blob.core.windows.net/vhds/elastasql.vhd"

In this example we won't be looking at creating a data disk (a durable E: drive); however, we can attach up to 15 data disks to our image, each up to 1TB. With SQL Server it is better to create a data disk so that the database can grow beyond the confines of the OS disk. In part two of this series we'll be looking at using Fluent Management to create the SQL Server, which will create a data disk to store the .mdf/.ldf SQL files.
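As a taster, a new data disk can be attached to an existing virtual machine from powershell with the WAPP CmdLets; the size, label and LUN below are illustrative values:

> Get-AzureVM -ServiceName $hostedservice -Name $machinename | Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "sqldata" -LUN 0 | Update-AzureVM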

More information can be found here about using data disks:


What the article doesn't tell you is that if you inadvertently delete your virtual machine (guilty) without detaching the disk first, you won't be able to delete the underlying blob because of the infinite lease. If you get into this situation (which is easy to do!) you can delete it via powershell with the following CmdLets.

> Get-AzureDisk | Select DiskName

This gives you the name of the attached data or OS disks. If you use Cerebrata’s cloud storage studio you should be able to see these blobs themselves but the names differ. The names are actually pulled back from a disks catalog associated with the subscription via the Service Management API.

The resulting output from the above CmdLet can be fed into the following CmdLet to delete an OS or data disk.

> Remove-AzureDisk -DiskName bingbong-bingbong-0-20120609065137 -DeleteVHD
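If you have a few orphaned disks to clean up, the two CmdLets pipe together. The AttachedTo filter below is a precaution I'm assuming you'll want, so that disks still attached to a virtual machine are left alone:

> Get-AzureDisk | Where-Object { $_.AttachedTo -eq $null } | Remove-AzureDisk -DeleteVHD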

All that remains now is to run the command that creates the virtual machine. Michael Washam has added a -Verbose switch to the CmdLet which helps us understand how the VM is formed; the XML it emits is fairly self-explanatory. First set up the VM config, then the Windows OS config, and finally pipe everything into the New-AzureVM CmdLet, which will create a new cloud service (or use an existing one) and create an isolated role for the VM instance (each virtual machine lives in a separate role).

> New-AzureVMConfig -name $machinename -InstanceSize $size -ImageName $img -MediaLocation $medialink -Verbose | Add-AzureProvisioningConfig -Windows -Password $password -Verbose | New-AzureVM -ServiceName $hostedservice -Location "North Europe" -Verbose

By default Remote Desktop is enabled with a random public port assigned and a load balanced redirect to a private port of 3389.

Remote Desktop Endpoint

In order to finalise the installation we need to set up another external endpoint which forwards requests to port 1433, the port used by SQL Server. This is only half of the story, because I also need to add a firewall rule on the VM to allow inbound TCP access to this port, as per the image.
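The endpoint can also be added from powershell rather than the portal; the endpoint name and public port below are illustrative choices:

> Get-AzureVM -ServiceName $hostedservice -Name $machinename | Add-AzureEndpoint -Name "SQL" -Protocol tcp -LocalPort 1433 -PublicPort 1433 | Update-AzureVM

and the matching firewall rule can be scripted on the virtual machine itself with netsh instead of clicking through the firewall UI:

> netsh advfirewall firewall add rule name="SQL Server 1433" dir=in action=allow protocol=TCP localport=1433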

SQL Server Endpoint

Opening a firewall port for SQL Server

You've probably realised by now that the default access to SQL Server is via Windows credentials. Whilst DBAs will be over the moon with this, for my simple demo I'd like you to connect to this SQL instance via SSMS (SQL Server Management Studio), so we'll need to update a registry key via regedit to enable mixed mode authentication:


And update the LoginMode (DWORD) to 2.

Once that's done I should be able to start up SSMS on the virtual machine and create a new login called "richard" with password "Password_900", removing all of the password rules and expiries so that I can test this. In addition I need to ensure that my user is placed in an appropriate role (sysadmin is the easy way to prove the concept) so that I can then go ahead and connect.
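The same login setup can be scripted in T-SQL rather than clicked through in SSMS. This sketch mirrors the demo values above; turning the password policy off is strictly for this proof of concept:

  CREATE LOGIN [richard] WITH PASSWORD = 'Password_900', CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF;
  ALTER SERVER ROLE [sysadmin] ADD MEMBER [richard];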

Logging into SQL Server

I then create a new database called "icanconnect" and set it as the default database for my new user.

When I then connect through SSMS on my laptop I should be able to see the database, as per the image below.

Seeing your database in SSMS

There are a lot of separate activities here, which we'll roll into a single powershell script in the second part of this series. I realise that this is getting quite long, but I was keen to highlight the steps involved without information overload. In part two we'll look at how Azure Fluent Management can be used to do the same thing, and how we can script the various activities described here to shrink everything into a single step.

Happy trails etc.