Practical Microsoft Azure Active Directory Blog Series

I finally had a chance to play with Microsoft Azure Active Directory in a recent project. I found the experience to be very interesting – Azure AD itself is an amazing, powerful product with a lot of potential. It certainly has a few rough edges here and there, but it’s pretty clear Microsoft are putting a lot of effort into it since it forms the cornerstone of how they authenticate all of their services, including Office 365.

Azure AD gives you the ability to securely manage a set of users and adds the benefits of two-factor authentication (2FA), single sign-on across applications, multi-tenancy support and the ability to allow external organisations to authenticate against your application.

This blog series will outline the minimum set of steps that you need to perform to quickly and easily add Azure AD authentication to an existing ASP.NET MVC 5 (or 3 or 4) site (or a new one if you select the No Authentication option when creating it) as well as configure things like API authentication, role authorisation, programmatic logins and deployments to different environments.

There are already tools and libraries out there for this – why are you writing this series?

Microsoft have made it fairly easy to integrate Azure AD authentication with your applications by providing NuGet packages with most of the code you need and also tooling support to configure your project in Visual Studio. This is combined with a slew of MSDN and TechNet posts covering most of it.

When it comes to trying to understand the code that is added to your solution, however, things become a bit tricky as the documentation is hard to navigate unless you want to spend a lot of time on it. Also, if you have Visual Studio 2013 rather than Visual Studio 2012 you can only add authentication to a new app as part of the File -> New Project workflow by choosing the Organizational Authentication option:

Visual Studio: File > New Project > ASP.NET > Change Authentication > Organizational Authentication

If you have an existing ASP.NET web application and you are using Visual Studio 2013 then you are out of luck.

Furthermore, the default code you get requires you to have Entity Framework and a database set up, despite the fact this is only really required if you are using multiple Azure AD tenants (unlikely unless you are creating a fairly hardcore multi-tenant application).

If you then want to add role-based authorisation based on membership in Azure AD groups then there is no guidance for this either.

For these reasons I’m developing a reference application that contains the simplest possible implementation of adding these features in an easy to follow commit-by-commit manner as a quick reference. I will also provide explanations of what all the code means in this blog series so you can understand how it all works if you want to.

You can see the source code of this application here and an example deployment here. The GitHub page outlines information such as example user logins and what infrastructure I set up in Azure.

What are you planning on covering?

This will be the rough structure of the posts I am planning in no particular order (I’ll update this list with links to the posts over time):

I’m notoriously bad at finishing blog series that I start, so no promises on when I will complete this, but I have all of the code figured out in one way or another and the GitHub repository should at least contain commits with all of the above before I finish the accompanying posts so *fingers crossed*! Feel free to comment below if you want me to expedite a particular post.

More resources

I came across some great posts that have helped me so far so I thought I’d link to them here to provide further reading if you are interested in digging deeper:

Making Intent Clear / Derived Values [Automated Testing Series]

This is part of my ongoing Automated Testing blog series:

Making Intent Clear

I think one of the most important things when writing tests (apart from consistency) is that they are clear in intent. If you buy into the notion that tests form part of the documentation of your system then it’s really important, like all good documentation, that the tests are both readable and understandable.

I think there are a number of techniques that can help with this in various situations and there are three in particular that I will be covering in this sub-section of the blog series. I have already covered test naming and I think that has a big impact on clarity of intent.

Derived Values

There are a number of excellent blog posts by Mark Seemann (@ploeh) in his zero-friction TDD series that I have found useful in my ongoing research and one in particular that really resonated with me was the concept of derived values.

Consider the following code:

public static class StringExtensions
{
    public static string ReverseString(this string str)
    {
        return string.Join("", str.Reverse().ToArray());
    }
}

public class NaiveTest
{
    [Test]
    public void GivenAString_WhenInverting_ThenReversedStringWillBeReturned()
    {
        const string str = "a string";
        var result = str.ReverseString();
        Assert.That(result, Is.EqualTo("gnirts a"));
    }
}

public class DerivedValueTest
{
    [Test]
    public void GivenAString_WhenInverting_ThenReversedStringWillBeReturned()
    {
        const string str = "a string";
        var expectedResult = str.Reverse().ToArray();

        var result = str.ReverseString();

        Assert.That(result, Is.EqualTo(expectedResult));
    }
}

public class DataDrivenTest
{
    [TestCase("", "")]
    [TestCase("a", "a")]
    [TestCase("ab", "ba")]
    [TestCase("longer", "regnol")]
    [TestCase("a string with space", "ecaps htiw gnirts a")]
    [TestCase("num3rics&punctua10n!@$", "$@!n01autcnup&scir3mun")]
    public void GivenAString_WhenInverting_ThenReversedStringWillBeReturned(string input, string expectedResult)
    {
        var result = input.ReverseString();

        Assert.That(result, Is.EqualTo(expectedResult));
    }
}

This is a fairly contrived example, but it helps illustrate a few things:

  • It’s hard to infer understanding from the NaiveTest at a glance – you can eventually reason about the relationship between the input and the output from the name of the test combined with common sense, but it’s not easy and thus I don’t think it’s a great test (it’s still clearly AAA so it’s certainly not awful).
  • The DerivedValueTest is what Mark was describing – this is much better because the relationship between the input and result is very clear in the first two lines of the test and you immediately know a) what is being tested and b) how it should work.
    • Of note is that the implementation is the same as the real implementation – this could be a problem if the developer decides to simply copy the implementation into the test or vice versa
      • Interestingly, by writing the test using proper TDD it wouldn’t matter as much that the expected value is derived the same way as the implementation, because in writing the test you would see it fail in the “Red” step and at that time verify that the string being asserted in the test output was in fact the correct reversed string
      • The fact you are then relying on the developer verifying that the result being asserted was correct at the time of writing the test reminds me somewhat of the notion of approval tests (which I find myself using a fair bit to perform complex assertions that can’t easily be expressed in code, but can be easily reasoned “by eye”)
      • It occurs to me that if you were only testing a subset of some complex functionality that you would only need to include a subset of the implementation for the test
      • If you have a team that isn’t disciplined in writing their tests in a TDD fashion (or at least verifying the test is definitely correct) then this approach might make it easier to introduce incorrect tests that are a copy of the implementation and don’t actually test anything (hopefully code review would pick this up though)
    • Where possible you could try and include an alternate implementation of the code under test in your test (with a focus on the implementation being readable and understandable), but even in this case I still think the “Red” step mentioned above is important to make sure you didn’t have an error in your alternative implementation
  • The DataDrivenTest in my opinion is the better test in this case, not just because it provides better code coverage by trying multiple values (since this could easily have been done for the derived value test as well), but also because:
    • The relationship between input and output is made clear by their proximity and the fact that there are simple examples as well as more complex ones (the simple ones help the viewer immediately grok the relationship)
      • I feel that the “proximity” part is the most important bit here (assuming that you can grok the relationship)
      • I think the proximity in the DerivedValueTest is an important factor as well to help with immediate understanding
    • I suspect the edge cases in the above example could go into their own test so that the test name can more clearly reflect the edge case being tested
    • This approach won’t work for all situations – sometimes the logic being tested is complex enough that having the input and expected result side-by-side still won’t allow the reader to glean understanding about the relationship and it’s important to show how the expected result is derived
    • Be pragmatic – use the right approach in the right situation – derived values is sometimes useful and sometimes showing a series of {input -> expected result} is clearer – I’d say the main thing to be wary of is tests that simply have a value in the final assert and it’s not clear how that value was derived

There is a slight variation to the DataDrivenTest above that I sometimes come across that is also worth mentioning – complex example generation. It’s a strategy to avoid the situation described above, where showing the derived value involves duplicating the implementation logic, in situations where it’s really complex to work out that logic but easy(ier) to come up with an example of the logic in action. I often find myself using this technique for date logic – writing the date logic as part of the test never gives me a lot of confidence since it’s so darn complex to figure out (I hate programming date/time logic). In these situations I like to pull up a calendar and pick some candidate examples for the logic I’m trying to implement.

A couple of examples are shown below, pulled from a codebase I work on (with some tweaks to generalise the second test so it’s non-identifying):

    // From
    [TestCase("2012-11-04 12:20:06", 1352031606)]
    [TestCase("2012-11-03 23:59:59", 1351987199)]
    [TestCase("2012-02-29 13:00:01", 1330520401)]
    public void GivenDate_WhenConvertingToUnixTimestamp_ItShouldBeCorrect(string inputDate, int expectedTimestamp)
    {
        var date = DateTime.Parse(inputDate);

        var timestamp = date.ToUnixTimestamp();

        Assert.That(timestamp, Is.EqualTo(expectedTimestamp));
    }

    /* unix $ cal 8 2004
     *      August 2004
       Su Mo Tu We Th Fr Sa
        1  2  3  4  5  6  7
        8  9 10 11 12 13 14
       15 16 17 18 19 20 21
       22 23 24 25 26 27 28
       29 30 31
     */
    // Day before date during weekend
    [TestCase("2004-08-09", "2004-08-06", true)]
    // Day before date during week
    [TestCase("2004-08-10", "2004-08-09", true)]
    // Consecutive business days
    [TestCase("2004-08-09", "2004-08-09", true)]
    [TestCase("2004-08-09", "2004-08-10", true)]
    [TestCase("2004-08-09", "2004-08-11", false)]
    [TestCase("2004-08-09", "2004-08-12", false)]
    // Include Weekend
    [TestCase("2004-08-12", "2004-08-13", true)]
    [TestCase("2004-08-12", "2004-08-14", true)]
    [TestCase("2004-08-12", "2004-08-15", true)]
    [TestCase("2004-08-12", "2004-08-16", false)]
    // Start on Weekend
    [TestCase("2004-08-13", "2004-08-16", true)]
    [TestCase("2004-08-13", "2004-08-17", false)]
    public void WhenValidatingConnectionDate_ThenThereShouldBeAnErrorOnlyIfTheDateIsLessThan2BusinessDaysAway(string now, string date, bool expectError)
    {
        _model.ConnectionDate = DateTime.Parse(date);
        var dateTimeProvider = DateTimeProviderFactory.Create(DateTime.Parse(now));
        var modelState = new ModelStateDictionary();

        _model.Validate(modelState, dateTimeProvider);

        if (expectError)
            Assert.That(modelState[modelStateKey].Errors, Has.Count.GreaterThan(0));
        else
            Assert.That(modelState.ContainsKey(modelStateKey), Is.False);
    }

Props to my colleague Toby Moore for coming up with the idea of using the Unix cal command to generate calendars for pasting in comments above the examples.

In these examples, there is a cognitive load to figure out the relationship between input and expected result, but I don’t think there is a silver bullet in these cases – the test name, multiple examples and the comments above the tests (I think) help anyone maintaining the tests to figure out what is going on. Either way there would be a cognitive load to get your head around the logic since it’s really complex and in this case it’s about trying to minimise that load.
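
For reference, here is one possible shape of the ToUnixTimestamp extension method that the first example above exercises – this is just a sketch (the real implementation in that codebase may well differ), but it satisfies the example dates and timestamps shown:

public static class DateTimeExtensions
{
    private static readonly DateTime UnixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    // Treats the given date as UTC and returns whole seconds since the Unix epoch
    public static int ToUnixTimestamp(this DateTime date)
    {
        return (int)(date - UnixEpoch).TotalSeconds;
    }
}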

Acceptance Tests Structure [Automated Testing Series]

This is part of my ongoing Automated Testing blog series:

Acceptance Tests Structure

When writing high-level acceptance tests (as opposed to unit tests) I will always try to use separate methods for the Given, When and Then, since usually the Given and possibly the When are more complex so there might need to be multiple methods for each. My favourite framework for writing tests with this structure by far is Bddfy (disclaimer: I am a core contributor of the TestStack organisation). When I say high-level acceptance tests I usually mean automated UI / full-system tests, but they could also be complex subcutaneous tests (exactly where subcutaneous tests fit in is something I haven’t quite settled on yet).

I’ve said before, and I still maintain, that consistency is the most important aspect when it comes to keeping a software application maintainable, so I think that within a particular set of tests, if you are writing some as single-method Arrange, Act, Assert tests then you shouldn’t mix those tests with something like Bddfy since it’s wildly different. I feel that the techniques I described for structuring tests using test-per-class in the last post in the series are OK to mix with AAA tests though, as I discussed in that post.

The above two paragraphs have led me to the following thoughts:

  • I keep my high-level acceptance tests in a separate project from my unit/integration/etc. tests since they are:
    • Inconsistently specified as discussed above
    • As seen below the way I structure the tests into namespaces / folders and the way I name the test class is very different too
    • They have a different purpose and intent – one is to check your implementation is sound / help design the technical implementation and the other is to specify/check the business requirements of the system i.e. the concept of an executable specification
  • If possible use something like Bddfy and a Specification base class (see below) that allows you to specify the implementation of your scenario
    • Yes I know about SpecFlow, but I don’t think that the maintenance overhead is worth it unless you actually have your product owner writing the specifications, but by all accounts I’ve heard (and based on my experiences) it’s tricky for them to get it right and the developers end up writing the scenarios anyway – do yourself a favour and use a framework that is built for developers and get the devs to sit with the product owner to write the test – that’s what they are good at!
    • One of the many cool things about Bddfy is its reporting functionality – out of the box it generates an HTML report, but it’s also flexible enough to allow you to define your own output; I think this fits in nicely with the idea of an executable specification
  • I’ve used the base class shown below to make it really easy to define Bddfy tests (then you simply need to specify methods according to the Bddfy conventions and it will automatically pick them up)
    • If you want to reuse Given’s, When’s or Then’s then simply use inheritance (e.g. you might define an AuthenticationUserSpecification that extends Specification and provides a GivenUserIsAuthenticated method)
    • If you need to use a data-driven test then see below for an example of how to do it

Basic Specification base class

You can add any setup / teardown that is required to this for your purposes or wrap the run method code if needed (e.g. for automated UI tests I catch any exception from the this.BDDfy() call and take a screenshot).

public abstract class Specification
{
    [Test]
    public virtual void Run()
    {
        this.BDDfy();
    }
}

For a more advanced example of this base class including a generic version that identifies the SUT and provides auto-mocking (in this case the tests are unit tests rather than acceptance tests) check out the TestStack.Seleno tests.

Example of extracting common parts of scenarios

Here is an example of how you might pull out common logic into a customised specification base class. In this case it is demonstrating what it might look like to have a base class for when the scenario starts with the user not being logged in when using the TestStack.Seleno library.

public abstract class UnauthenticatedSpecification : Specification
{
    protected MyAppPage InitialPage;

    public void GivenNotLoggedIn()
    {
        // Use Seleno to browse to the initial page of the application
        InitialPage = Host.Instance
            .NavigateToInitialPage<MyAppPage>();
    }
}

Example test class

Here is an example of what an actual test class might look like. In this case it is a test that ensures a user can register for an account on the site and extends the example UnauthenticatedSpecification above. There is some code missing here about how the page objects are implemented, but it’s fairly readable so you should be able to follow what’s happening:

public class ValidRegistration : UnauthenticatedSpecification
{
    private RegistrationViewModel _model;
    private MyAppPage _page;
    private Member _member;

    public void AndGivenValidSubmissionData()
    {
        _model = GetValidViewModel();
    }

    public void AndGivenUserHasntAlreadySignedUp()
    {
        // In this instance the Specification base class has helpfully exposed an NHibernate Session object,
        //  which is connected to the same database as the application being tested
        Session.Delete(Session.Query<Member>().SingleOrDefault(m => m.Email == _model.Email));
    }

    public void WhenVisitingTheJoinPageFillingInTheFormAndSubmitting()
    {
        _page = InitialPage
    }

    public void ThenTheMemberWasCreated()
    {
        _member = Session.Query<Member>()
            .Single(m => m.Email == _model.Email);
    }

    public void AndTheMembersNameIsCorrect()
    {
    }

    public void AndTheMemberCanLogIn()
    {
            .Successfully(_model.Email, _model.Password);
    }

    private RegistrationViewModel GetValidViewModel()
    {
        // This is using the NBuilder library, which automatically populates public properties with values
        // I'm overriding the email and password to make them valid for the form submission
        return Builder<RegistrationViewModel>.CreateNew()
            .With(x => x.Email = "email@domain.tld")
            .With(x => x.Password = "P@ssword!")
            .Build();
    }
}

Demonstration of data-driven test

I mentioned above that it was possible to have a data-driven test with the Specification base class. I can do this by using a hack that Jake Ginnivan showed me when we were creating acceptance tests for GitHubFlowVersion for XUnit and a technique I have found works for NUnit. I’ve put a full code example in a gist, but see below for an abridged sample.

I should note I’ve only tried these techniques with the ReSharper test runner so it’s possible they function differently with other runners. Another option would be to not inherit the base class for the situations where you need data-driven tests, but I typically find that I would then lose a bunch of useful setup/functionality when doing that; YMMV.


You might have noticed I set the Run method in the Specification base class to virtual. If you do this then you can do the following in your test class and it will ignore the test method in the base class and use the data-driven one in this class – it does show the base test up as being ignored though – you could also assign a category to the test and not run that category:

    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
    }

    protected override void Run() { }


You don’t have to have the base Run method virtual for this one (but it doesn’t hurt if it is either). Also, the test from the base class comes up as Inconclusive in the ReSharper test runner (just ignore it):

    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
    }

    public new void Run() {}

Namespaces / folder structure

When structuring the acceptance test project I like to have a folder/namespace called Specifications and underneath that a folder for each feature being tested. Then there can be a file within each feature for each scenario being tested (I generally name the scenario with a short name to easily identify it).

So, for instance the valid registration test above might be found at Specifications.Registration.ValidRegistration.

General Test Structure [Automated Testing Series]

This is part of my ongoing Automated Testing blog series:

General Test Structure

Recently I found myself experimenting with demon-coder, fellow Readifarian, and all-round-nice-guy Ben Scott (o/ – that’s for Ben; he will understand) on a way to extract shared Given, When and/or Then sections of tests, using a pattern whereby the Given and possibly When of a test happen in the test setup (or constructor if using XUnit). This allows each assertion (or Then) that you want to make to be in a separate test method and avoids both the problem of multiple somewhat/completely unrelated asserts in a single test and massive duplication of test setup.

I realise there are other ways of dealing with the duplication such as abstracting common logic into private methods (this is the technique I have used in the past), but I’ve found the above solution to be much nicer / cleaner and clearer in intent. It’s also worth noting that there are frameworks that directly support testing with the Given, When, Then structure and I cover that further in a later post in this series (as well as why I don’t just use them all the time).
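
To make the shape of this concrete, here is a small illustrative sketch (using NUnit and a made-up Discounter class – it’s not one of the linked samples) where the Given and When happen in the [SetUp] and each Then is its own slim test method:

// Hypothetical class under test, purely for illustration
public class Discounter
{
    public decimal Apply(decimal price, decimal percent)
    {
        return price - price * percent / 100m;
    }
}

[TestFixture]
public class GivenAPriceOf100_WhenApplyingA10PercentDiscount
{
    private decimal _result;

    [SetUp]
    public void Setup()
    {
        // Given a price of $100 and a 10% discount, When applying the discount
        _result = new Discounter().Apply(100m, 10m);
    }

    [Test]
    public void ThenThePriceIsReduced()
    {
        Assert.That(_result, Is.LessThan(100m));
    }

    [Test]
    public void ThenTenPercentIsTakenOff()
    {
        Assert.That(_result, Is.EqualTo(90m));
    }
}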

I’ve created a set of code samples to illustrate the different techniques at

  • 01_Implementation.cs contains an example mapper class to be tested – it is mapping from some sort of measurement of a value that has been taken (say, from an API or a hardware device) with a confidence and some kind of identifier to an object that represents the identity (broken down into component parts), the measurement value and an adjusted value (based on the confidence). It’s a fairly contrived example, but hopefully it’s understandable enough that it illustrates my point.
  • 02_MultipleAsserts.cs shows a single Arrange Act Assert style test that contains multiple asserts for each of the mapped properties and has description strings for each assert so that you can tell which assert was the one that failed.
  • 03_RelatedMultipleAsserts.cs partially addresses the main problem in the first test (multiple somewhat unrelated asserts) by grouping the related asserts together – this introduces repetition in the tests though.
  • 04_AbstractedCommonCode reduces the duplication by abstracting out the common logic into private methods and provides a separate test for each assert – because of this the assertions don’t need a description (since there is one assert per test), but the test class is a lot more verbose and there is duplication both across test names and test content.
  • 05_GivenAndWhenInSetup.cs demonstrates the technique I describe above; in this case the constructor of the class (it also could have been a [SetUp] method) contains both the Arrange/Given and Act/When part of the test and each test method is both very slim (containing just the assertion) and is named with just the Then.
    • In NUnit you can use [SetUp] and in XUnit the constructor, but it’s worth noting that this means the code will run for every test (which is usually fine). However, if the code you are testing takes a reasonable amount of time to run then you can use a constructor or [TestFixtureSetUp] in NUnit, or in XUnit either IUseFixture<TFixture> or a static constructor (the latter if you also need the instance constructor – say, to perform the When or to allow slightly different behaviour across inherited classes – since the fixtures get injected after the constructor is called).
    • It’s worth noting this class is still a lot bigger than the single AAA test, but for a situation like this I feel it’s a lot more understandable and better encapsulates the behaviour being tested; this technique also comes into its own when there is duplication of Given and/or When logic across multiple AAA tests (you can use inheritance to reuse or extend the Given/When code)
  • 06_ConcernForBaseClass.cs shows a variation of the above strategy that provides a common base class that can be used in tests to help make the Given and When code more explicit as well as identifying what the subject under test is.

Another technique to reduce the setup of tests is to use AutoFixture’s auto data attributes (XUnit and NUnit are supported) to inject any variables that you need into the test. I’m not going to cover that in this post at this stage because a) I haven’t used it in anger and b) I suspect it’s too complex for teams that aren’t all experienced (i.e. it’s a bit too “magic”). It is very cool though so I highly encourage you to at least evaluate it as an option.

There are a number of things to note about what I’ve been doing with this technique so far:

  • We still managed to keep the convention of having a test described by Given (where appropriate; sometimes you don’t need the Given), When and Then by ensuring that the combination of the class name and the test name cater for them
    • e.g. a class called GivenX_WhenY (or just WhenY) with a bunch of test methods called ThenZ, or a class called GivenX with a bunch of test methods called WhenY_ThenZ
  • You can reuse and extend common parts of your test logic by using inheritance e.g. define a base class with common Given and/or When and extend it for different variations of the Given and either reuse Then’s from the base class and/or define new ones for each inheriting class
    • I’ve seen some people use this technique with 5 levels deep of nested classes with one line of code being reused in some of the hierarchy; while I’m a massive fan of DRY I think that there is a point where you can go overboard and in the process lose understandability – 5 level deep nested test classes is in my opinion completely overboard
  • An interesting side-effect of this approach is that rather than having one test class per thing being tested you end up with multiple classes
    • I have always followed a convention of putting a test file in the same namespace as the class being tested (but in the Tests project and post-fixed with Tests) e.g. MyApp.Commands.CreateInvoiceCommand would be tested by MyApp.Tests.Commands.CreateInvoiceCommandTests, but that doesn’t work when you have multiple test files, so what I’ve taken to doing is making a folder at MyApp.Tests\Commands\CreateInvoiceCommandTests, which contains all the test classes
    • This allows me to mix and match folders (where needed) and single test classes (sometimes a single class is enough so I don’t bother creating a folder) while keeping it clear and consistent how and where to find the tests
  • Not all tests need to be written in this way – sometimes a simple, single AAA test (or a bunch of them) is enough
    • As you can see from the above code samples the AAA tests tend to be terser and quicker to write so when there isn’t much duplicated logic across tests and you only need one assertion or the assertions are one logical assertion (i.e. belong in a single test) then there is no reason to stray from single AAA tests in my opinion
    • I don’t feel that a combination of AAA tests and the common setup tests cause a consistency issue because it’s fairly easy to trace what’s happening and the common setup logic falls within the normal bounds of what you might have for AAA tests anyway
    • I find it’s easier to create data-driven tests using single method tests because you have so much more flexibility to do that using NUnit and XUnit – if you find a good way to do it with test-per-class rather than test-per-method let me know!
      • NUnit’s TestFixture attribute goes some of the way, but it’s not as powerful as something like TestCase or TestCaseSource
  • There is a base class (ConcernFor) that we were using for some of the tests that I included in the last code sample above

ChameleonForms 1.0 Released

I’m incredibly excited and proud to finally announce the release of 1.0 of the ChameleonForms library I’ve been working on with Matt Davies. My blog has been fairly quiet the last couple of months while I’ve poured time and energy into finally getting ChameleonForms to 1.0.


(Apologies; I’m releasing months of built-up anticipation and excitement here).

I’m biased of course, but I think this library is amazing to use and results in immensely more maintainable form generation code when using ASP.NET MVC. It extends on the knowledge that MVC developers would have in generating forms using the (already pretty awesome) built-in stuff, but adds the things I think are missing. For me, this library epitomises over 7 years of exploration in the best way to do web-based forms and I’m excited to be able to share the beginnings of my current vision via this library.

What is ChameleonForms?

In short, ChameleonForms takes away the pain and repetition of building forms with ASP.NET MVC by following a philosophy of:

  • Model-driven defaults (e.g. enum is drop-down, [DataType(DataType.Password)] is password textbox) – see the view model sketch just after this list
  • DRY up your forms – your forms will be quicker to write and easier to maintain and you won’t get stuck writing the same form boilerplate markup form after form after form
  • Consistent – consistency of the API and form structure within your forms and consistency across all forms in your site via templating
  • Declarative syntax – specify how the form is structured rather than worrying about the boilerplate HTML markup of the form; this has the same beneficial effect as separating HTML markup and CSS
  • Beautiful, terse, fluent APIs – it’s a pleasure to read and write the code
  • Extensible and flexible core – you can extend or completely change anything you want at any layer of ChameleonForms and you can drop out to plain HTML at any point in your form for those moments where pre-prepared field types and templates just don’t cut it
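
As a rough illustration of the model-driven defaults point above, this is the kind of view model (a hypothetical example, not taken from the library’s docs) that those defaults key off – the enum property is rendered as a drop-down and the [DataType] annotation gives you a password textbox without any extra markup:

// The attributes come from System.ComponentModel.DataAnnotations
public enum MembershipLevel
{
    Standard,
    Silver,
    Gold
}

public class SignupViewModel
{
    [Required]
    public string Email { get; set; }

    // Rendered as a password textbox by default
    [DataType(DataType.Password)]
    public string Password { get; set; }

    // Rendered as a drop-down by default
    public MembershipLevel Level { get; set; }
}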

More info.

What are the big improvements in 1.0?

We’ve been releasing pretty often so that depends on what version you are currently using, but these are the most important things:

  • Extensive usage across a number of production websites – we are happy that this library is mature, stable and ready for prime-time
  • Twitter Bootstrap 3 Template out-of-the-box supported by a NuGet package to get you up and running faster – this is HUGE for a number of reasons:
    • Bootstrap is pretty darn popular right now so this is immediately useful to a lot of people
    • In creating this template we had to do some pretty sophisticated changes to allow the template to drive a lot of changes unobtrusively to the form structure you are adding in your views – this is great because it means it’s really easy for you to create your own form templates and accomplish similarly complex transformations of your form markup
    • The ASP.NET MVC templates that come with Visual Studio 2013 come with Bootstrap by default now – and boy do they have gross repetitive boilerplate in them, which you can clean right up using this library
    • The vision that we have for this library is coming to fruition, which is personally gratifying – this is a beautiful demonstration of being able to declaratively specify the structure of your form and then completely change the markup/template of your form across a whole application with a single line of code when it changes
  • Really comprehensive documentation of everything in the library – we’ve spent many hours writing up the documentation – the idea was to make it comprehensive, but accessible/terse; hopefully we’ve met that goal!
  • Really solid code coverage to help prevent regressions or breaking changes as well as some refactorings that give us a solid codebase to continue with the other features we want to add – hopefully this can support us into the future with minimal breaking changes

How can I get it?

Checkout the GitHub release or go to NuGet.


From this point on we are following semver thanks to the GitHubFlowVersion project. The fourth number in the NuGet version number is actually build metadata.

Borderlands 2 here I come!

Over a year ago now (yes it’s been a long journey – our first NuGet package was published on November 1, 2012) Matt Davies and I made an agreement to each other that neither of us would play the recently released Borderlands 2 game (we were both huge fans of Borderlands so this was a big deal) until we released 1.0 of ChameleonForms so that we would remain focussed on it and not get distracted. Now, while we both didn’t realise that it would take this long and while the last couple of months have seemed like forever (I’m pretty sure we had a phone conversation at least once a week where one of us would say “dude, we are sooooo close to 1.0 and BORDERLANDS 2!”) we are both incredibly proud of the library and are happy with what we’ve managed to get into 1.0.

Needless to say, we will probably be taking a break from open source for a few weekends to play Borderlands 2 🙂

We hope you enjoy using the library!

As usual hit us up with issues and pull requests on GitHub – they make our day 🙂

NQUnit update

I’ve just pushed a new version of NQUnit and NQUnit.NUnit (1.0.5). This update brings the libraries up to the latest versions of WatiN, NUnit, jQuery and QUnit. I’ve also made some documentation improvements. You can see the latest code and documentation on GitHub.

It’s worth noting that while NQUnit still works perfectly well, there is a better option that I usually recommend in preference: Chutzpah. I’ve updated NQUnit as per pull requests that the library received – while there are still people using the library I’m happy to keep it up to date.

Test Harness for NuGet install PowerShell scripts (init.ps1, install.ps1, uninstall.ps1)

One thing that I find frustrating when creating NuGet packages is the debug experience when it comes to creating the PowerShell install scripts (init.ps1, install.ps1, uninstall.ps1).

In order to make it easier to do the debugging I’ve created a test harness Visual Studio solution that allows you to make changes to a file, compile the solution, run a single command in the package manager and then have the package uninstall and then install again. That way you can change a line of code, hit a few keystrokes and then see the result straight away.

To see the code you can head to the GitHub repository. The basic instructions are on the readme:

  1. [Once off] Checkout the code
  2. [Once off] Create a NuGet source directory in the checkout directory
  3. Repeat in a loop:
    1. Write the code (the structure of the solution is the structure of your nuget package, so put the appropriately named .ps1 scripts in the tools folder)
    2. Compile the solution (this creates a NuGet package in the root of the solution with the name Package and version {yymm}.{ddHH}.{mmss}.nupkg – this means that the package version will increase with time so if you install from that directory it will always install the latest build) <F6> or <Ctrl+Shift+B>
    3. Switch to the Package Manager Console <Ctrl+P, Ctrl+M>
    4. [First time] Uninstall-Package Package; Install-Package Package <enter> / [Subsequent times] <Up arrow> <enter>
  4. When done simply copy the relevant files out and reset master to get a clean slate

Other handy hints

The Idempotency issue when retrying commands with Azure SQL Database (SQL Azure)

There is a lot of information available about dealing with transient errors that occur when using Azure SQL Database. If you are using it then it’s really important to take the transient errors into account since it’s one of the main differences that Azure SQL has when compared to SQL Server.

If you are using .NET then you are in luck because Microsoft have provided an open source library to detect and retry for these transient errors (the Transient Fault Handling Application Block). I have blogged previously about how the guidance that comes with the application block (along with most of the posts, tutorials and forum posts about it) indicates that you need to completely re-architect your system to wrap every single database statement with a retry.
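
To give a sense of what that looks like, here is a minimal sketch of wrapping a single command with the application block’s retry policy (the class names below come from the Transient Fault Handling Application Block; the exact namespaces and packages vary between versions of the block). Doing this for every single database statement is the re-architecture referred to above:

// Retry up to 5 times with an increasing delay between attempts
var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(
    new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2)));

var connectionString = "..."; // your Azure SQL Database connection string
var orderCount = retryPolicy.ExecuteAction(() =>
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT COUNT(*) FROM [Order]", connection))
    {
        connection.Open();
        return (int)command.ExecuteScalar();
    }
});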

I wasn’t satisfied with that advice and hence I created NHibernate.SqlAzure and more recently ReliableDbProvider (works with ADO.NET, EntityFramework, LinqToSql, etc.). These frameworks allow you to drop in a couple of lines of configuration at one place in your application and unobtrusively get transient fault handling in your application.

Easy right? A silver bullet even? Unfortunately, no.

The Idempotency issue

Today I was made aware of a post from a few months ago by a Senior Program Manager on the SQL Server team about the idempotency issue with Azure SQL Database. Unfortunately, I haven’t been able to find any more information about it – if you know anything please leave a comment.

The crux of the problem is that it is possible for a transient error to be experienced by the application when in fact the command that was sent to the server was successfully processed. Obviously, that won’t have any ill-effect for a SELECT statement, and if the SELECT is retried then there is no problem. When you have write operations (e.g. INSERTs, UPDATEs and DELETEs) then you can start running into trouble unless those commands are repeatable (i.e. idempotent).

This is particularly troubling (although in retrospect not terribly surprising) and the frustrations of one of the commenters from the post sums up the situation fairly well (and in particular calls out how impractical the suggested workaround given in the post is):

How exactly would this work with higher abstraction ORMs such as Entity Framework? The updates to a whole entity graph are saved as a whole, along with complex relationships between entities. Can entity updates be mapped to stored procedures such as this in EF? I completely appreciate this post from an academic perspective, but it seems like an insane amount of work (and extremely error-prone) to map every single update/delete operation to a stored procedure such as this.


After giving it some consideration and conferring with some of my colleagues, I can see a number of ways to deal with this (you could do something like what was suggested in the post linked to above, but frankly I don’t think it’s practical so I’m not including it). If you have any other ideas then please leave a comment below.

  1. Do nothing: transient faults (if you aren’t loading the database heavily) are pretty rare and within that the likelihood of coming across the idempotency issue is very low
    • In this case you would be making a decision that the potential for “corrupt” data is of a lower concern than application complexity / overhead / effort to re-architect
    • If you do go down this approach I’d consider if there is some way you can monitor / check the data to try and detect if any corruption has occurred
    • Unique keys are your friend (e.g. if you had a Member table with an identity primary key and some business logic that said emails must be unique per member then you can use a unique key on Member.Email to protect duplicate entries)
  2. Architect your system so that all work to the database is abstracted behind some sort of unit of work pattern and that the central code that executes your unit of work contains your retry logic
    • For instance if using NHibernate you could throw away the session on a transient error, get another one and retry the unit of work (see the first sketch just after this list)
    • While this ensures the integrity of your transactions it does have the potential side-effect of making everything a lot slower since any transient errors will cause the whole unit of work to retry (which could potentially be slow)
  3. Ensure all of your commands are idempotent
    • While on the surface this doesn’t sound much better than having to wrap all commands with transient retry logic it can be quite straightforward depending on the application because most update and delete commands are probably idempotent already
    • Instead of using database-generated identities for new records, use application-generated identities (for instance generate a GUID and assign it to the id before inserting an entity) and then your insert statements will also be idempotent (assuming the database has a primary key on the id column) – see the second sketch just after this list
    • NHibernate has automatic GUID generation capabilities for you and you can use the Comb GUID algorithm to avoid index fragmentation issues within the database storage
    • Alternatively, you can use strategies to generate unique integers like HiLo in NHibernate or SnowMaker
    • If you are doing delete or update statements then you simply need to ensure that they can be executed multiple times with the same result – e.g. if you are updating a column based on its current value (e.g. UPDATE [table] SET [column] = [column] + 1 WHERE Id = [Id]) then that could be a problem if it executed twice
  4. Retry for connections only, but not commands
  5. Retry for select statements only, but not others (e.g. INSERT, UPDATE, DELETE)
  6. Don’t use Azure SQL, but instead use SQL Server on an Azure Virtual Machine
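
As a sketch of option 2, with NHibernate the unit-of-work retry might look something like the following (the helper class and the transient check are illustrative only – the transient detection could, for example, come from the Transient Fault Handling Application Block or NHibernate.SqlAzure):

public static class UnitOfWorkRetrier
{
    public static T Execute<T>(ISessionFactory sessionFactory, Func<ISession, T> unitOfWork, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                // Every attempt gets a fresh session and transaction
                using (var session = sessionFactory.OpenSession())
                using (var transaction = session.BeginTransaction())
                {
                    var result = unitOfWork(session);
                    transaction.Commit();
                    return result;
                }
            }
            catch (Exception ex)
            {
                if (attempt >= maxAttempts || !IsTransient(ex))
                    throw;
                // The broken session has been disposed; loop around and retry the whole unit of work
            }
        }
    }

    private static bool IsTransient(Exception ex)
    {
        // Illustrative only: delegate to e.g. the application block's transient error detection strategy
        return new SqlDatabaseTransientErrorDetectionStrategy().IsTransient(ex);
    }
}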

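And a sketch of option 3’s application-generated ids (Member is a hypothetical entity): by assigning the id in the application, a retried INSERT re-sends the same primary key value, so a “phantom” transient error can’t silently create a duplicate row; updates then just need to be written as absolute assignments rather than relative ones:

public class Member
{
    public Guid Id { get; set; }
    public string Email { get; set; }
    public string Name { get; set; }
}

public static class MemberFactory
{
    public static Member Create(string email, string name)
    {
        return new Member
        {
            // Application-generated id (NHibernate can also generate Comb GUIDs to avoid index fragmentation);
            // the primary key on Id (and a unique key on Email) stops a retried INSERT creating a duplicate row
            Id = Guid.NewGuid(),
            Email = email,
            Name = name
        };
    }
}

// Idempotent update - sets an absolute value, so running it twice has the same effect:
//   UPDATE [Member] SET [Email] = @Email WHERE [Id] = @Id
// Non-idempotent update - depends on the current value, so a retry applies the change twice:
//   UPDATE [Member] SET [LoginCount] = [LoginCount] + 1 WHERE [Id] = @Id
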

With all that in mind, here are my recommendations:

  • Don’t shy away from protecting against transient errors – it’s still important and the transient errors are far more likely to happen than this problem
  • Use application-generated ids for table identifiers
  • Consider what approach you will take to deal with the idempotency issue (as per above list)