Whitepaper: Managing Database Schemas in a Continuous Delivery World

A whitepaper I wrote for my employer, Readify, just got published. Feel free to check it out. I’ve included the abstract below.

One of the trickier technical problems to address when moving to a continuous delivery development model is managing database schema changes. It’s much harder to roll back or roll forward database changes than software changes since, by definition, a database has state. Typically, organisations address this problem by having database administrators (DBAs) manually apply changes so they can manually correct any problems, but this creates a bottleneck for deploying changes to production and also introduces human error as a factor.

A large part of continuous delivery involves the setup of a largely automated deployment pipeline that increases confidence and reduces risk by ensuring that software changes are deployed consistently to each environment (e.g. dev, test, prod).

To fit in with that model it’s important that database changes are applied automatically and consistently to each environment, thus increasing the likelihood of problems being found early and reducing the risk associated with database changes.

This report outlines an approach to managing database schema changes that is compatible with a continuous delivery development model, the reasons why the approach is important and some of the considerations that need to be made when taking this approach.

The approaches discussed in this document aren’t specific to continuous delivery and in fact should be considered regardless of your development model.

Review of: Jimmy Bogard – Holistic Testing

This post discusses the talk “Holistic Testing” by Jimmy Bogard, which was given in June 2013. See my introduction post to get the context behind this post and the other posts I have written in this series.

I really resonate with the points raised by Jimmy since I’ve been using a lot of similar techniques recently. In this article I outline how I’ve been using the techniques talked about by Jimmy (including code snippets for context).

Overview

In this insightful presentation Jimmy outlines the testing strategy that he tends to use for the projects he works on. He covers the level he tests from, the proportion of the different types of tests he writes, and the intimate technical detail of how he implements the tests. Like Ian Cooper, Jimmy likes writing his unit tests from a relatively high level in his application; specifically, he said that he likes the definition of a unit test to be:

“Units of behaviour, isolated from other units of behaviour”

Code coverage and shipping code

“The ultimate goal here is to ship code, it’s not to write tests; tests are just a means to the end of shipping code.”

“I can have 100% code coverage and have no one use my product and I can have 0% code coverage and it’s a huge success; there is no correlation between the two things.”

Enough said.

Types of tests

Jimmy provides a breath of fresh air by throwing away the testing pyramid (and all the conflicting definitions of unit, integration, etc. tests) in favour of a pyramid that has a small number of “slow as hell” tests, a slightly larger number of “slow” tests and a lot of “fast” tests.

This takes away the need to classify whether a test is unit, integration or otherwise and focuses on the important part – how quickly you can get feedback from that test. This is something that I’ve often said – there is no point in distinguishing between unit and integration tests in your project until the moment that you need to separate out tests because your feedback cycle is too slow (which will take a while in a greenfield project).

It’s worth looking at the ideas expressed by Sebastien Lambla on Vertical Slice Testing (VEST), which provides another interesting perspective in this area by turning your traditionally slow “integration” tests into fast in-memory tests. Unfortunately, the idea seems to be fairly immature and there isn’t a lot of support for this type of approach.

Mocks

Similar to the ideas expressed by Ian Cooper, Jimmy tells us not to mock internal implementation details (e.g. collaborators passed into the constructor) and indicates that he rarely uses mocks. In fact, he admitted that he would rather make the process of using mocks more painful by hand rolling them, to discourage their use unless it’s necessary.

Jimmy says that he creates “seams” for the things he can’t control or doesn’t own (e.g. web services, databases, etc.) and then mocks those seams when writing his tests.

The cool thing about hand-rolled mocks that I’ve found is that you can codify realistic behaviour (e.g. interactions between calls and real-looking responses) and contain that behaviour in one place (helping to form documentation about how the thing being mocked works). These days I tend to use a combination of hand-rolled mocks for some things and NSubstitute for others. I’ll generally use hand-rolled mocks when I want to codify behaviour or if I want to provide a separate API surface area to interact with the mock, e.g.:

// Interface
public interface IDateTimeProvider
{
    DateTimeOffset Now();
}

// Production code
public class DateTimeProvider : IDateTimeProvider
{
    public DateTimeOffset Now()
    {
        return DateTimeOffset.UtcNow;
    }
}

// Hand-rolled mock
public class StaticDateTimeProvider : IDateTimeProvider
{
    private DateTimeOffset _now;

    public StaticDateTimeProvider()
    {
        _now = DateTimeOffset.UtcNow;
    }

    public StaticDateTimeProvider(DateTimeOffset now)
    {
        _now = now;
    }

    // This one is good for data-driven tests that take a string representation of the date
    public StaticDateTimeProvider(string now)
    {
        _now = DateTimeOffset.Parse(now);
    }

    public DateTimeOffset Now()
    {
        return _now;
    }

    public StaticDateTimeProvider SetNow(string now)
    {
        _now = DateTimeOffset.Parse(now);
        return this;
    }

    public StaticDateTimeProvider MoveTimeForward(TimeSpan amount)
    {
        _now = _now.Add(amount);
        return this;
    }
}

Container-driven unit tests

One of the most important points that Jimmy raises in his talk is that he uses his DI container to resolve dependencies in his “fast” tests. This makes a lot of sense because it allows you to:

  • Prevent implementation details leaking into your tests by resolving the component under test and all of its real dependencies, without needing to know what those dependencies are
  • Mimic what happens in production
  • Easily provide mocks for those things that do need mocks without needing to know what uses those mocks

Container initialisation can be (relatively) slow so, to ensure this cost is incurred only once, you can simply set up a global fixture or static instance of the initialised container.

The other consideration is how to isolate the container across test runs – if you modify a mock for instance then you don’t want that mock to be returned in the next test. Jimmy overcomes this by using child containers, which he has separately blogged about.

The other interesting thing that Jimmy does is to use an extension of AutoFixture’s AutoDataAttribute to resolve the parameters of his test methods from the container. It’s pretty nifty and explained in more detail by Sebastian Weber.
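
For reference, here’s a minimal sketch of how that kind of attribute might be wired up – this is my own illustrative take (assuming Autofac, xUnit and the AutoFixture of that era), not code from Jimmy’s or Sebastian’s posts, and it leans on the ContainerFixture class shown in the next snippet:

public class ContainerDrivenSpecimenBuilder : ISpecimenBuilder
{
    private readonly ILifetimeScope _scope;

    public ContainerDrivenSpecimenBuilder(ILifetimeScope scope)
    {
        _scope = scope;
    }

    public object Create(object request, ISpecimenContext context)
    {
        // Only handle requests for types the container can resolve; let AutoFixture handle the rest
        var type = request as Type;
        if (type == null || !_scope.IsRegistered(type))
            return new NoSpecimen();

        return _scope.Resolve(type);
    }
}

public class ContainerDataAttribute : AutoDataAttribute
{
    public ContainerDataAttribute()
        : base(new Fixture().Customize(new ContainerCustomization()))
    {
    }
}

public class ContainerCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customizations.Add(
            new ContainerDrivenSpecimenBuilder(ContainerFixture.GetTestLifetimeScope()));
    }
}

A test method can then be declared with [Theory, ContainerData] and take the component under test (and any mocks it needs to inspect) as parameters.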

I’ve recently used a variation of the following test fixture class (in my case using Autofac):

public static class ContainerFixture
{
    private static readonly IContainer Container;

    static ContainerFixture()
    {
        Container = ContainerConfig.CreateContainer(); // This is what my production App_Start calls
        AppDomain.CurrentDomain.DomainUnload += (sender, args) => Container.Dispose();
    }

    public static ILifetimeScope GetTestLifetimeScope(Action<ContainerBuilder> modifier = null)
    {
        return Container.BeginLifetimeScope(MatchingScopeLifetimeTags.RequestLifetimeScopeTag, cb => {
            ExternalMocks(cb);
            if (modifier != null)
                modifier(cb);
        });
    }

    private static void ExternalMocks(ContainerBuilder cb)
    {
        cb.Register(_ => new StaticDateTimeProvider(DateTimeOffset.UtcNow.AddMinutes(1)))
            .AsImplementedInterfaces()
            .AsSelf()
            .InstancePerTestRun();
        // Other overrides of externals to the application ...
    }
}

public static class RegistrationExtensions
{
    // This extension method makes the registrations in the ExternalMocks method clearer in intent - I create a HTTP request lifetime around each test since I'm using my container in a web app
    public static IRegistrationBuilder<TLimit, TActivatorData, TStyle> InstancePerTestRun
        <TLimit, TActivatorData, TStyle>(this IRegistrationBuilder<TLimit, TActivatorData, TStyle> registration,
            params object[] lifetimeScopeTags)
    {
        return registration.InstancePerRequest(lifetimeScopeTags);
    }
}

Isolating the database

Most applications that I come across will have a database of some sort. Including a database connection usually means out of process communication and this likely turns your test from “fast” to “slow” in Jimmy’s terminology. It also makes it harder to write a good test since databases are stateful and thus we need to isolate tests against each other. It’s often difficult to run tests in parallel against the same database as well.

There are a number of ways of dealing with this, which Jimmy outlined in his talk and also on his blog:

  1. Use a transaction and roll it back at the end of the test. The tricky thing here is simulating multiple requests – you need to make sure that your seeding, work and verification all happen separately, otherwise your ORM’s caching might give you a false positive. I find this to be quite an effective strategy and it’s what I’ve used for years now in various forms.
    • One option is to use TransactionScope to transparently initiate a transaction and roll it back; this allows multiple database connections to enlist in the same transaction, so you can have real, committed transactions that will still get rolled back at the end of the test (a minimal sketch of this follows the list). The main downsides are that you need MSDTC enabled on all dev machines and your CI server agents and you can’t run tests in parallel against the same database.
    • Another option is to initiate a single connection with a transaction and then to reuse that connection across your ORM contexts – this allows you to avoid MSDTC and run tests in parallel, but it also means you can’t use explicit transactions in your code (or to make them noops for your test code) and it’s not possible with all ORMs. I can’t claim credit for this idea – I was introduced to it by Jess Panni and Matt Davies.
    • If your ORM doesn’t support attaching multiple contexts to a single connection with an open transaction (hi NHibernate!) then another option would be to clear the cache after seeding and after work. This has the same advantages and disadvantages as the previous point.
  2. Drop/recreate the database each test run.
    • The most practical way to do this is to use some sort of in-memory variation e.g. sqlite in-memory, Raven in-memory, Effort for Entity Framework and the upcoming Entity Framework 7 in-memory provider
      • This has the advantage of working in-process and thus you might be able to make the test a “fast” test
      • This allows you to run tests in parallel and isolated from each other by wiping the database every test run
      • The downside is the database is very different from your production database and in fact might not have some features your code needs
      • Also, it might be difficult to migrate the database to the correct schema (e.g. sqlite has very limited ALTER TABLE support) so you are stuck with getting your ORM to automatically generate the schema for you rather than testing your migrations
      • Additionally, it can actually be quite slow to regenerate the schema every test run as the complexity of your schema grows
  3. Delete all of the non-seed data in the database every test run – this can be quite tricky to get right without violating foreign keys, but Jimmy has some clever SQL scripts for it (in the above-linked article) and finds that it’s quite a fast option.
  4. Ensure that the data being entered by each test will necessarily be isolated from other test runs (e.g. random GUID ids etc.) – the danger here is that it can get quite complex to keep on top of this and it’s likely your tests will be fragile – I generally wouldn’t recommend this option.
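
As mentioned in the first sub-point above, here’s a minimal sketch of the TransactionScope variant (assuming System.Transactions and .NET 4.5.1+ for TransactionScopeAsyncFlowOption; remember it needs MSDTC once multiple connections enlist):

public class TransactionScopeFixture : IDisposable
{
    private readonly TransactionScope _scope;

    public TransactionScopeFixture()
    {
        // Any connection opened while this ambient scope is active enlists in the same
        // transaction, even across multiple connections / ORM contexts
        _scope = new TransactionScope(
            TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
            TransactionScopeAsyncFlowOption.Enabled);
    }

    public void Dispose()
    {
        // Never calling Complete() means everything is rolled back at the end of the test
        _scope.Dispose();
    }
}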

I generally find that database integration tests are reasonably fast (after the initial spin-up time for EntityFramework or NHibernate). For instance, in a recent project the first database test would take 26s and subsequent tests took ~15ms for a test with an empty database query, ~30-50ms for a fairly basic query test with populated data and 100-200ms for a more complex test with a lot more database interaction.

In some cases I will write all of my behavioural tests touching the database because testing against a production-like database, with the real SQL being issued against the real migrations, is incredibly valuable in terms of confidence. If you are using a DI container in your tests I’m sure it would be possible to run the test suite in two different modes – one with an in-memory variant and parallelisation to get fast feedback and one with full database integration for full confidence. If you had a project that was big enough that the feedback time was getting too large then investigating this type of approach would be worth it – I personally haven’t found a need yet.

I’ve recently been using variations on this fixture class to set up the database integration for my tests using Entity Framework:

public class DatabaseFixture : IDisposable
{
    private readonly MyAppContext _parentContext;
    private readonly DbTransaction _transaction;

    static DatabaseFixture()
    {
        var testPath = Path.GetDirectoryName(typeof (DatabaseFixture).Assembly.CodeBase.Replace("file:///", ""));
        AppDomain.CurrentDomain.SetData("DataDirectory", testPath); // For localdb connection string that uses |DataDirectory|
        using (var migrationsContext = new MyAppContext())
        {
            migrationsContext.Database.Initialize(false); // Performs EF migrations
        }
    }

    public DatabaseFixture()
    {
        _parentContext = new MyAppContext();
        _parentContext.Database.Connection.Open(); // This could be a simple SqlConnection if using sql express, but if using localdb you need a context so that EF creates the database if it doesn't exist (thanks EF!)
        _transaction = _parentContext.Database.Connection.BeginTransaction();

        SeedDbContext = GetNewDbContext();
        WorkDbContext = GetNewDbContext();
        VerifyDbContext = GetNewDbContext();
    }

    public MyAppContext SeedDbContext { get; private set; }
    public MyAppContext WorkDbContext { get; private set; }
    public MyAppContext VerifyDbContext { get; private set; }

    private MyAppContext GetNewDbContext()
    {
        var context = new MyAppContext(_parentContext.Database.Connection);
        context.Database.UseTransaction(_transaction);
        return context;
    }

    public void Dispose()
    {
        SeedDbContext.Dispose();
        WorkDbContext.Dispose();
        VerifyDbContext.Dispose();
        _transaction.Dispose(); // Discard any inserts/updates since we didn't commit
        _parentContext.Dispose();
    }
}

Subcutaneous testing

It’s pretty well known/documented that UI tests are slow and, unless you get them right, brittle. Most people recommend that you only test happy paths. Jimmy classifies UI tests in his “slow as hell” category and also recommends only testing (important) happy paths. I like to recommend that UI tests are used for high-value scenarios (such as a user performing the primary action that makes you money), functionality that continually breaks and can’t be adequately covered with other tests, or complex UIs.

Subcutaneous tests allow you to get a lot of the value of UI tests, in that you are testing the full stack of your application with its real dependencies (apart from those external to the application like web services), but without the brittleness of talking to a fragile and slow UI layer. These kinds of tests are what Jimmy classifies as “slow”, and they will include integration with the database as outlined in the previous sub-section.

In his presentation, Jimmy suggests that he writes subcutaneous tests against the command/query layer (if you are using CQS). I’ve recently used subcutaneous tests from the MVC controller using a base class like this:

public abstract class SubcutaneousMvcTest<TController> : IDisposable
    where TController : Controller
{
    private DatabaseFixture _databaseFixture;
    private readonly HttpSimulator _httpRequest;
    private readonly ILifetimeScope _lifetimeScope;

    protected TController Controller { get; private set; }
    protected ControllerResultTest<TController> ActionResult { get; set; }
    protected MyAppContext SeedDbContext { get { return _databaseFixture.SeedDbContext; } }
    protected MyAppContext VerifyDbContext { get { return _databaseFixture.VerifyDbContext; } }

    protected SubcutaneousMvcTest()
    {
        _databaseFixture = new DatabaseFixture();
        _lifetimeScope = ContainerFixture.GetTestLifetimeScope(cb =>
            cb.Register(_ => _databaseFixture.WorkDbContext).AsSelf().AsImplementedInterfaces().InstancePerTestRun());
        var routes = new RouteCollection();
        RouteConfig.RegisterRoutes(routes); // This is what App_Start calls in production
        _httpRequest = new HttpSimulator().SimulateRequest(); // Simulates HttpContext.Current so I don't have to mock it
        Controller = _lifetimeScope.Resolve<TController>(); // Resolve the controller with real dependencies via ContainerFixture
        Controller.ControllerContext = new ControllerContext(new HttpContextWrapper(HttpContext.Current), new RouteData(), Controller);
        Controller.Url = new UrlHelper(Controller.Request.RequestContext, routes);
    }

    // These methods make use of my TestStack.FluentMVCTesting library so I can make nice assertions against the action result, which fits in with the BDD style
    protected void ExecuteControllerAction(Expression<Func<TController, Task<ActionResult>>> action)
    {
        ActionResult = Controller.WithCallTo(action);
    }

    protected void ExecuteControllerAction(Expression<Func<TController, ActionResult>> action)
    {
        ActionResult = Controller.WithCallTo(action);
    }

    [Fact]
    public virtual void ExecuteScenario()
    {
        this.BDDfy(); // I'm using Bddfy
    }

    protected TDependency Resolve<TDependency>()
    {
        return _lifetimeScope.Resolve<TDependency>();
    }

    public void Dispose()
    {
        _databaseFixture.Dispose();
        _httpRequest.Dispose();
        _lifetimeScope.Dispose();
    }
}

Here is an example test:

public class SuccessfulTeamResetPasswordScenario : SubcutaneousMvcTest<TeamResetPasswordController>
{
    private ResetPasswordViewModel _viewModel;
    private const string ExistingPassword = "correct_password";
    private const string NewPassword = "new_password";

    public async Task GivenATeamHasRegisteredAndIsLoggedIn()
    {
        var registeredTeam = await SeedDbContext.SaveAsync(
            ObjectMother.Teams.Default.WithPassword(ExistingPassword));
        LoginTeam(registeredTeam);
    }

    public void AndGivenTeamSubmitsPasswordResetDetailsWithCorrectExistingPassword()
    {
        _viewModel = new ResetPasswordViewModel
        {
            ExistingPassword = ExistingPassword,
            NewPassword = NewPassword
        };
    }

    public void WhenTeamConfirmsThePasswordReset()
    {
        ExecuteControllerAction(c => c.Index(_viewModel));
    }

    public async Task ThenResetThePassword()
    {
        var team = await VerifyDbContext.Teams.SingleAsync();
        team.Password.Matches(NewPassword).ShouldBe(true); // Matches method is from BCrypt
        team.Password.Matches(ExistingPassword).ShouldNotBe(true);
    }

    public void AndTakeUserToASuccessPage()
    {
        ActionResult.ShouldRedirectTo(c => c.Success);
    }
}

Note:

  • The Object Mother with builder syntax is as per my existing article on the matter
  • I defined LoginTeam in the SubcutaneousMvcTest base class and it sets the Controller.User object to a ClaimsPrincipal for the given team (what ASP.NET MVC does for me when a team is actually logged in) – a sketch of it follows these notes
  • The SaveAsync method on SeedDbContext is an extension method in my test project that takes a builder object, calls .Build and persists the object (and returns it for terseness):
    public static class MyAppContextExtensions
    {
        public static async Task<Team> SaveAsync(this MyAppContext context, TeamBuilder builder)
        {
            var team = builder.Build();
            context.Teams.Add(team);
            await context.SaveChangesAsync();
            return team;
        }
    }
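
For completeness, here’s a hedged sketch of what the LoginTeam helper mentioned in the notes above might look like (the claim types and Team properties are illustrative – use whatever your authentication setup expects):

protected void LoginTeam(Team team)
{
    var identity = new ClaimsIdentity(new[]
    {
        new Claim(ClaimTypes.Name, team.Name),
        new Claim(ClaimTypes.NameIdentifier, team.Id.ToString())
    }, "ApplicationCookie");

    // Controller.User is ultimately sourced from the HttpContext that HttpSimulator set up
    HttpContext.Current.User = new ClaimsPrincipal(identity);
}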
    

When to use subcutaneous tests

In my experience over the last few projects (line of business applications) I’ve found that I can write subcutaneous tests against MVC controllers and that replaces the need for most other tests (as discussed in the previous post). A distinction over the previous post is that I’m writing these tests from the MVC Controller rather than the port (the command object). By doing this I’m able to provide that extra bit of confidence that the binding from the view model through to the command layer is correct without writing extra tests. I was able to do this because I was confident that there was definitely only going to be a single UI/client and the application wasn’t likely to grow a lot in complexity. If I was sure the command layer would get reused across multiple clients then I would test from that layer and only test the controller with a mock of the port if I felt it was needed.

One thing that should be noted with this approach is that, given I’m using real database connections, the tests aren’t lightning fast, but for the applications I’ve worked on I’ve been happy with the speed of feedback, low cost and high confidence this approach has given me. This differs slightly from the premise in Jimmy’s talk, where he favours more of the “fast” tests. As I talked about above though, if speed becomes a problem you can simply adjust your approach.

I should note that when testing a web API I have found that writing full-stack tests against an in-memory HTTP server (passed into the constructor of HttpClient) is similarly effective, and it tests from something that the user/client cares about (the issuing of an HTTP request).
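
As a rough sketch of what I mean (assuming ASP.NET Web API 2 with xUnit and Shouldly; WebApiConfig.Register and the route are placeholders for your own application’s configuration):

public class GetTeamApiTests
{
    [Fact]
    public async Task ReturnsOkForAnExistingTeam()
    {
        var config = new HttpConfiguration();
        WebApiConfig.Register(config); // The same registration your production App_Start calls

        using (var server = new HttpServer(config))
        using (var client = new HttpClient(server) { BaseAddress = new Uri("http://localhost/") })
        {
            // The request goes through routing, filters and model binding without hitting the network
            var response = await client.GetAsync("api/teams/1");

            response.StatusCode.ShouldBe(HttpStatusCode.OK);
        }
    }
}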

Review of: Ian Cooper – TDD, where did it all go wrong

This post discusses the talk “TDD, where did it all go wrong” by Ian Cooper, which was given in June 2013. See my introduction post to get the context behind this post and the other posts I have written in this series.

It’s taken me quite a few views of this video to really get my head around it. It’s fair to say that this video, in combination with discussions with various colleagues such as Graeme Foster, Jess Panni, Bobby Lat and Matt Davies, has changed the way I perform automated testing.

I’ll freely admit that I used to write a unit test across each branch of most methods in the applications I wrote and rely heavily on mocking to segregate out the external dependencies of the test (all other classes). Most of the applications I worked on were reasonably small and didn’t span multiple years, so I didn’t realise the full potential pain of this approach. In saying that, I still found myself spending a lot of time writing tests, and at times it felt tedious and time-consuming in a way that didn’t feel productive. Furthermore, refactoring would sometimes result in tests breaking that really shouldn’t have. As I said – the applications I worked on were relatively small so the pain was also small enough that I put up with it, assuming that was the way it needed to be.

Overview

The tl;dr of Ian’s talk is that TDD has been interpreted by a lot of people to mean that you should write unit tests for every method and class that you introduce in an application, but this necessarily results in you baking implementation details into your tests, causing them to be fragile when refactoring, to contain a lot of mocking, and to make up a high proportion of test code to implementation code, ultimately slowing you down from delivering and making changes to the codebase.

Testing behaviours rather than implementations

Ian suggests that the trigger for adding a new test to the system should be adding a new behaviour rather than adding a method or class. By doing this your tests can focus on expressing and verifying behaviours that users care about rather than implementation details that developers care about.

In my eyes this naturally fits in to BDD and ATDD by allowing you to write the bulk of your tests in that style. I feel this necessarily aligns your tests and thus implementation to things that your product owner and users care about. If you buy into the notion of tests forming an important part of a system’s documentation like I do then having tests that are behaviour focussed rather than implementation focussed is even more of an advantage since they are the tests that make sense in terms of documenting a system.

TDD and refactoring

Ian suggests that the original TDD Flow outlined by Kent Beck has been lost in translation by most people. This is summed up nicely by Steve Fenton in his summary of Ian’s talk (highlight mine):

Red. Green. Refactor. We have all heard this. I certainly had. But I didn’t really get it. I thought it meant… “write a test, make sure it fails. Write some code to pass the test. Tidy up a bit”. Not a million miles away from the truth, but certainly not the complete picture. Let’s run it again.

Red. You write a test that represents the behaviour that is needed from the system. You make it compile, but ensure the test fails. You now have a requirement for the program.

Green. You write minimal code to make the test green. This is sometimes interpreted as “return a hard-coded value” – but this is simplistic. What it really means is write code with no design, no patterns, no structure. We do it the naughty way. We just chuck lines into a method; lines that shouldn’t be in the method or maybe even in the class. Yes – we should avoid adding more implementation than the test forces, but the real trick is to do it sinfully.

Refactor. This is the only time you should add design. This is when you might extract a method, add elements of a design pattern, create additional classes or whatever needs to be done to pay penance to the sinful way you achieved green.

When you do this right, you end up with several classes that are all tested by a single test-class. This is how things should be. The tests document the requirements of the system with minimal knowledge of the implementation. The implementation could be One Massive Function or it could be a bunch of classes.

Ian points out that you cannot refactor if you have implementation details in your tests because by definition, refactoring is where you change implementation details and not the public interface or the tests.

Ports and adapters

Ian suggests that one way to test behaviours rather than implementation details is to use a ports and adapters architecture and test via the ports.

There is another video where he provides some more concrete examples of what he means. He suggests using a command dispatcher or command processor pattern as the port.

That way your adapter (e.g. MVC or API controller) can create a command object and ask for it to be executed and all of the domain logic can be wrapped up and taken care of from there. This leaves the adapter very simple and declarative and it could easily be unit tested. Ian recommends not bothering to unit test the adapter because it should be really simple and I wholeheartedly agree with this. If you use this type of pattern then your controller action will be a few lines of code.

Here is an example from a recent project I worked on that illustrates the sort of pattern:

public class TeamEditContactDetailsController : AuthenticatedTeamController
{
    private readonly IQueryExecutor _queryExecutor;
    private readonly ICommandExecutor _commandExecutor;

    public TeamEditContactDetailsController(IQueryExecutor queryExecutor, ICommandExecutor commandExecutor)
    {
        _queryExecutor = queryExecutor;
        _commandExecutor = commandExecutor;
    }

    public async Task<ActionResult> Index()
    {
        var team = await _queryExecutor.QueryAsync(new GetTeam(TeamId));
        return View(new EditContactDetailsViewModel(team));
    }

    [HttpPost]
    public async Task<ActionResult> Index(EditContactDetailsViewModel vm)
    {
        if (!await ModelValidAndSuccess(() => _commandExecutor.ExecuteAsync(vm.ToCommand(TeamId))))
            return View(vm);

        return RedirectToAction("Success");
    }
}

This is a pretty common pattern that I end up using in a lot of my applications. ModelValidAndSuccess is a method that checks the ModelState is valid, executes the command, and if there are exceptions from the domain due to invariants being violated it will propagate them into ModelState and return false. vm.ToCommand() is a method that news up the command object (in this case EditTeamContactDetails) from the various properties bound onto the view model. Side note: some people seem to take issue with the ToCommand method; personally I’m comfortable with the purpose of the view model being to bind data and translate that data to a command object – either way, it’s by no means essential to the overall pattern.
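
For reference, here’s a hedged sketch of what ModelValidAndSuccess might look like (DomainValidationException is an illustrative name for whatever exception type your domain throws when an invariant is violated):

protected async Task<bool> ModelValidAndSuccess(Func<Task> executeCommand)
{
    if (!ModelState.IsValid)
        return false;

    try
    {
        await executeCommand();
        return true;
    }
    catch (DomainValidationException ex)
    {
        // Propagate domain invariant violations into ModelState so the view can render them
        ModelState.AddModelError("", ex.Message);
        return false;
    }
}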

Both the query (GetTeam) and the command (EditTeamContactDetails) can be considered ports into the domain and can be tested independently from this controller using BDD tests. At that point there is probably little value in testing this controller because it’s very declarative. Jimmy Bogard sums this up nicely in one of his posts.

Ian does say that if you feel the need to test the connection between the port and adapter then you can write some integration tests, but should be careful not to test things that are outside of your control and have already been tested (e.g. you don’t need to test ASP.NET MVC or NHibernate).

Mocking

One side effect of having unit tests for every method/class is that you are then trying to mock out every collaborator of every object and that necessarily means that you are trying to mock implementation details – the fact I used a TeamRepository or a TeamService shouldn’t matter if you are testing the ability for a team to be retrieved and viewed. I should be able to change what classes I use to get the Team without breaking tests.

Using mocks of implementation details significantly increases the fragility of tests reducing their effectiveness. Ian says in his talk that you shouldn’t mock internals, privates or adapters.

Mocks still have their place – if you want to test a port and would like to isolate it from another port (e.g. an API call to an external system) then it makes sense to mock that out. This was covered further in the previous article in the “Contract and collaboration tests” section.

Problems with higher level unit tests

I’m not advocating that this style of testing is a silver bullet – far from it. Like everything in software development it’s about trade-offs and I’m sure that there are scenarios that it won’t be suitable for. Ian covered some of the problems in his talk, I’ve already talked about the combinatorial problem and Martyn Frank covers some more in his post about Ian’s talk. I’ve listed out all of the problems I know of below.

Complex implementation

One of the questions that was raised and answered in Ian’s presentation was about what to do when the code you are implementing to make a high-level unit test pass is really complex and you find yourself needing more guidance. In that instance you can do what Ian calls “shifting down a gear” and guide your implementation by writing lower-level, implementation-focussed unit tests. Once you have finished your implementation you can then decide whether to:

  • Throw away the tests because they aren’t needed anymore – the code is covered by your higher-level behaviour test
  • Keep the tests because you think they will be useful for the developers that have to support the application to understand the code in question
  • Keep the tests because the fact you needed them in the first place tells you that they will be useful when making changes to that code in the future

The first point is important and not something that is often considered – throwing away the tests. If you decide to keep these tests the trade-off is you have some tests tied to your implementation that will be more brittle than your other tests. The main thing to keep in mind is that you don’t have to have all of your tests at that level; just the ones that it makes sense for.

In a lot of ways this hybrid approach also helps with the combinatorial explosion problem; if the code you are testing is incredibly complex and it’s too hard to feasibly provide enough coverage with a higher level test then dive down and do those low level unit tests. I’ve found this hybrid pattern very useful for recent projects and I’ve found that only 5-10% of the code is complex enough to warrant the lower level tests.

Combinatorial explosion

I’ve covered this comprehensively in the previous article. This can be a serious problem, but as per the previous section in those instances just write the lower-level tests.

Complex tests

The other point that Ian raised is that, because you are interacting with more objects, there might be more you need to set up in your tests, which then makes the arrange section of the tests harder to understand and maintain, reducing the advantage of writing the tests in the first place. Ian indicates that, because you are rarely setting up mocks for complex interactions, he usually sees simpler arrange sections, but he mentions that the test data builder and object mother patterns can be helpful to reduce complexity too. I have covered these patterns in the past and can confirm that they have helped me significantly in reducing complexity and improving the maintainability of the arrange section of my tests.
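
To tie this back to the examples earlier in the post, here’s a minimal sketch of the object mother / test data builder combination (the Team constructor and password hashing call are illustrative):

public class TeamBuilder
{
    private string _name = "Default team";
    private string _password = "default_password";

    public TeamBuilder WithName(string name)
    {
        _name = name;
        return this;
    }

    public TeamBuilder WithPassword(string password)
    {
        _password = password;
        return this;
    }

    public Team Build()
    {
        // Illustrative only - construct the domain object however your domain requires
        return new Team(_name, BCrypt.Net.BCrypt.HashPassword(_password));
    }
}

public static class ObjectMother
{
    public static class Teams
    {
        public static TeamBuilder Default
        {
            get { return new TeamBuilder(); }
        }
    }
}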

I also make use of the excellent Bddfy and Shouldly libraries and they both make a big positive difference to the terseness and understandability of tests.

Another technique that I find to be incredibly useful is Approval Tests. If you are generating a complex artifact such as a CSV, some JSON or HTML or a complex object graph then it’s really quick and handy to approve the payload rather than have to create tedious assertions about every aspect.
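
For example, something like this (assuming ApprovalTests.Net and xUnit; TeamCsvExporter is a stand-in for whatever produces your artifact):

[UseReporter(typeof(DiffReporter))]
public class TeamCsvExportTests
{
    [Fact]
    public void ProducesTheExpectedCsv()
    {
        var csv = new TeamCsvExporter().Export(ObjectMother.Teams.Default.Build());

        // The first run produces a *.received file; approving it creates the *.approved baseline
        // that subsequent runs are compared against
        Approvals.Verify(csv);
    }
}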

In my experience, with a bit of work and by diligently refactoring your test code (this is a key point!!!) as you would your production code, you can get very tidy, terse tests. You will typically have a smaller number of tests (one per user scenario rather than one for every class) and they will be organised and named around a particular user scenario (e.g. Features.TeamRegistration.SuccessfulTeamRegistrationScenario) so it should be easy to find the right test to inspect and modify when maintaining code.

Multiple test failures

It’s definitely possible that you can cause multiple tests to fail by changing one thing. I’ll be honest, I don’t really see this as a huge disadvantage and I haven’t experienced this too much in the wild. When it happens it’s generally pretty obvious what the cause is.

Shared code gets tested twice

Yep, but that’s fine because that shared code is an implementation detail – the behaviours that currently use the shared code may diverge in the future and that code may no longer be shared. The fact there is shared code is probably good since it’s a probable sign that you have been diligently refactoring your codebase and removing duplication.

Review of: J.B. Rainsberger – Integrated Tests Are A Scam

This post discusses the talk “Integrated Tests Are A Scam” by J.B. Rainsberger, which was given in November 2013. See my introduction post to get the context behind this post and the other posts I have written in this series.

Overview

There are a couple of good points in this talk and also quite a few points I disagree with.

The tl;dr of this presentation is that J.B. Rainsberger often sees people deal with the problem of having “100% of tests pass, but there are still bugs” by adding integrated tests (note: this isn’t misspelt, he classifies integrated tests differently from integration tests). He likens this to using “aspirin that gives you a bigger headache” and says you should instead write isolated tests that test a single object at a time, with matching collaboration and contract tests on each side of the interactions that object has with its peers. He then uses mathematical induction to prove that this will completely test each layer of the application with fast, isolated tests across all logic branches with O(n) tests rather than O(n!).

He says that integrated tests are a scam because they result in you:

  • designing code more sloppily
  • making more mistakes
  • writing fewer tests because it’s harder to write the tests due to the emerging design issues
  • thus being more likely to have a situation where “100% of tests pass, but there are still bugs”

Integrated test definition

He defines an integrated test as a test that touches “a cluster of objects” and a unit test as a test that just touches a single object and is “isolated”. By this definition, and via the premise of his talk, every time you write a unit test that touches multiple objects you aren’t writing a unit test; you are writing a “self-replicating virus that invades your projects, that threatens to destroy your codebase, that threatens your sanity and your life”. I suspect that some of that sentiment is sensationalised for the purposes of making a good talk (he has even admitted to happily writing tests at a higher level), but the talk is a very popular one and it presents a very one-sided view, so I feel it’s important to point that out.

I disagree with that definition of a unit test and I think that strict approach will lead to not only writing more tests than is needed, but tying a lot of your tests to implementation details that make the tests much more fragile and less useful.

Side note: I’ll be using J.B. Rainsberger’s definitions of integrated and unit tests for this post to provide less confusing discussion in the context of his presentation.

Integrated tests and design feedback

The hypothesis that you get less feedback about the design of your software from integrated tests and thus they will result in your application becoming a mess is pretty sketchy in my opinion. Call me naive, but if you are in a team that lets the code get like that with integrated tests then I think that same team will have the same result with more fine-grained tests. If you aren’t serious about refactoring your code (regardless of whether or not you use TDD and have an explicit refactor step in your process) then yeah, it’s going to get bad. In my experience, you still get design feedback when writing your test at a higher level of your application stack with real dependencies underneath (apart from mocking dependencies external to your application) and implementing the code to make it pass.

There is a tieback here to the role of TDD in software design and I think that TDD still helps with design when writing tests that encapsulate more than a single object; it influences the design of your public interface from the level you are testing against and everything underneath is just an implementation detail of that public interface (more on this in other posts in the series).

It’s worth noting that I’m coming from the perspective of a statically typed language where I can safely create implementation details without needing fine-grained tests to cover every little detail. I can imagine that in situations where you are writing in a dynamic language you might feel the need to make up for the lack of a compiler by writing more fine-grained tests.

This is one of the reasons why I have a preference for statically typed languages – the compiler obviates the need for writing mundane tests to check things like property names are correct (although some people still like to write these kinds of tests).

If you are using a dynamic language and your application structure isn’t overly complex (i.e. it’s not structured with layer upon layer upon layer) then you can probably still test from a higher level without too much pain. For instance, I’ve written Angular JS applications where I’ve tested services from the controller level (with real dependencies) successfully.

It’s also relevant to consider the points that @dhh makes in his post about test-induced design damage and the subsequent TDD is dead video series.

Integrated tests and identifying problems

J.B. Rainsberger says a big problem with integrated tests is that when they fail you have no idea where the problem is. I agree that by glancing at the name of the test that’s broken you might not immediately know which line of code is at fault.

If you structure your tests and code well then, usually when there is a test failure in a higher-level test, you can look at the exception message to get a pretty good idea of the cause, unless it’s a generic exception like a NullReferenceException. In those scenarios you can spend a little bit more time and look at the stack trace to nail down the offending line of code. This is slower, but I personally think that the trade-off you get (as discussed throughout this series) is worth this small increase.

Motivation to write integrated tests

J.B. Rainsberger puts forward that the motivation people usually have to write integrated tests is to find bugs that unit tests can’t uncover by testing the interaction between objects. While it is a nice benefit to test the code closer to how it’s executed in production, with real collaborators, it’s not the main reason I write tests that cover multiple objects. I write this style of tests because they allow us to divorce the tests from knowing about implementation details and write them at a much more useful level of abstraction, closer to what the end user cares about. This gives full flexibility to refactor code without breaking tests since the tests can describe user scenarios rather than developer-focussed implementation concerns. It’s appropriate to name drop BDD and ATDD here.

Integrated tests necessitate a combinatorial explosion in the number of tests

Lack of design feedback aside, the main criticism that J.B. Rainsberger has against integrated tests is that to test all pathways through a codebase you need to have a combinatorial explosion of tests (O(n!)). While this is technically true I question the practicality of this argument for a well designed system:

  • He seems to suggest that you are going to build software that contains a lot of layers and within each layer you will have a lot of branches. While I’m sure there are examples out there like that, most of the applications I’ve been involved with can be architected to be relatively flat.
  • It’s possible that I just haven’t written code for the right industries and thus haven’t come across the kinds of scenarios he is envisaging, but at best it demonstrates that his sweeping statements don’t always apply and you should take a pragmatic approach based on your codebase.
  • Consider the scenario where you add a test for new functionality against the behaviour of the system from the user’s perspective (e.g. a BDD-style test for each acceptance criterion in your user story). In that scenario, being naive for a moment, all the code that you add could be tested by these higher-level tests.
  • Naivety aside, you will add code that doesn’t directly relate to the acceptance criteria – this might be infrastructure code, defensive programming, logging, etc. – and in those cases I think you just need to evaluate how important it is to test that code:
    • Sometimes the code is very declarative and obviously wrong or right – in those instances, where there is unlikely to be complex interactions with other parts of the code (null checks being a great example) then I generally don’t think it needs to be tested
    • Sometimes it’s common code that will necessarily be tested by any integrated (or UI) tests you do have anyway
    • Sometimes it’s code that is complex or important enough to warrant specific, “implementation focussed”, unit tests – add them!
    • If such code didn’t warrant a test and later turns out to introduce a bug then that gives you feedback that it wasn’t that obvious after all, and at that point you can introduce a breaking test so it never happens again (before fixing it – that’s the first step you take when you have a bug, right?)
    • If the above point makes you feel uncomfortable then you should go and look up continuous delivery and work on ensuring your team works towards the capability to deliver code to production at any time so rolling forward with fixes is fast and efficient
    • It’s important to keep in mind that the point of testing generally isn’t to get 100% coverage, it’s to give you confidence that your application is going to work – I talk about this more later in the series – I can think of industries where this is probably different (e.g. healthcare, aerospace) so as usual be pragmatic!
  • There will always be exceptions to the rule – if you find a part of the codebase that is more complex and does require a lower-level test to feasibly cover all of the combinations then do it – that doesn’t mean you should write those tests for the whole system though.

Contract and collaboration tests

The part of J.B. Rainsberger’s presentation that I did like was his solution to the “problem”. While it’s fairly basic, common-sense advice that a lot of people probably follow, I still think it’s good advice.

He describes that, where you have two objects collaborating with each other, you might consider one object to be the “server” and the other the “client” and the server can be tested completely independently of the client since it will simply expose a public API that can be called. In that scenario, he suggests that the following set of tests should be written:

  • The client should have a “collaboration” test to check that it asks the right questions of the next layer by using an expectation on a mock of the server
  • The client should also have a set of collaboration tests to check it responds correctly to the possible responses that the server can return (e.g. 0 results, 1 result, a few results, lots of results, throws an exception)
  • The server should have a “contract” test to check that it tries to answer the question from the client that matches the expectation in the client’s collaboration test
  • The server should also have a set of contract tests to check that it can reply correctly with the same set of responses tested in the client’s collaboration tests
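
A hedged sketch of what such a matching pair might look like (assuming NSubstitute, xUnit and Shouldly; the types and the TestDatabase helper are illustrative, not from the talk):

public class TeamSearcherCollaborationTests
{
    [Fact]
    public async Task AsksTheRepositoryForTeamsMatchingTheQuery()
    {
        var repository = Substitute.For<ITeamRepository>();
        var searcher = new TeamSearcher(repository);

        await searcher.SearchAsync("red");

        // Collaboration test: the client asks the right question of the "server"
        await repository.Received().FindByNameAsync("red");
    }
}

public class TeamRepositoryContractTests
{
    [Fact]
    public async Task CanAnswerTheQuestionTheClientAsks()
    {
        // Contract test: the real "server" honours the interaction the mock above expected
        var repository = new TeamRepository(TestDatabase.ConnectionString);

        var results = await repository.FindByNameAsync("red");

        results.ShouldNotBe(null);
    }
}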

While I disagree with applying this class-by-class through every layer of your application I think that you can and should still apply this at any point that you do need to make a break between two parts of your code that you want to test independently. This type of approach also works well when testing across separate systems/applications too. When testing across systems it’s worth looking at consumer-driven contracts and in particular at Pact (.NET version).

Unit, integration, subcutaneous, UI, fast, slow, mocks, TDD, isolation and scams… What is this? I don’t even!

As outlined in the first post of my Automated Testing blog series I’ve been on a journey of self reflection and discovery about how best to write, structure and maintain automated tests.

The most confusing and profound realisations that I’ve had relate to how best to cover a codebase in tests and what type and speed those tests should be. The sorts of questions and considerations that come to mind about this are:

  • Should I be writing unit, subcutaneous, integration, etc. tests to cover a particular piece of code?
  • What is a unit test anyway? Everyone seems to have a different definition!
  • How do I get feedback as fast as possible – reducing feedback loops is incredibly important.
  • How much time/effort are we prepared to spend testing our software and what level of coverage do we need in return?
  • How do I keep my tests maintainable and how do I reduce the number of tests that break when I need to make a change to the codebase?
  • How do I make sure that my tests give me the maximum confidence that when the code is shipped to production it will work?
  • When should I be mocking the database, filesystem, etc.?
  • How do I ensure that my application is tested consistently?

In order to answer these questions and more I’ve watched a range of videos and read a number of blog posts from prominent people, spent time experimenting and reflecting on the techniques via the projects I work on (both professionally and with my Open Source Software work) and tried to draw my own conclusions.

There are some notable videos that I’ve come across that, in particular, have helped me with my learning and realisations so I’ve created a series of posts around them (and might add to it over time if I find other posts). I’ve tried to summarise the main points I found interesting from the material as well as injecting my own thoughts and experience where relevant.

There is a great talk by Gary Bernhardt called Boundaries. For completeness, it is worth looking at in relation to the topics discussed in the above articles. I don’t have much to say about this yet (I’m still getting my head around where it fits in) apart from the fact that code that maps input(s) to output(s) without side effects is obviously very easy to test, and I’ve found that where I have used immutable value objects in my domain model it has made testing easier.
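
As a trivial illustration of that last point (an illustrative value object of my own, not from Gary’s talk):

public sealed class Money
{
    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public decimal Amount { get; private set; }
    public string Currency { get; private set; }

    public Money Add(Money other)
    {
        if (Currency != other.Currency)
            throw new InvalidOperationException("Cannot add amounts in different currencies");

        // No side effects - a new value is returned, so assertions are trivial:
        // new Money(10m, "AUD").Add(new Money(5m, "AUD")).Amount == 15m
        return new Money(Amount + other.Amount, Currency);
    }
}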

Summary

I will summarise my current thoughts (this might change over time) by revisiting the questions I posed above:

  • Should I be writing unit, subcutaneous, integration, etc. tests to cover a particular piece of code?
    • Typical consultant answer: it depends. In general I’d say write the fastest possible test you can that gives you the minimum required confidence and bakes in the minimum amount of implementation details.
    • I’ve had a lot of luck covering line-of-business web apps with mostly subcutaneous tests against the MVC controllers, with a smattering of unit tests to check conventions and test really complex logic and I typically see how far I can get without writing UI tests, but when I do I test high-value scenarios or complex UIs.
  • What is a unit test anyway? Everyone seems to have a different definition!
  • How do I get feedback as fast as possible – reducing feedback loops is incredibly important.
    • Follow Jimmy’s advice and focus on writing as many tests that are as fast as possible rather than worrying about whether a test is a unit test or integration test.
    • Be pragmatic though – you might get adequate speed but a higher level of confidence by integrating your tests with the database, for instance (this has worked well for me)
  • How much time/effort are we prepared to spend testing our software and what level of coverage do we need in return?
    • I think it depends on the application – the product owner, users and business in general will all have different tolerances for risk of something going wrong. Do the minimum amount that’s needed to get the amount of confidence that is required.
    • In general I try to follow the mantra of “challenge yourself to start simple then inspect and adapt” (thanks Jess for helping refine that). Start off with the simplest testing approach that will work and if you find you are spending too long writing tests or the tests don’t give you the right confidence then adjust from there.
  • How do I keep my tests maintainable and how do I reduce the number of tests that break when I need to make a change to the codebase?
    • Focus on removing implementation details from tests. Be comfortable testing multiple classes in a single test (use your production DI container!).
    • Structure the tests according to user behaviour – they are less likely to have implementation details and they form better documentation of the system.
  • How do I make sure that my tests give me the maximum confidence that when the code is shipped to production it will work?
    • Reduce the amount of mocking you use to the bare minimum – hopefully just things external to your application so that you are testing production-like code paths.
    • Subcutaneous tests are a very good middle ground between low-level implementation-focused unit tests and slow and brittle UI tests.
  • When should I be mocking the database, filesystem, etc.?
    • When you need the speed and are happy to forgo the lower confidence.
    • Also, if they are external to your application or not completely under your application’s control e.g. a database that is touched by multiple apps and your app doesn’t run migrations on it and control the schema.
  • How do I ensure that my application is tested consistently?
    • Come up with a testing strategy and stick with it. Adjust it over time as you learn new things though.
    • Don’t be afraid to use different styles of test as appropriate – e.g. the bulk of tests might be subcutaneous, but you might decide to write lower level unit tests for complex logic.

In closing, I wanted to show a couple of quotes that I think are relevant:

Fellow Readifarian Kahne Raja recently said this on an internal Yammer discussion and I really identify with it:

“We should think about our test projects like we think about our solution projects. They involve complex design patterns and regular refactoring.”

Another Readifarian, Pawel Pabich, made the important point that:

“[The tests you write] depend[s] on the app you are writing. [A] CRUD Web app might require different tests than a calculator.”

I also like this quote from Kent Beck:

“I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence.”

Creating a SharePoint-style user lookup control backed by Azure AD

This post describes how to create a SharePoint-style user lookup control backed by Azure AD.

Practical Microsoft Azure Active Directory Blog Series

This post is part of the Practical Microsoft Azure Active Directory Blog Series.

SharePoint-style user lookup

If you have used SharePoint then you will likely be familiar with a user lookup control that allows you to type in someone’s name, press ctrl+k (or click on the Check Names button) and it will find the people from Active Directory and complete them for you.

This screenshot shows an example of what I’m describing:

SharePoint user lookup control

If you are using Azure AD for authentication of your application then the Graph API that it provides allows you to create a similar control.

This post provides instructions for a way to get this kind of control working. It’s based on a commit to the example repository and you can see it in action on the example website (note: the username and password to log in are available on the example repository homepage).

Querying the graph API

In a previous post I introduced the AzureADGraphConnection class to wrap up calls to the Azure AD Graph API. For the purposes of adding a user lookup there are two methods that are useful to add:

        public IList<User> SearchUsers(string query)
        {
            var displayNameFilter = ExpressionHelper.CreateStartsWithExpression(typeof(User), GraphProperty.DisplayName, query);
            var surnameFilter = ExpressionHelper.CreateStartsWithExpression(typeof(User), GraphProperty.Surname, query);
            var usersByDisplayName = _graphConnection
                .List<User>(null, new FilterGenerator { QueryFilter = displayNameFilter })
                .Results;
            var usersBySurname = _graphConnection
                .List<User>(null, new FilterGenerator { QueryFilter = surnameFilter })
                .Results;

            return usersByDisplayName.Union(usersBySurname, new UserComparer()).ToArray();
        }

        public User GetUser(Guid id)
        {
            try
            {
                return _graphConnection.Get<User>(id.ToString());
            }
            catch (ObjectNotFoundException)
            {
                return null;
            }
        }

        class UserComparer : IEqualityComparer<User>
        {
            public bool Equals(User x, User y)
            {
                return x.ObjectId == y.ObjectId;
            }

            public int GetHashCode(User obj)
            {
                return obj.ObjectId.GetHashCode();
            }
        }
  • The SearchUsers method searches the DisplayName and Surname properties to see if they start with the given search string
    • The search is case insensitive
    • The API didn’t seem to support an OR expression so I had to issue 2 API calls and union them together using a comparison class to remove duplicates
    • It returns a list of User objects, which leaks an internal implementation detail; you could instead construct a read model class that only contains the properties you are interested in (see the sketch after this list), but that’s left as an exercise for the reader
  • The GetUser method allows you to pass in the ObjectId of a specific user to get back the corresponding User object
    • This is useful for post-back when you have used the user lookup control to get the ObjectId of a user
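As a rough illustration of the read model idea mentioned above (the UserLookupResult name is purely illustrative and not part of the example project), something like this would stop the Graph API’s User type leaking out of SearchUsers while keeping the property names the userlookup plugin expects by default:

    public class UserLookupResult
    {
        // Property names match the plugin's default id and name properties
        public string ObjectId { get; set; }
        public string DisplayName { get; set; }
    }

SearchUsers could then project to it before returning, e.g. .Select(u => new UserLookupResult { ObjectId = u.ObjectId, DisplayName = u.DisplayName }).ToArray().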

For this to work you need to ensure your Azure AD application has the permissions to read data from the directory as discussed in the last post.

Creating the user control semantics

The SharePoint control works like this: it automatically selects the user if only one is returned for the given search term, it highlights the search as incorrect when nobody is found, it presents a drop-down list if multiple people are returned, and it removes the whole resolved person if a “resolved” name is backspaced.

In order to try and model these semantics as closely and simply as I could I created a jQuery plugin in the example repository called userlookup.

I wanted to make it useful by default so it contains a bunch of default options to get you up and running quickly, but it also allows you to completely customise its behaviour.

The configuration options include:

  • Whether or not to show a lookup button on the right of the control and what the HTML for it should be (true by default with HTML for appending an icon in a Bootstrap 3 application)
  • What keypress(es) should prompt the lookup to occur (by default ctrl+k)
  • What classes should be given to the control when a user is successfully found, when no match can be found, and while it’s querying the server for users (matchfound, nomatchfound and loading by default)
  • Whether or not the id of the user should be populated into a hidden field when a user is selected and, if so, the name of the data attribute that identifies that hidden field (true and data-user-lookup by default)
  • The URL of the API to call to query users (the API must accept a GET request with a query string of ?q=searchTerm and return a JSON list of users) (/api/userlookup by default)
  • The name of the property of the user’s name and id properties in the JSON returned by the API (DisplayName and ObjectId by default)
  • The options to pass to the Twitter Typeahead plugin, which is used for the drop-down selection of users (some sensible defaults by default)

One thing to note is that there is nothing specific in the plugin about Azure AD (apart from the default name and id property names, which can be overridden) so this plugin could easily be used for querying other user stores.

As mentioned above, the plugin uses Twitter Typeahead for auto-complete when multiple users are returned from the search.

In order to use the plugin you need to:

  1. Include Twitter Typeahead JavaScript in the page (note: requires you have jQuery referenced on the page)
  2. Include userlookup JavaScript in the page
  3. Include any required styling for Twitter Typeahead (e.g. for Bootstrap 3)
  4. Include any required styling for userlookup (e.g. what I have in the example repository)
  5. Create the HTML for the control on your page, e.g. using Bootstrap 3:
            <div class="form-group">
                <label class="col-md-4 control-label" for="UserName">User name</label>
                <div class="col-md-6">
                    <input type="hidden" name="UserId" id="UserId" value="@Model.UserId"/>
                    <div class="input-group">
                        <input id="UserName" name="UserName" type="text" placeholder="Enter user's name and hit ctrl+k" class="form-control" required autofocus value="@Model.UserName" data-user-lookup="UserId">
                    </div>
                </div>
            </div>
    
  6. Invoke the userlookup plugin, e.g.:
    <script type="text/javascript">
        $("[data-user-lookup]").userlookup();
        // Or you might want to configure some options, e.g. in this case the API url from a Razor MVC view
        $("[data-user-lookup]").userlookup({apiUrl: "@Url.Action("Search", "UserLookup")"});
    </script>
    

This will mean that you can type a query into the UserName field, press ctrl+k or click the lookup button, and the control will then guide you through “resolving” the user and, once found, set their id into the UserId field.

Creating the API

The API can be created using anything that can return JSON. The example project contains an ASP.NET MVC action as an example:

        public ActionResult Search(string q)
        {
            var users = _graphConnection.SearchUsers(q);

            return Json(users, JsonRequestBehavior.AllowGet);
        }

Dealing with the input on the server-side

The example repository contains an example of resolving the selected user and then re-displaying it in the same view; a slightly more realistic example is shown below:

    public class UserLookupController : Controller
    {
        ...

        public ActionResult Index()
        {
            return View(new UserLookupViewModel());
        }

        [HttpPost]
        public ActionResult Index(UserLookupViewModel vm)
        {
            vm.User = ModelState.IsValid ? _graphConnection.GetUser(vm.UserId.Value) : null;
            if (!ModelState.IsValid || vm.User == null)
                return View(vm);

            // Do stuff
        }
    }

    public class UserLookupViewModel
    {
        private User _user;

        [Required]
        public Guid? UserId { get; set; }

        [Required]
        public string UserName { get; set; }

        [ReadOnly(true)]
        public User User
        {
            get
            {
                return _user;
            }
            set
            {
                _user = value;
                if (_user == null)
                {
                    UserId = null;
                    return;
                }

                UserId = Guid.Parse(_user.ObjectId);
                UserName = _user.DisplayName;
            }
        }
    }

Summary

This blog post illustrated some example code to create a SharePoint-like user lookup control. Of course, you could also just use Twitter Typeahead on its own and connect it to the user lookup API – that would perform more API calls though, since it isn’t triggered by an explicit user action, but arguably it’s more discoverable for users.

Feel free to use the userlookup jQuery plugin in your projects; it’s released under the MIT license as part of the example project.

Add role-based authorisation based on Azure AD group membership

This post describes how to use Azure AD groups for role-based authorisation in your ASP.NET web application.

Practical Microsoft Azure Active Directory Blog Series

This post is part of the Practical Microsoft Azure Active Directory Blog Series.

Add role-based authorisation based on Azure AD group membership

These instructions will help you easily add role-based authorisation based on Azure AD group membership to your existing ASP.NET application with Azure AD authentication. The links point either to a commit in the example project or to relevant documentation.

Note: Ignore the ...‘s and replace the {SPECIFIED_VALUES} with the correct values.

  1. Create groups in your Azure AD tenant
  2. Assign your users to relevant groups
  3. Configure your Azure AD application to have application permissions to read directory data from Azure Active Directory
    • If you get an “Insufficient privileges to complete the operation.” exception then you might need to wait a few minutes or an hour, since it seems to cache the old permissions
  4. In the Configure tab of your Azure AD application create a key in the keys section and copy it
  5. Configure the client id of your Azure AD application and the key you created in the last step in your web.config file
      <appSettings>
        ...
        <add key="ida:ClientId" value="{AZURE_AD_APP_CLIENT_ID}" />
        <add key="ida:Password" value="{AZURE_AD_APP_KEY}" />
      </appSettings>
    
  6. Install-Package Microsoft.Azure.ActiveDirectory.GraphClient -Version 1.0.3
  7. Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
  8. Create an AzureADGraphConnection class:
    Infrastructure\Auth\AzureADGraphConnection.cs
    
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Claims;
    using Microsoft.Azure.ActiveDirectory.GraphClient;
    using Microsoft.IdentityModel.Clients.ActiveDirectory;
    
    namespace {YOUR_NAMESPACE}.Infrastructure.Auth
    {
        public interface IAzureADGraphConnection
        {
            IList<string> GetRolesForUser(ClaimsPrincipal userPrincipal);
        }
    
        public class AzureADGraphConnection : IAzureADGraphConnection
        {
            const string Resource = "https://graph.windows.net";
            public readonly Guid ClientRequestId = Guid.NewGuid();
            private readonly GraphConnection _graphConnection;
    
            public AzureADGraphConnection(string tenantName, string clientId, string clientSecret)
            {
                var authenticationContext = new AuthenticationContext("https://login.windows.net/" + tenantName, false);
                var clientCred = new ClientCredential(clientId, clientSecret);
                var token = authenticationContext.AcquireToken(Resource, clientCred).AccessToken;
    
                _graphConnection = new GraphConnection(token, ClientRequestId);
            }
    
            public IList<string> GetRolesForUser(ClaimsPrincipal userPrincipal)
            {
                return _graphConnection.GetMemberGroups(new User(userPrincipal.Identity.Name), true)
                    .Select(groupId => _graphConnection.Get<Group>(groupId))
                    .Where(g => g != null)
                    .Select(g => g.DisplayName)
                    .ToList();
            }
        }
    }
    
  9. Create an AzureADGraphClaimsAuthenticationManager class:
    Infrastructure\Auth\AzureADGraphClaimsAuthenticationManager.cs
    
    using System.Configuration;
    using System.Security.Claims;
    
    namespace AzureAdMvcExample.Infrastructure.Auth
    {
        public class AzureADGraphClaimsAuthenticationManager : ClaimsAuthenticationManager
        {
            public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
            {
                if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
                    return incomingPrincipal;
    
                // Ideally this should be the code below so the connection is resolved from a DI container, but for simplicity of the demo I'll leave it as a new statement
                //var graphConnection = DependencyResolver.Current.GetService<IAzureADGraphConnection>();
                var graphConnection = new AzureADGraphConnection(
                    ConfigurationManager.AppSettings["AzureADTenant"],
                    ConfigurationManager.AppSettings["ida:ClientId"],
                    ConfigurationManager.AppSettings["ida:Password"]);
    
                var roles = graphConnection.GetRolesForUser(incomingPrincipal);
                foreach (var r in roles)
                    ((ClaimsIdentity)incomingPrincipal.Identity).AddClaim(
                        new Claim(ClaimTypes.Role, r, ClaimValueTypes.String, "GRAPH"));
                return incomingPrincipal;
            }
        }
    }
    
  10. Configure your application to use the AzureADGraphClaimsAuthenticationManager class for processing claims-based authentication in your web.config file:
      <system.identityModel>
        <identityConfiguration>
          <claimsAuthenticationManager type="{YOUR_NAMESPACE}.Infrastructure.Auth.AzureADGraphClaimsAuthenticationManager, {YOUR_ASSEMBLY_NAME}" />
          ...
        </identityConfiguration>
      </system.identityModel>
    
  11. Add [Authorize(Roles = "{AZURE_AD_GROUP_NAME}")] to any controller or action you want to restrict by role and call User.IsInRole("{AZURE_AD_GROUP_NAME}") to check if a user is a member of a particular group

Explaining the code

Microsoft.Azure.ActiveDirectory.GraphClient and AzureADGraphConnection

The Microsoft.Azure.ActiveDirectory.GraphClient package provides a wrapper over the Azure AD Graph API, which allows you to query the users, groups, etc. in your tenant.

The AzureADGraphConnection class constructs a graph client connection and exposes a method that takes a user and returns a list of the groups that user is a member of.

This is needed because the claims that the Azure AD token comes with by default do not include any roles.

AzureADGraphClaimsAuthenticationManager

This class provides a claims authentication manager that hooks into the point where authentication occurs and augments the ClaimsPrincipal that is generated by default: it gets the Azure AD groups that the user is a member of (via AzureADGraphConnection) and turns each of them into a ClaimTypes.Role claim. ClaimTypes.Role is the claim type that automatically hooks into ASP.NET’s role processing.

The web.config change is how you override the Claims Authentication Manager.

Using an enum for roles

To avoid the use of magic strings in your application, and assuming the group names in AD are relatively stable, you can encapsulate them in an enum. There is a corresponding commit in the example project that demonstrates how to do it.

This involves three main steps:

  1. Define an enum with your roles and use the [Description] attribute to tag each role with the display name of the equivalent Azure AD group
  2. Parse the group name into the corresponding enum value by using Humanizer.Dehumanize in AzureADGraphConnection
  3. Create an AuthorizeRolesAttribute that extends AuthorizeAttribute, and an extension on IClaimsPrincipal that provides an IsInRole method, both of which take the enum you defined rather than magic strings to define the roles (a rough sketch follows below)
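As a rough sketch of what steps 1 and 3 might look like (the enum values, group names and attribute implementation here are illustrative rather than taken from the example project; per step 2, the role claims are assumed to contain the dehumanized group name, i.e. the enum value name):

    using System.ComponentModel;
    using System.Linq;
    using System.Web.Mvc;

    public enum Role
    {
        // The [Description] holds the display name of the corresponding Azure AD group
        [Description("App Administrators")]
        AppAdministrators,

        [Description("App Users")]
        AppUsers
    }

    public class AuthorizeRolesAttribute : AuthorizeAttribute
    {
        public AuthorizeRolesAttribute(params Role[] roles)
        {
            // AuthorizeAttribute expects a comma-separated string of role names;
            // the role claims are assumed to contain the enum value names
            Roles = string.Join(",", roles.Select(r => r.ToString()));
        }
    }

Usage would then be [AuthorizeRoles(Role.AppAdministrators)] rather than [Authorize(Roles = "AppAdministrators")].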

Explaining the code behind authenticating MVC5 app with Azure AD

This post explains the code outlined in the last post on installing Azure AD authentication to an existing (or new) ASP.NET MVC 5 (or 3 or 4) application.

Practical Microsoft Azure Active Directory Blog Series

This post is part of the Practical Microsoft Azure Active Directory Blog Series.

Microsoft.Owin.Security.ActiveDirectory

The Microsoft.Owin.Security.ActiveDirectory package is part of the Katana project, which produces a bunch of libraries that build on top of Owin.

It allows your application to accept a Bearer Authorization header in the HTTP request that contains a JSON Web Token (JWT) issued by Azure AD, and will create a ClaimsPrincipal on the thread from that token. This is mainly useful for creating Web APIs and thus is optional if you just need web authentication.

Note: if you use bearer tokens make sure you request resources with HTTPS.

This package is enabled by the app.UseWindowsAzureActiveDirectoryBearerAuthentication(...) call in Startup.cs.

There are two configuration settings in the Startup.cs code for this package (a sketch of the call is shown after the list below):

  • TokenValidationParameters – this controls how tokens that are presented by a user are checked for validity
    • In the code example in the previous blog post we set ValidAudience, which ensures that any tokens presented are valid for the given audience (alternatively, you can use ValidAudiences if you want to accept tokens from multiple audiences)
    • There is more information later in this post about audiences
  • Tenant – This sets which Azure AD tenant you are accepting tokens from
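For context, a sketch of what that Startup.cs call might look like (the ida:Audience and ida:Tenant app setting names are assumptions rather than values from the example project):

    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security.ActiveDirectory;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseWindowsAzureActiveDirectoryBearerAuthentication(
                new WindowsAzureActiveDirectoryBearerAuthenticationOptions
                {
                    // Only accept tokens issued for this App ID URI (the audience)
                    TokenValidationParameters = new TokenValidationParameters
                    {
                        ValidAudience = ConfigurationManager.AppSettings["ida:Audience"]
                    },
                    // Only accept tokens issued by this Azure AD tenant
                    Tenant = ConfigurationManager.AppSettings["ida:Tenant"]
                });
        }
    }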

WSFederationAuthenticationModule (WS-FAM) and SessionAuthenticationModule (SAM)

These modules are part of WIF (via System.IdentityModel.Services) and are the mechanism by which the authentication hooks into ASP.NET. For this to work you need to add a bunch of configuration to web.config, but Microsoft is currently working on OWIN-only components that hide all of that away and provide a much simpler configuration, so in the future you won’t need to do any of this. At the time of writing the samples don’t quite seem to work (for me at least), so for now it makes sense to keep using the WIF modules, but it’s worth keeping an eye on what happens with the samples Microsoft is working on, in particular the OpenIdConnect one.

So what do these WIF modules do? From the WSFederationAuthenticationModule documentation:

When an unauthenticated user tries to access a protected resource, the [Relying Party (RP)] returns a “401 authorization denied” HTTP response. The WS-FAM intercepts this response instead of allowing the client to receive it, then it redirects the user to the specified [Security Token Service (STS)]. The STS issues a security token, which the WS-FAM again intercepts. The WS-FAM uses the token to create an instance of ClaimsPrincipal for the authenticated user, which enables regular .NET Framework authorization mechanisms to function.

Because HTTP is stateless, we need a way to avoid repeating this whole process every time that the user tries to access another protected resource. This is where the SessionAuthenticationModule comes in. When the STS issues a security token for the user, SessionAuthenticationModule also creates a session security token for the user and puts it in a cookie. On subsequent requests, the SessionAuthenticationModule intercepts this cookie and uses it to reconstruct the user’s ClaimsPrincipal.

From the SessionAuthenticationModule documentation:

The SAM adds its OnAuthenticateRequest event handler to the HttpApplication.AuthenticateRequest event in the ASP.NET pipeline. This handler intercepts sign-in requests, and, if there is a session cookie, deserializes it into a session token, and sets the Thread.CurrentPrincipal and HttpContext.User properties to the claims principal contained in the session token.

These modules are then configured by the system.identityModel and system.identityModel.services sections in web.config.

issuerNameRegistry and signing key refresh

This configures which tenants and issuers of authentication your application trusts as well as the thumbprint of their public signing key certificates.

The certificate thumbprints will change over time for security reasons, so hardcoding the keys in web.config is not a good option; you need to make sure you use code that refreshes the keys for you. The simplest, built-in way to do that is ValidatingIssuerNameRegistry.WriteToConfig, which updates web.config for you automatically when the keys change. That’s the instruction that was given in the last blog post.
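For reference, the refresh code typically looks something like the following (similar in shape to what the Identity and Access tooling generates; the ida:FederationMetadataLocation app setting name is an assumption) and would be called from Application_Start:

    using System;
    using System.Configuration;
    using System.IdentityModel.Tokens;

    public static class IssuerKeyRefresher
    {
        public static void RefreshValidationSettings()
        {
            // Rewrite the issuerNameRegistry thumbprints in web.config based on the
            // tenant's federation metadata document
            var configPath = AppDomain.CurrentDomain.BaseDirectory + "\\Web.config";
            var metadataAddress = ConfigurationManager.AppSettings["ida:FederationMetadataLocation"];
            ValidatingIssuerNameRegistry.WriteToConfig(metadataAddress, configPath);
        }
    }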

Another option is to store the keys in a database, which is what the default code that Visual Studio’s Identity and Access Tools add does (using Entity Framework). Yet another option is to store them in memory, and there is a class floating about that you can use to do that. Storing them in memory is probably the nicest option.

Audiences and realm

The audienceUris configuration in web.config allows your application to control a list of identifiers that will be accepted for the scope of a presented authentication token (in Azure AD this maps to the App ID URI of your Azure AD application).

The realm attribute in the wsFederation element of the federationConfiguration in web.config tells the WSFederationAuthenticationModule which WS-Federation wtrealm to specify for the request; this identifies the security realm that will be used. For Azure AD this is the App ID URI of the Azure AD application that should service the authentication request, so the value will generally be the same as the audience you configured unless you are doing multi-tenancy.

The difference between realm and audience is explained in this StackOverflow post.

securityTokenHandlers

A security token handler provides a way of interpreting a security token of some sort into a Claims Principal for a given request. The code in the previous post gets you to remove the default handler that takes care of looking at cookies in the request (SessionSecurityTokenHandler) because it uses DPAPI to encrypt and decrypt the cookies, and that doesn’t work in an environment with multiple web servers. Instead you are guided to add a MachineKeySessionSecurityTokenHandler, which uses the machine key to encrypt and decrypt the cookie.

The configured security token handler is what the SessionAuthenticationModule uses to serialise the token to, and deserialise it from, the session cookie.

certificateValidation

On first thought, it might be confusing to see that certificate validation is turned off, but this is by design. The validating issuer name registry as explained above is a replacement for standard certificate validation. The only information that I’ve been able to find that explains this further is a post on the MSDN forums.

To illustrate what happens when you try using certificate validation, you can change certificateValidationMode to, say, ChainTrust and then you will get the following error:

The X.509 certificate CN=accounts.accesscontrol.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.

HTTPS

You can ensure that the security cookie is set to require SSL with the requireSSL attribute of the cookieHandler element in web.config and ensure that the authentication requests require a HTTPS connection with the requireHttps attribute of the wsFederation element in web.config.

In production environments it’s absolutely essential that you set both to true, otherwise you are vulnerable to man-in-the-middle (MITM) attacks. You can also set them to true locally if you use HTTPS with IIS Express or a self-signed certificate with IIS.

issuer

Setting this attribute on the wsFederation element in web.config determines where sign-in and sign-out requests are redirected.

passiveRedirectEnabled

Setting this attribute on the wsFederation element in web.config to true allows the WSFederationAuthenticationModule to automatically redirect the user to the authentication server in the event of a 401. Without this set to true you would need to explicitly trigger the sign-in yourself (a rough sketch of doing that is shown below).
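A sketch of what explicitly triggering a sign-in might look like when passive redirects are disabled (this is not code from the example project; the controller and action names are illustrative):

    using System;
    using System.IdentityModel.Services;
    using System.Web.Mvc;

    public class AccountController : Controller
    {
        [AllowAnonymous]
        public ActionResult SignIn()
        {
            // Build a WS-Federation sign-in request against the configured issuer and realm
            // and redirect the browser to it
            var config = FederatedAuthentication.FederationConfiguration.WsFederationConfiguration;
            var signInMessage = new SignInRequestMessage(new Uri(config.Issuer), config.Realm);
            return Redirect(signInMessage.WriteQueryString());
        }
    }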

reply

Setting this attribute on the wsFederation element in web.config allows the application to control where the user is taken after they authenticate. The last post didn’t tell you to set it because it’s not required by default: when using Azure AD, if it’s not specified, the user will be redirected to the first Reply URL specified in the Azure AD application.

At first glance this seems to mean you can only have a one-to-one relationship between an Azure AD application and a web application requiring authentication. You can actually add multiple reply URLs to your Azure AD application though. This fact, in combination with the reply attribute, means that you can support multiple web applications (e.g. local, dev, staging, prod or even just different applications altogether) with the same Azure AD application. You just need to config transform your web.config file for each environment as explained in this post.

If you are in a situation where you want to control the reply URL with just an app setting (e.g. you are using VSO to deploy different branches to separate Azure Web Sites) then you can set the reply URL in code like so:

    public static class IdentityConfig
    {
        public static void ConfigureIdentity()
        {
            ...
            FederatedAuthentication.FederationConfigurationCreated += FederatedAuthentication_FederationConfigurationCreated;
        }

        ...

        private static void FederatedAuthentication_FederationConfigurationCreated(object sender, FederationConfigurationCreatedEventArgs e)
        {
            var federationConfiguration = new FederationConfiguration();
            federationConfiguration.WsFederationConfiguration.Reply =
                ConfigurationManager.AppSettings["AuthenticationReplyUrl"];
            e.FederationConfiguration = federationConfiguration;
        }
    }

Anti-forgery config

AntiForgeryConfig.UniqueClaimTypeIdentifier allows you to set which claim from the claims-based authentication token should be used to uniquely identify a user for the purposes of creating an anti-forgery token. Side note: this post about anti-forgery tokens is great. The claim that was shown in the last post is the correct one to use for Azure AD; a sketch of the configuration is shown below.
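As a sketch of that configuration (assuming the claim in question is Azure AD’s objectidentifier claim, which is a stable per-user identifier), it looks something like this, called from Application_Start:

    using System.Web.Helpers;

    public static class AntiForgeryConfiguration
    {
        public static void Configure()
        {
            // Use the Azure AD object id claim to uniquely identify users for anti-forgery tokens
            AntiForgeryConfig.UniqueClaimTypeIdentifier =
                "http://schemas.microsoft.com/identity/claims/objectidentifier";
        }
    }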

Logout

There are two parts to the logout code in the last post. Firstly, there is a call to the SessionAuthenticationModule, which causes the cookie you have on the current site to be dropped by your browser. Secondly, there is a redirect to a URL that the WS-Federation code generates, which logs you out of the source system (in this case your Azure AD session) and then redirects you back to a callback page marked with [AllowAnonymous] so you don’t immediately get redirected back to the login page. A rough sketch of what that looks like is shown below.
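This sketch is not the exact code from the last post (the controller and action names are illustrative), but it shows the two parts described above:

    using System;
    using System.IdentityModel.Services;
    using System.Web.Mvc;

    public class AccountController : Controller
    {
        public ActionResult Logout()
        {
            // Part 1: drop the session cookie for the current site
            FederatedAuthentication.SessionAuthenticationModule.SignOut();

            // Part 2: redirect to the issuer (Azure AD) to end the session there, asking it to
            // send the user back to an [AllowAnonymous] callback page afterwards
            var config = FederatedAuthentication.FederationConfiguration.WsFederationConfiguration;
            var callbackUrl = Url.Action("LoggedOut", "Account", null, Request.Url.Scheme);
            var signOutMessage = new SignOutRequestMessage(new Uri(config.Issuer), callbackUrl);
            signOutMessage.SetParameter("wtrealm", config.Realm);
            return Redirect(signOutMessage.WriteQueryString());
        }

        [AllowAnonymous]
        public ActionResult LoggedOut()
        {
            return View();
        }
    }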

Announcing AspNet.Mvc.Grid

Whenever I need to display tables of data in an ASP.NET MVC application I end up pulling in the MVCContrib library to use its Grid Helper.

The Grid helper is really cool, it allows you to take a class like:

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public DateTime DateOfBirth { get; set; }
}

And create a Razor view like the following (including appending some Bootstrap classes):

@model IEnumerable<Person>

<h1>Person List</h1>

@Html.Grid(Model).Columns(cb =>
{
    cb.For(p => Html.ActionLink(p.Name, "Detail", "People", new {p.Id}, null)).Named("Name");
    cb.For(p => p.Email);
    cb.For(p => p.DateOfBirth);

}).Attributes(@class => "table table-striped table-hover table-condensed").HeaderRowAttributes(new Dictionary<string, object> { { "class", "active" } })

This will then output HTML like the following:

<h1>Person List</h1>
<table class="table table-striped table-hover table-condensed">
   <thead>
      <tr class="active">
         <th>Name</th>
         <th>Email</th>
         <th>Date Of Birth</th>
      </tr>
   </thead>
   <tbody>
      <tr class="gridrow">
         <td><a href="/People/Detail/1">Name1</a></td>
         <td>Email1</td>
         <td>19/08/2014 12:00:00 AM</td>
      </tr>
      <tr class="gridrow_alternate">
         <td><a href="/People/Detail/2">Name2</a></td>
         <td>Email2</td>
         <td>20/08/2014 12:00:00 AM</td>
      </tr>
      <tr class="gridrow">
         <td><a href="/People/Detail/3">Name3</a></td>
         <td>Email3</td>
         <td>21/08/2014 12:00:00 AM</td>
      </tr>
   </tbody>
</table>

This means you don’t have to hand-write the nasty, tedious table HTML: it takes care of creating the thead and tbody for you, it’s type safe (thanks Razor!) and there are some sensible defaults that save you time (like inferring the th title from the property name).

The problem with MvcContrib

There are, however, a few big problems with MvcContrib – it’s not very well maintained and it contains a lot of bloat for stuff you will never need (and frankly shouldn’t use).

To be honest, out of everything in there the Grid is the only thing I would touch.

It does actually have an MVC5 package, but that package contains a reference to Mvc4Futures and this can have a really bad impact if you are using MVC5 due to one of the breaking changes in MVC5. If you have code that assembly-scans the current AppDomain, for instance, then you will soon come across this error:

Inheritance security rules violated while overriding member: ‘Microsoft.Web.Mvc.CreditCardAttribute.GetClientValidationRules(System.Web.Mvc.ModelMetadata, System.Web.Mvc.ControllerContext)’. Security accessibility of the overriding method must match the security accessibility of the method being overriden.

Creating AspNet.Mvc.Grid

Given that roadblock on a current project, and given I don’t really want to pull in all the bloat of MvcContrib, I decided to pull the Grid code out of MvcContrib and put it into its own library that targets .NET 4.5 and MVC 5. This is allowed under the Apache 2.0 license that the MvcContrib code is licensed under.

Hence, I’d like to announce the AspNet.Mvc.Grid library! It has been published to NuGet as per usual.

The only difference you will notice between it and MvcContrib is that the namespaces are different. This was a conscious decision to make the library less confusing for completely new users.

Enjoy!

Deployment and Development using PhoneGap Build for a Cordova/PhoneGap app

I recently worked on a project to build a mobile app in iOS and Android. The team decided to use Cordova and Ionic Framework to build it. I highly recommend Ionic – it’s an amazingly productive and feature-rich framework with a really responsive development team! It enabled us to use AngularJS (and the application structure and maintenance advantages that affords) and really sped up the development time in a number of ways. We were also able to get an almost-native performance experience for the application using features like collection repeat (virtualised list) as well as the in-built performance tuning the Ionic team have performed.

When it was time to set up an automated build pipeline we settled on using PhoneGap Build and its API which we then called from OctopusDeploy as part of a NuGet package we created.

PhoneGap Build is Adobe’s cloud build service that can build iOS, Android, Windows Phone etc. apps for you based on a zip file with your Cordova/PhoneGap files.

We decided on PhoneGap Build because:

  • It has been around a while (i.e. it’s relatively mature)
  • It’s backed by a big name (Adobe)
  • It has a decent API
  • It meant we didn’t have to set up a Mac server, which would have been a pain
  • It’s free for a single project :)

This post covers how we set up our automated deployment pipeline for our native apps using OctopusDeploy and PhoneGap Build. I have also published a companion post that covers everything you need to know about PhoneGap Build (read: what I wish I knew before I started using PhoneGap Build because it’s darn confusing!).

Deployment pipeline

This is the deployment pipeline we ended up with:

  1. Developer commits a change, submits a pull request and that pull request gets merged
  2. Our TeamCity continuous integration server compiles the server-side code, runs server-side code tests and runs client-side (app) tests
  3. If all is well then a NuGet package is created for the server-side code and a NuGet package is created for the app – both are pushed to OctopusDeploy and the server-side code is deployed to a staging environment (including a web server version of the app)
    • Native functionality wouldn’t work, but you could access and use the bulk of the app to test it out
    • To test native functionality developers could easily manually provision a staging version of the app if they plugged a phone into their computer via a USB cable and executed a particular script
  4. The product owner logs into OctopusDeploy and promotes the app to production from staging whereupon OctopusDeploy would:
    • Deploy the server-side code to the production environment
    • Use the PhoneGap Build API to build an iOS version with an over-the-air compatible provisioning profile
    • Download the over-the-air .ipa
      • Upload this .ipa to Azure blob storage
      • Beta testers could then download the latest version by visiting an HTML page linking to the package (see below for details)
    • Build the production Android and iOS versions of the app using the PhoneGap Build API
  5. The product owner then logs on to the PhoneGap Build site, downloads the .ipa / .apk files and uploads them to the respective app stores

End result

I personally wasn’t completely satisfied with this because I didn’t like the manual step at the end; however, app store deployments are apparently stuck in the 90s, so unless you want to screen-scrape the respective app stores you have to manually log in and upload the deployment file. Regardless, the end result was good:

  • Production deployments were the same binaries/content that were tested on the CI server and that had been successfully deployed and manually tested in the staging environment
  • We could go from developer commit to production in a few minutes (ignoring app store approval/roll-out times; remember – stuck in the 90s)
  • The product owner had complete control to deploy any version of the app he wanted to and it was easy for him to do so

How we did it

I have published the NodeJS script we used to automate the PhoneGap Build API to a Gist. It contains a very reusable wrapper around a PhoneGap Build API NodeJS library that makes it dead easy to write your own workflow in a very succinct manner.

To generate the zip file the NodeJS script used we simply grabbed the www directory (including config.xml) from our repository and zipped it up. That zip file and the NodeJS scripts went into the NuGet package that OctopusDeploy deployed when the app was promoted to production (we made this happen on a Tentacle residing on the OctopusDeploy server). I have published how we generated that NuGet package to a Gist as well (including how to set up over-the-air).

Further notes

Some notes about our approach:

  • Because we were able to make our backend APIs deploy to production at the same time as generating our production app packages, we were assured that the correct version of the backend was always available when the apps were deployed to the respective app stores.
  • It was important for us to make sure that our backend API didn’t have any breaking changes, otherwise existing app deployments (and any lingering ones where people haven’t updated) would break – the usual API management/versioning practices apply.
  • Whenever we wanted to add new Beta testers the provisioning profile for over-the-air needed to be re-uploaded to PhoneGap Build
    • Unfortunately, PhoneGap Build requires you to upload this alongside the private key – thus requiring someone with access to PhoneGap Build, the private key and the iOS developer account. It’s actually quite a convoluted process, so I’ve also published the instructions we provided to the person responsible for managing the Beta testing group to a Gist to illustrate it
  • We stored the passwords for the Android Keystore and iOS private keys as sensitive variables in OctopusDeploy so nobody needed to ever see them after they had been generated and stored in a password safe
    • This meant the only secret that was ever exposed on a regular basis was the password-protected over-the-air private key whenever a Beta tester was added

TestFlight

While we didn’t use it, it’s worth pointing out TestFlight. It’s a free service that makes it easier to keep track of a list of testers and their UUIDs, inform them of updates, and automatically create and host the necessary files for over-the-air distribution. It has an API that you can upload your .ipa to, so it should be simple enough to take the scripts I published above and upload the .ipa to TestFlight rather than Azure Blob Storage if you are interested in doing that.

Final Thoughts

PhoneGap Build is a useful and powerful service. As outlined in my companion post, it’s very different to Cordova/PhoneGap locally and this, combined with confusing documentation, resulted in me growing a strong dislike of PhoneGap Build while using it.

In writing my companion post however, I’ve been forced to understand it a lot more and I now think that as long as you have weighed up the pros and cons and still feel it’s the right fit then it’s a useful service and shouldn’t be discounted outright.
