TeamCity deployment pipeline (part 3: using OctopusDeploy for deployments)

This post outlines how using OctopusDeploy for deployments can fit into a TeamCity continuous delivery deployment pipeline.

Maintainable, large-scale continuous delivery with TeamCity series

This post is part of a blog series jointly written by myself and Matt Davies called Maintainable, large-scale continuous delivery with TeamCity:

  1. Intro
  2. TeamCity deployment pipeline
  3. Deploying Web Applications
    • MsDeploy (onprem and Azure Web Sites)
    • OctopusDeploy (nuget)
    • Git push (Windows Azure Web Sites)
  4. Deploying Windows Services
    • MsDeploy
    • OctopusDeploy
    • Git push (Windows Azure Web Sites Web Jobs)
  5. Deploying Windows Azure Cloud Services
    • OctopusDeploy
    • PowerShell
  6. How to choose your deployment technology

Using another tool for deployments

If you can use a single tool to create your deployment pipeline, covering continuous integration as well as deployments, then there are obvious advantages in terms of simplicity of configuration and management (a single set of project definitions, user accounts and permissions, a single UI to learn, etc.). This is one of the reasons we created this blog series; we loved how powerful TeamCity is out of the box and wanted to expose that awesomeness so other people could experience what we were experiencing.

Lately, we have also been experimenting with combining TeamCity with another tool to take care of the deployments: OctopusDeploy. There are a number of reasons we have been looking at it:

  • Curiosity; OctopusDeploy is getting a lot of attention in the .NET community so it’s hard not to notice it – it’s worth looking at it to see what it does
  • If you are coordinating complex deployments then OctopusDeploy takes care of managing that complexity and the specification of the deployment process in a manageable way (that would start to get complex with TeamCity)
    • For instance, if you need to perform database migrations then deploy multiple websites and a background worker together then OctopusDeploy makes this easy
  • Visualising the deployment pipeline is much easier in OctopusDeploy, which is great when you are trying to get non-technical product owners involved in deployments
  • It gives you a lot more flexibility around performing deploy-time actions out-of-the-box and makes it easier to do build once packages
    • It is possible to do the same things with MsDeploy, but it is more complex to do
  • It has great documentation
  • It comes with plugins that make using it with TeamCity a breeze

We wouldn’t always use OctopusDeploy exclusively, but it’s definitely a tool worth having a good understanding of to use judiciously because it can make your life a lot easier.

Generating the package

OctopusDeploy uses the NuGet package format to wrap up deployments; there are a number of ways you can generate the NuGet package in TeamCity:

  • OctoPack (if you install OctoPack into your project then it’s dead easy to generate the NuGet package – you can install the plugin to invoke it or pass /p:RunOctoPack=true through to MSBuild when you build your solution/project if using .NET)
  • TeamCity NuGet Pack step (if you are using a custom .nuspec file then it’s easy to package that up and automatically get it as an artifact by using TeamCity’s NuGet Pack step)
  • PowerShell or another scripting language (if you need more flexibility you can create a custom script to run that will package up the files)

Publishing the package to a NuGet feed so OctopusDeploy can access it

In order for OctopusDeploy to access the deployment packages they need to be published to a NuGet feed that the OctopusDeploy server can access. You have a number of options for how to do this from TeamCity:

  • Publish the package to TeamCity’s NuGet feed (this is easy – simply include the .nupkg file as an artifact and it will automatically be published to TeamCity’s feed)
  • Publish the package to OctopusDeploy’s internal NuGet server
  • Publish the package to some other NuGet server you set up that OctopusDeploy can access

Automating releases and deployments

In order to create releases and trigger deployments of those releases from TeamCity there are a number of options, including the OctopusDeploy plugin for TeamCity (which provides dedicated build steps) and the octo.exe command-line tool; a rough sketch of the command-line approach is shown below.
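
As a sketch only of the command-line approach, the C# helper below shells out to octo.exe to create a release and deploy it to an environment (you could equally do this from a PowerShell build step). The flag names are from memory, so verify them against the version of octo.exe you have installed.

using System;
using System.Diagnostics;

public static class OctopusReleaseRunner
{
    // Sketch only: shells out to octo.exe to create a release and deploy it to an environment.
    // The flag names below are assumptions - verify them against the octo.exe version you have installed.
    public static int CreateAndDeployRelease(string octoExePath, string project, string version,
        string environment, string octopusServerUrl, string apiKey)
    {
        var arguments = string.Format(
            "create-release --project \"{0}\" --version {1} --deployto \"{2}\" --server {3} --apiKey {4}",
            project, version, environment, octopusServerUrl, apiKey);

        var processInfo = new ProcessStartInfo(octoExePath, arguments)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var process = Process.Start(processInfo))
        {
            Console.WriteLine(process.StandardOutput.ReadToEnd());
            process.WaitForExit();
            return process.ExitCode; // a non-zero exit code should fail the TeamCity build step
        }
    }
}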

Introduction to Windows/Microsoft Azure

I often have people ask me how they can get started with Windows Microsoft Azure (that rename is going to take a while to stick for me). Generally I start by telling them to grab a free trial and start off by publishing some web apps using Azure Web Sites. It’s great to start there because it’s so easy to get up and running and it gives you a nice introduction to the Azure portal and some of the awesome diagnostics and configuration features that are available for a lot of the Azure services.

Recently, I did a few hours of presenting and lab facilitation at the Perth event for the Global Windows Azure Bootcamp with Matt Davies. In preparation for that event we prepared some content (slides, live demos and hands-on labs) under our open source organisation (MRCollective). If you are learning Azure then I encourage you to check it out, since I think it gives a nice overview of the main features of Azure you are likely to start out using, as well as some examples of how to use them and how to get started with them.

It’s a tiny bit out of date after the huge number of announcements that were made about Azure after that event, but it’s still 95% right and all the hands-on labs and demos should still work.

GitHubFlowVersion TeamCity MetaRunner

The other day I created a TeamCity Meta Runner that allows you to run GitHubFlowVersion against an MSBuild file. I thought I’d share it because it’s cool and demonstrates how powerful this new TeamCity feature is.

If you want to see more Meta Runners check out the GitHub repository that JetBrains has created.

Disclaimer: this runner doesn’t expose all of the options of GitHubFlowVersion.


<?xml version="1.0" encoding="UTF-8"?>
<meta-runner name="Run Solution using GitHubVersion">
  <description>Run Solution using GitHubVersion</description>
  <settings>
    <parameters>
      <param name="mr.GitHubFlowVersion.SolutionFile" value="" spec="text description='The .sln file relative to the working directory (or any MSBuild file).' display='normal' label='Solution (.sln):' validationMode='notempty'" />
    </parameters>
    <build-runners>
      <runner id="RUNNER_14" name="" type="Ant">
        <parameters>
          <param name="build-file">
          <![CDATA[
            <project name="MetaRunner">
              <property name="mr.GitHubFlowVersion.Version" value="1.3.2" />
              <target name="downloadGitHubFlowVersion">
                <echo>Downloading and extracting latest GitHubFlowVersion.exe...</echo>
                <exec executable="%teamcity.tool.NuGet.CommandLine.DEFAULT.nupkg%\tools\NuGet.exe" dir="${teamcity.build.tempDir}" failonerror="true">
                  <arg line="install GitHubFlowVersion -Version ${mr.GitHubFlowVersion.Version} -OutputDirectory &quot;${teamcity.build.tempDir}\packages&quot;"/>
                </exec>
                <echo>Downloaded and extracted latest GitHubFlowVersion.exe.</echo>
              </target>
              <target name="run" depends="downloadGitHubFlowVersion">
                <echo>Running GitHubFlowVersion.exe...</echo>
                <exec executable="${teamcity.build.tempDir}\packages\GitHubFlowVersion.${mr.GitHubFlowVersion.Version}\tools\GitHubFlowVersion.exe" dir="${teamcity.build.workingDir}" failonerror="true">
                  <arg line="/WorkingDirectory &quot;${teamcity.build.workingDir}&quot; /UpdateAssemblyInfo /ProjectFile=&quot;${mr.GitHubFlowVersion.SolutionFile}&quot;"/>
                </exec>
                <echo>Finished running GitHubFlowVersion.exe.</echo>
              </target>
            </project>
          ]]>
          </param>
          <param name="build-file-path" value="build.xml" />
          <param name="target" value="run" />
          <param name="teamcity.coverage.emma.include.source" value="true" />
          <param name="teamcity.coverage.emma.instr.parameters" value="-ix -*Test*" />
          <param name="teamcity.coverage.idea.includePatterns" value="*" />
          <param name="teamcity.step.mode" value="default" />
          <param name="use-custom-build-file" value="true" />
        </parameters>
      </runner>
    </build-runners>
    <requirements />
  </settings>
</meta-runner>

WordPress 2014 Theme Full Width

I recently decided to upgrade my theme from WordPress’ 2011 Theme to the new 2014 one. I think the typography of the 2014 one is absolutely beautiful. The problem I had with the built-in theme though is that it restricts the content to a 474px width, which might be fine for text-only posts, but when you have a lot of images and in particular source code snippets then it’s simply too squishy.

What I ended up doing to fix the problem was installing the Simple Custom CSS plugin and then adding the following CSS:

.site-content .entry-header,
.site-content .entry-content,
.site-content .entry-summary,
.site-content .entry-meta,
.page-content,
.post-navigation,
.image-navigation,
.comments-area {
    max-width: 900px !important;
}

.site-header,
.site {
    max-width: none !important;
}

Making Intent Clear / Derived Values [Automated Testing Series]

This is part of my ongoing Automated Testing blog series.

Making Intent Clear

I think one of the most important things when writing tests (apart from consistency) is that they are clear in intent. If you buy into the notion that tests form part of the documentation of your system then it’s really important, like all good documentation, that the tests are both readable and understandable.

I think there are a number of techniques that can help with this in various situations and there are three in particular that I will be covering in this sub-section of the blog series. I have already covered test naming and I think that has a big impact on clarity of intent.

Derived Values

There are a number of excellent blog posts by Mark Seemann (@ploeh) in his zero-friction TDD series that I have found useful in my ongoing research and one in particular that really resonated with me was the concept of derived values.

Consider the following code:

using System.Linq;
using NUnit.Framework;

public static class StringExtensions
{
    public static string ReverseString(this string str)
    {
        return string.Join("", str.Reverse().ToArray());
    }
}

public class NaiveTest
{
    [Test]
    public void GivenAString_WhenInverting_ThenReversedStringWillBeReturned()
    {
        const string str = "a string";
        var result = str.ReverseString();
        Assert.That(result, Is.EqualTo("gnirts a"));
    }
}

public class DerivedValueTest
{
    [Test]
    public void GivenAString_WhenInverting_ThenReversedStringWillBeReturned()
    {
        const string str = "a string";
        var expectedResult = new string(str.Reverse().ToArray());

        var result = str.ReverseString();

        Assert.That(result, Is.EqualTo(expectedResult));
    }
}

public class DataDrivenTest
{
    [Test]
    [TestCase("", "")]
    [TestCase("a", "a")]
    [TestCase("ab", "ba")]
    [TestCase("longer", "regnol")]
    [TestCase("a string with space", "ecaps htiw gnirts a")]
    [TestCase("num3rics&punctua10n!@$", "$@!n01autcnup&scir3mun")]
    public void GivenAString_WhenInverting_ThenReversedStringWillBeReturned(string input, string expectedResult)
    {
        var result = input.ReverseString();

        Assert.That(result, Is.EqualTo(expectedResult));
    }
}

This is a fairly contrived example, but it helps illustrate a few things:

  • The NaiveTest is hard to understand at a glance – you can eventually reason about the relationship between the input and the output from the name of the test in combination with common sense, but it’s not easy and thus I think it’s not a great test (it’s still clear AAA so it’s certainly not awful).
  • The DerivedValueTest is what Mark was describing – this is much better because the relationship between the input and result is very clear in the first two lines of the test and you immediately know a) what is being tested and b) how it should work.
    • Of note is that the test calculates the expected value using the same logic as the real implementation – this could be a problem if the developer decides to simply copy the implementation into the test or vice versa
      • Interestingly, by writing the test using proper TDD it wouldn’t matter as much that the test logic mirrors the implementation because in writing the test you would see it fail in the “Red” step and at that time verify that the string being asserted in the test output was in fact the correct reversed string
      • The fact you are then relying on the developer verifying that the result being asserted was correct at the time of writing the test reminds me somewhat of the notion of approval tests (which I find myself using a fair bit to perform complex assertions that can’t easily be expressed in code, but can be easily reasoned “by eye”)
      • It occurs to me that if you were only testing a subset of some complex functionality that you would only need to include a subset of the implementation for the test
      • If you have a team that isn’t disciplined in writing their tests in a TDD fashion (or at least verifying the test is definitely correct) then this approach might make it easier to introduce incorrect tests that are a copy of the implementation and don’t actually test anything (hopefully code review would pick this up though)
    • Where possible you could try and include an alternate implementation of the code under test in your test (with a focus on the implementation being readable and understandable), but even in this case I still think the “Red” step mentioned above is important to make sure you didn’t have an error in your alternative implementation
  • The DataDrivenTest in my opinion is the better test in this case, not just because it provides better code coverage by trying multiple values (since this could easily have been done for the derived value test as well), but also because:
    • The relationship between input and output is made clear by their proximity and the fact that there are simple examples as well as more complex ones (the simple ones help the viewer immediately grok the relationship)
      • I feel that the “proximity” part is the most important bit here (assuming that you can grok the relationship)
      • I think the proximity in the DerivedValueTest is an important factor as well to help with immediate understanding
    • I suspect the edge cases in the above example could go into their own test so that the test name can more clearly reflect the edge case being tested
    • This approach won’t work for all situations – sometimes the logic being tested is complex enough that having the input and expected result side-by-side still won’t allow the reader to glean understanding about the relationship and it’s important to show how the expected result is derived
    • Be pragmatic – use the right approach in the right situation – derived values is sometimes useful and sometimes showing a series of {input -> expected result} is clearer – I’d say the main thing to be wary of is tests that simply have a value in the final assert and it’s not clear how that value was derived

There is a slight variation to the DataDrivenTest above that I sometimes come across that is also worth mentioning – complex example generation. It’s a strategy to avoid the situation described above where showing the derived value involves duplicating the implementation logic, in situations where it’s really complex to work out that logic but easy(ier) to come up with an example of the logic in action. I often find myself using this technique for date logic – writing the date logic as part of the test never gives me a lot of confidence since it’s so darn complex to figure out (I hate programming dates/time logic). In these situations I like to pull up a calendar and pick some candidate examples for the logic I’m trying to implement.

A couple of examples are shown below, pulled from a codebase I work on (with some tweaks to generalise the second test so it’s non-identifying):

    // From http://www.timestampgenerator.com/1352031606/#result
    [TestCase("2012-11-04 12:20:06", 1352031606)]
    [TestCase("2012-11-03 23:59:59", 1351987199)]
    [TestCase("2012-02-29 13:00:01", 1330520401)]
    public void GivenDate_WhenConvertingToUnixTimestamp_ItShouldBeCorrect(string inputDate, int expectedTimestamp)
    {
        var date = DateTime.Parse(inputDate);

        var timestamp = date.ToUnixTimestamp();

        Assert.That(timestamp, Is.EqualTo(expectedTimestamp));
    }

    /* unix $ cal 8 2004
     *      August 2004
       Su Mo Tu We Th Fr Sa
        1  2  3  4  5  6  7
        8  9 10 11 12 13 14
       15 16 17 18 19 20 21
       22 23 24 25 26 27 28
       29 30 31
     */
    [Test]
    // Day before date during weekend
    [TestCase("2004-08-09", "2004-08-06", true)]
    // Day before date during week
    [TestCase("2004-08-10", "2004-08-09", true)]
    // Consecutive business days
    [TestCase("2004-08-09", "2004-08-09", true)]
    [TestCase("2004-08-09", "2004-08-10", true)]
    [TestCase("2004-08-09", "2004-08-11", false)]
    [TestCase("2004-08-09", "2004-08-12", false)]
    // Include Weekend
    [TestCase("2004-08-12", "2004-08-13", true)]
    [TestCase("2004-08-12", "2004-08-14", true)]
    [TestCase("2004-08-12", "2004-08-15", true)]
    [TestCase("2004-08-12", "2004-08-16", false)]
    // Start on Weekend
    [TestCase("2004-08-13", "2004-08-16", true)]
    [TestCase("2004-08-13", "2004-08-17", false)]
    public void WhenValidatingConnectionDate_ThenThereShouldBeAnErrorOnlyIfTheDateIsLessThan2BusinessDaysAway(string now, string date, bool expectError)
    {
        _model.ConnectionDate = DateTime.Parse(date);
        var dateTimeProvider = DateTimeProviderFactory.Create(DateTime.Parse(now));
        var modelState = new ModelStateDictionary();

        _model.Validate(modelState, dateTimeProvider);

        if (expectError)
            Assert.That(modelState[modelStateKey].Errors, Has.Count.GreaterThan(0));
        else
            Assert.That(modelState.ContainsKey(modelStateKey), Is.False);
    }

Props to my colleague Toby Moore for coming up with the idea of using the Unix cal command to generate calendars for pasting in comments above the examples.

In these examples, there is a cognitive load to figure out the relationship between input and expected result, but I don’t think there is a silver bullet in these cases – the test name, multiple examples and the comments above the tests (I think) help anyone maintaining the tests to figure out what is going on. Either way there would be a cognitive load to get your head around the logic since it’s really complex and in this case it’s about trying to minimise that load.

IDDD Course notes

Last month I completed Vaughn Vernon‘s 3-day advanced IDDD Workshop. Here are some notes I’ve since written up about the main points I got out of it after re-reviewing the course material. Some notes from a colleague who attended the same course have also been published.

Getting started with DDD

  • DDD is the formation of a ubiquitous language explicitly bound by a “bounded context” – the context is important because the same word can have different meanings
  • Hexagonal architecture / ports and adapters can be used with DDD – there are adapters from the domain to a port (either input or output – could be datasources or other systems)
  • DDD should be used for complex/unknown applications and problem domains that provide a competitive advantage – it is a technique to help uncover these complexities over time in a sustainable way
    • If CRUD is more appropriate – use CRUD
    • Keep in mind that investing in the code generally pays off over the long term
  • DDD can be used with legacy code, but you need to abstract and encapsulate the parts of the legacy code you aren’t modelling
  • DDD is about bridging the gap between technical people and subject matter experts so finding the subject matter experts (they aren’t necessarily your product owner and there might be multiple ones) is important – where possible interact with them directly for best effect
  • A ubiquitous language should include the terms as well as scenarios that describe the context of those “things”
  • Good general rule: Tell, don’t ask (tell objects what you want to do, don’t ask them for data)
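
A tiny, made-up C# example of that last rule (the Invoice class and its members are hypothetical, not from the course material):

using System;

public class Invoice
{
    public DateTime DueDate { get; private set; }
    public bool IsPaid { get; private set; }
    public bool IsOverdue { get; private set; }

    public Invoice(DateTime dueDate)
    {
        DueDate = dueDate;
    }

    // Tell, don't ask: the caller tells the invoice what to do...
    public void MarkOverdueAsAt(DateTime today)
    {
        if (!IsPaid && DueDate < today)
            IsOverdue = true;
    }
}

// ...rather than asking it for its data and pushing state back in from the outside, e.g.
// if (invoice.DueDate < today && !invoice.IsPaid) { invoice.IsOverdue = true; }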

Domains, subdomains, bounded contexts

  • Subdomains – identify the pieces of information you need for your project to be successful
    • Where possible have a one to one mapping with a bounded context unless there is a small shared kernel
  • Bounded context contains: interfaces (service or UI), app services, database schema, domain itself
  • Example: pull out user and identity into separate domain, then you can model more explicit things that might be mapped from that context e.g. Author, Collaborator (these could even just be a value object with an id that links to the other context)
  • For each domain look at which domains your core domain is linked to and identify them as either supporting or generic

Context maps / relationships between subdomains

  • It can help to draw context maps to illustrate where the subdomains, contexts and the relationships between them are
  • Ways of joining two separate (sub)domains:
    • Partnership
    • Shared kernel
    • Customer-supplier
    • Conformist
  • When communicating with another domain (via OHS, web service, messaging):
    • ACL
    • Published language
    • Event subscription

Architecture

  • Ports and adapters, {infra -> UI -> application -> domain}, CQRS/ES, event-driven are some of the architectural options
    • Ports don’t have to model the domain (e.g. a UserInRole REST service even though there is no UserInRole domain object)
    • Adapters can map to/from domain representation e.g. UserInRoleAdapter
    • Ports and adapters allow you to focus on the domain, delay infrastructure concerns and do separate testing
  • Domain services shouldn’t control security (where it’s a cross-cutting concern as opposed to something being modelled) or transactions

Entities / Value objects

  • Use entities to model things whose individuality you care about
    • Equality via id
    • Avoid just using property setters – if you have two setter calls in a row then you are probably missing a behaviour – ask yourself: if you missed one of those setter calls would you be in an inconsistent state?
    • Ensure consistency by using invariants and non-default constructors
  • Use value objects where possible to model things that describe, measure, qualify or quantify the ubiquitous language – they should model a conceptual whole (see the sketch after this list)
    • Generally immutable, set state via ctor – makes it easy to test and maintain
      • Modification methods should return a new instance
    • Equality by comparing properties
    • Discardable, replaceable and interchangeable
    • If you want you can use value objects to represent the id of a particular entity (e.g. wrap guid)
  • Generating identities
    • User provides
    • Application generates
    • Persistence store generates
    • Another bounded context provides
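
To make the value object points concrete, here is a minimal sketch of an immutable value object with equality by comparing properties and modification methods that return a new instance (the Money example is mine, not from the course; the same shape works for wrapping a Guid as an entity id):

using System;

public sealed class Money : IEquatable<Money>
{
    public decimal Amount { get; private set; }
    public string Currency { get; private set; }

    public Money(decimal amount, string currency)
    {
        // Enforce invariants via the constructor; no public setters
        if (string.IsNullOrEmpty(currency))
            throw new ArgumentException("Currency is required", "currency");
        Amount = amount;
        Currency = currency;
    }

    // Modification methods return a new instance rather than mutating this one
    public Money Add(Money other)
    {
        if (other.Currency != Currency)
            throw new InvalidOperationException("Cannot add different currencies");
        return new Money(Amount + other.Amount, Currency);
    }

    // Equality is by comparing properties, not by identity
    public bool Equals(Money other)
    {
        return other != null && Amount == other.Amount && Currency == other.Currency;
    }

    public override bool Equals(object obj) { return Equals(obj as Money); }
    public override int GetHashCode() { return Amount.GetHashCode() ^ Currency.GetHashCode(); }
}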

Aggregates

  • Main purpose: determine transactional boundaries for consistency
  • Balance between large aggregates, which give navigational convenience but can result in a higher likelihood of transactional errors (that don’t actually affect consistency) as well as negative size and performance impacts, vs small aggregates, which might mean a lot of messing about in application code to wrangle multiple aggregates
  • Keep in mind eventual consistency – do multiple aggregates have to be kept consistent?
    • event processing / batch processing etc. – get business to specify the max time to be consistent
  • Where possible reference between aggregates by id – less memory, faster to load, better garbage collection
  • Can use double dispatch from an application service to pass instances between aggregates to perform actions
  • Domain events are non-async
    • This means you can respond to a domain event within the same transaction
    • You can have a listener that then publishes the events to a bus (and from there they can be handled async)
    • Whose job is it to make the data consistent?
      • If it’s the end user then use transactional consistency; if it’s another user or the system then eventual consistency should be fine

Domain Services

  • A domain service is a part of the domain model that is a non-transactional, lightweight stateless operation that doesn’t have a natural home in an entity
    • If you find yourself using static methods in your domain entities then it’s a good indicator that you need a domain service
    • They can be passed into models and used via double dispatch or the model can be passed into them
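
A minimal sketch of what that might look like (the names are hypothetical, not from the course): a stateless domain service that gets passed into an entity method so the behaviour is expressed via double dispatch.

// Stateless, non-transactional domain service - logic with no natural home on a single entity
public interface IExchangeRateService
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class Account
{
    public decimal Balance { get; private set; }
    public string Currency { get; private set; }

    public Account(decimal balance, string currency)
    {
        Balance = balance;
        Currency = currency;
    }

    // Double dispatch: the application service passes the domain service into the entity,
    // which uses it to express the behaviour in domain language
    public decimal BalanceIn(string currency, IExchangeRateService exchangeRates)
    {
        return Balance * exchangeRates.GetRate(Currency, currency);
    }
}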

Domain Events

  • Domain events inform subscribers of the facts about past happenings in a bounded context (a rough sketch follows this list)
  • When domain experts use triggering words then it’s a good sign you might need a domain event:
    • When
    • If that happens
    • Inform me if
    • Notify me if
    • An occurrence of
  • You can persist events along with state (or just the events if using event sourcing)
  • Events can be published outside of the bounded context or processed asynchronously by forwarding them to a message exchange
    • Bus
    • Decoupled publishing e.g. atom feed
  • Event-driven modelling exercise – Trello is a good example
    • Good example to do with a domain expert as a first step to fleshing out a domain
    • Leads to a good model if you plan on using event sourcing
    • 1. Model the events in time order (verb, past tense)
    • 2. Model the commands that would create the events (imperative exhortation)
    • 3. Model the aggregates that would participate in the command/event (noun)
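
Here is the rough sketch referred to above (the registration/publishing mechanics are illustrative only, not from the course): an immutable event named in the past tense, raised by the domain and handled synchronously by subscribers in the same bounded context, one of which could forward it to a bus for asynchronous processing.

using System;
using System.Collections.Generic;

// A fact about something that happened, named in the past tense and immutable
public class OrderPlaced
{
    public Guid OrderId { get; private set; }
    public DateTime OccurredAt { get; private set; }

    public OrderPlaced(Guid orderId, DateTime occurredAt)
    {
        OrderId = orderId;
        OccurredAt = occurredAt;
    }
}

// Very simple synchronous publisher: handlers run within the same transaction;
// one handler could forward the event to a message bus so it can be processed async elsewhere
public static class DomainEvents
{
    private static readonly List<Action<OrderPlaced>> Handlers = new List<Action<OrderPlaced>>();

    public static void Register(Action<OrderPlaced> handler)
    {
        Handlers.Add(handler);
    }

    public static void Raise(OrderPlaced @event)
    {
        foreach (var handler in Handlers)
            handler(@event);
    }
}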

Modules (namespaces)

  • Name the modules as per the ubiquitous language (e.g. aggregates or concepts to group an aggregate)
  • Can have child namespaces e.g. concept -> aggregate
  • Don’t create modules based on type of component (e.g. include services with the entities)
  • Try not to couple between modules, and where there are dependencies try to keep them acyclic

Factories

  • Where possible construct objects inside of domain services and entities so the ubiquitous language is expressed
  • If you have all the parameters and the construction is simple it’s ok to new up an object in an app service

Acceptance Tests Structure [Automated Testing Series]

This is part of my ongoing Automated Testing blog series.

Acceptance Tests Structure

When writing high-level acceptance tests (as opposed to unit tests) I will always try to use separate methods for the Given, When and Then since usually the Given and possibly the When are more complex, so there might need to be multiple methods for each. My favourite framework for writing tests with this structure by far is Bddfy (disclaimer: I am a core contributor of the TestStack organisation). When I say high-level acceptance tests I usually mean automated UI / full system tests, but it’s possible they could also be complex subcutaneous tests (where subcutaneous tests fit in is a part of my thinking I haven’t quite settled on yet).

I’ve said before and I still maintain that consistency is the most important aspect when it comes to keeping a software application maintainable, so I think that within a particular set of tests, if you are writing some single-method Arrange, Act, Assert tests then you shouldn’t mix them with something like Bddfy since it’s wildly different. I feel that the techniques I described for structuring tests using test-per-class in the last post in the series are OK to mix with AAA tests though, as I discussed in that post.

The above two paragraphs have led me to the following thoughts:

  • I keep my high-level acceptance tests in a separate project from my unit/integration/etc. tests since they are:
    • Inconsistently specified as discussed above
    • As seen below the way I structure the tests into namespaces / folders and the way I name the test class is very different too
    • They have a different purpose and intent – one is to check your implementation is sound / help design the technical implementation and the other is to specify/check the business requirements of the system i.e. the concept of an executable specification
  • If possible use something like Bddfy and a Specification base class (see below) that allows you to specify the implementation of your scenario
    • Yes I know about SpecFlow, but I don’t think that the maintenance overhead is worth it unless you actually have your product owner writing the specifications, but by all accounts I’ve heard (and based on my experiences) it’s tricky for them to get it right and the developers end up writing the scenarios anyway – do yourself a favour and use a framework that is built for developers and get the devs to sit with the product owner to write the test - that’s what they are good at!
    • One of the many cool things about Bddfy is its reporting functionality – out of the box it generates an HTML report, but it’s also flexible enough to allow you to define your own output; I think this fits in nicely with the idea of an executable specification
  • I’ve used the base class shown below to make it really easy to define Bddfy tests (then you simply need to specify methods according to the Bddfy conventions and it will automatically pick them up)
    • If you want to reuse Given’s, When’s or Then’s then simply use inheritance (e.g. you might define an AuthenticationUserSpecification that extends Specification and provides a GivenUserIsAuthenticated method)
    • If you need to use a data-driven test then see below for an example of how to do it

Basic Specification base class

You can add any setup / teardown that is required to this for your purposes or wrap the run method code if needed (e.g. for automated UI tests I catch any exception from the this.BDDfy() call and take a screenshot).

public abstract class Specification
{
    [Test]
    public virtual void Run()
    {
        this.BDDfy();
    }
}

For a more advanced example of this base class including a generic version that identifies the SUT and provides auto-mocking (in this case the tests are unit tests rather than acceptance tests) check out the TestStack.Seleno tests.

Example of extracting common parts of scenarios

Here is an example of how you might pull out common logic into a customised specification base class. In this case it is demonstrating what it might look like to have a base class for when the scenario starts with the user not being logged in when using the TestStack.Seleno library.

public abstract class UnauthenticatedSpecification : Specification
{
    protected MyAppPage InitialPage;

    public void GivenNotLoggedIn()
    {
        InitialPage = Host.Instance
            .NavigateToInitialPage<MyAppPage>()
            .Logout();
    }
}

Example test class

Here is an example of what an actual test class might look like. In this case it is a test that ensures a user can register for an account on the site and extends the example UnauthenticatedSpecification above. There is some code missing here about how the page objects are implemented, but it’s fairly readable so you should be able to follow what’s happening:

public class ValidRegistration : UnauthenticatedSpecification
{
    private RegistrationViewModel _model;
    private MyAppPage _page;
    private Member _member;

    public void AndGivenValidSubmissionData()
    {
        _model = GetValidViewModel();
    }

    public void AndGivenUserHasntAlreadySignedUp()
    {
        // In this instance the Specification base class has helpfully exposed an NHibernate Session object,
        //  which is connected to the same database as the application being tested
        Session.Delete(Session.Query<Member>().SingleOrDefault(m => m.Email == _model.Email));
    }

    public void WhenVisitingTheJoinPageFillingInTheFormAndSubmitting()
    {
        _page = InitialPage
            .Register()
            .Complete(_model);
    }

    public void ThenTheMemberWasCreated()
    {
        _member = Session.Query<Member>()
            .Single(m => m.Email == _model.Email);
    }

    public void AndTheMembersNameIsCorrect()
    {
        _member.FirstName.ShouldBe(_model.FirstName);
        _member.LastName.ShouldBe(_model.LastName);
    }

    public void AndTheMemberCanLogIn()
    {
        _page.Login()
            .Successfully(_model.Email, _model.Password);
    }

    private RegistrationViewModel GetValidViewModel()
    {
        // This is using the NBuilder library, which automatically populates public properties with values
        // I'm overriding the email and password to make them valid for the form submission
        return Builder<RegistrationViewModel>.CreateNew()
            .With(x => x.Email = "email@domain.tld")
            .With(x => x.Password = "P@ssword!")
            .Build();
    }
}

Demonstration of data-driven test

I mentioned above that it was possible to have a data-driven test with the Specification base class. I can do this by using a hack that Jake Ginnivan showed me when we were creating acceptance tests for GitHubFlowVersion for XUnit and a technique I have found works for NUnit. I’ve put a full code example in a gist, but see below for an abridged sample.

I should note I’ve only tried these techniques with the ReSharper test runner so it’s possible they function differently with other runners. Another option would be to not inherit the base class for the situations where you need data-driven tests, but I typically find that I would then lose a bunch of useful setup/functionality when doing that; YMMV.

NUnit

You might have noticed I set the Run method in the Specification base class to virtual. If you do this then you can do the following in your test class and it will ignore the test method in the base class and use the data-driven one in this class – it does show the base test up as being ignored though – you could also assign a category to the test and not run that category:

    [Test]
    [TestCase(1)]
    [TestCase(2)]
    [TestCase(5)]
    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
        base.Run();
    }

    [Ignore]
    public override void Run() { }

XUnit

You don’t have to have the base Run method virtual for this one (but it doesn’t hurt if it is either). Also, the test from the base class comes up as Inconclusive in the ReSharper test runner (just ignore it):

    [Theory]
    [InlineData(1)]
    [InlineData(2)]
    [InlineData(5)]
    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
        base.Run();
    }

    public new void Run() {}

Namespaces / folder structure

When structuring the acceptance test project I like to have a folder/namespace called Specifications and underneath that a folder for each feature being tested. Then there can be a file within each feature for each scenario being tested (I generally name the scenario with a short name to easily identify it).

So, for instance the valid registration test above might be found at Specifications.Registration.ValidRegistration.

General Test Structure [Automated Testing Series]

This is part of my ongoing Automated Testing blog series.

General Test Structure

Recently I found myself experimenting with demon-coder, fellow Readifarian, and all-round-nice-guy Ben Scott (o/ – that’s for Ben; he will understand) on extracting shared Given, When and/or Then sections of tests by using a pattern whereby the Given and possibly the When of a test happen in the test setup (or constructor if using XUnit). This allows each assertion (or Then) that you want to make to be in a separate test method and avoids both the problem of multiple somewhat/completely unrelated asserts in a single test and massive duplication of test setup.

I realise there are other ways of dealing with the duplication such as abstracting common logic into private methods (this is the technique I have used in the past), but I’ve found the above solution to be much nicer / cleaner and clearer in intent. It’s also worth noting that there are frameworks that directly support testing with the Given, When, Then structure and I cover that further in a later post in this series (as well as why I don’t just use them all the time).

I’ve created a set of code samples to illustrate the different techniques at https://gist.github.com/robdmoore/8634204:

  • 01_Implementation.cs contains an example mapper class to be tested – it is mapping from some sort of measurement of a value that has been taken (say, from an API or a hardware device) with a confidence and some kind of identifier to an object that represents the identity (broken down into component parts), the measurement value and an adjusted value (based on the confidence). It’s a fairly contrived example, but hopefully it’s understandable enough that it illustrates my point.
  • 02_MultipleAsserts.cs shows a single Arrange Act Assert style test that contains multiple asserts for each of the mapped properties and has description strings for each assert so that you can tell which assert was the one that failed.
  • 03_RelatedMultipleAsserts.cs partially addresses the main problem in the first test (multiple somewhat unrelated asserts) by grouping the related asserts together – this introduces repetition in the tests though.
  • 04_AbstractedCommonCode reduces the duplication by abstracting out the common logic into private methods and provides a separate test for each assert – because of this the assertions don’t need a description (since there is one assert per test), but the test class is a lot more verbose and there is duplication both across test names and test content.
  • 05_GivenAndWhenInSetup.cs demonstrates the technique I describe above; in this case the constructor of the class (it also could have been a [SetUp] method) contains both the Arrange/Given and Act/When part of the test and each test method is both very slim (containing just the assertion) and is named with just the Then (a sketch of this approach follows this list).
    • In NUnit you can use [SetUp] and in XUnit the constructor, but it’s worth noting that this means the code will run for every test (which is usually fine); however, if the code you are testing takes a reasonable amount of time to run then you can instead run it once per class – use a constructor or [TestFixtureSetUp] in NUnit, or a static constructor or IUseFixture<TFixture> in XUnit (keeping in mind that fixtures get injected after the constructor is called, which matters if you also need the constructor, say to perform the When or to allow slightly different behaviour across inherited classes when using inheritance).
    • It’s worth noting this class is still a lot bigger than the single AAA test, but for a situation like this I feel it’s a lot more understandable and better encapsulates the behaviour being tested; this technique also comes into its own when there is duplication of Given and/or When logic across multiple AAA tests (you can use inheritance to reuse or extend the Given/When code)
  • 06_ConcernForBaseClass.cs shows a variation of the above strategy that provides a common base class that can be used in tests to help make the Given and When code more explicit as well as identifying what the subject under test is.
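
Here is the sketch of the 05_GivenAndWhenInSetup.cs approach referred to above – it’s illustrative rather than the actual gist contents, and it reuses the ReverseString extension method from the Derived Values post as the subject under test:

using System.Linq;
using NUnit.Framework;

// Class name carries the Given/When; each test method is a single Then
public class GivenAStringWithSpaces_WhenReversing
{
    private string _input;
    private string _result;

    [SetUp]
    public void Setup()
    {
        // Given
        _input = "a string";

        // When
        _result = _input.ReverseString();
    }

    [Test]
    public void ThenTheResultIsTheSameLength()
    {
        Assert.That(_result.Length, Is.EqualTo(_input.Length));
    }

    [Test]
    public void ThenTheResultIsTheInputReversed()
    {
        Assert.That(_result, Is.EqualTo(new string(_input.Reverse().ToArray())));
    }
}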

Another technique to reduce the setup of tests is to use AutoFixture’s auto data attributes (XUnit and NUnit are supported) to inject any variables that you need into the test. I’m not going to cover that in this post at this stage because a) I haven’t used it in anger and b) I suspect it’s too complex for teams that aren’t all experienced (i.e. it’s a bit too “magic”). It is very cool though so I highly encourage you to at least evaluate it as an option.

There are a number of things to note about what I’ve been doing with this technique so far:

  • We still managed to keep the convention of having a test described by Given (where appropriate; sometimes you don’t need the Given), When and Then by ensuring that the combination of the class name and the test name cater for them
    • e.g. a class called GivenX_WhenY or just WhenY with a bunch of test methods called ThenZ or class called GivenX with a bunch of test methods called WhenY_ThenZ
  • You can reuse and extend common parts of your test logic by using inheritance e.g. define a base class with common Given and/or When and extend it for different variations of the Given and either reuse Then’s from the base class and/or define new ones for each inheriting class
    • I’ve seen some people use this technique with 5 levels deep of nested classes with one line of code being reused in some of the hierarchy; while I’m a massive fan of DRY I think that there is a point where you can go overboard and in the process lose understandability – 5 level deep nested test classes is in my opinion completely overboard
  • An interesting side-effect of this approach is that rather than having one test class per thing being tested you end up with multiple classes
    • I have always followed a convention of putting a test file in the same namespace as the class being tested (but in the Tests project and post-fixed with Tests) e.g. MyApp.Commands.CreateInvoiceCommand would be tested by MyApp.Tests.Commands.CreateInvoiceCommandTests, but that doesn’t work when you have multiple test files, so what I’ve taken to doing is making a folder at MyApp.Tests\Commands\CreateInvoiceCommandTests, which contains all the test classes
    • This allows me to mix and match folders (where needed) and single test classes (sometimes a single class is enough so I don’t bother creating a folder) while keeping it clear and consistent how and where to find the tests
  • Not all tests need to be written in this way – sometimes a simple, single AAA test (or a bunch of them) is enough
    • As you can see from the above code samples the AAA tests tend to be terser and quicker to write so when there isn’t much duplicated logic across tests and you only need one assertion or the assertions are one logical assertion (i.e. belong in a single test) then there is no reason to stray from single AAA tests in my opinion
    • I don’t feel that a combination of AAA tests and the common setup tests cause a consistency issue because it’s fairly easy to trace what’s happening and the common setup logic falls within the normal bounds of what you might have for AAA tests anyway
    • I find it’s easier to create data-driven tests using single method tests because you have so much more flexibility to do that using NUnit and XUnit – if you find a good way to do it with test-per-class rather than test-per-method let me know!
      • NUnit’s TestFixture attribute goes some of the way, but it’s not as powerful as something like TestCase or TestCaseSource
  • There is a base class (ConcernFor) that we were using for some of the tests that I included in the last code sample above

Test Naming [Automated Testing Series]

For the last 6 months I’ve been thinking and reading a lot about how best to write automated tests for applications – including data generation, structure, naming, etc. This blog series is a foray into my current thinking (which will probably change over time; for instance I’m looking back at tests I wrote 6 months ago that I thought were awesome and now thinking that I hate them :P). I’m inviting you to join me on this journey by publishing this series and I encourage thoughtful comments to provoke further thinking and discussion.

Test Naming

I have always been a fan of the Arrange Act Assert structure within a test and started by naming my test classes like <ThingBeingTested>Should with each test method named like A_statement_indicating_what_the_thing_being_tested_should_do. You can see examples of this type of approach in some of my older repositories such as TestStack.FluentMVCTesting.

More recently I’ve started naming my tests with a Given, When, Then description (be it an acceptance test or a unit test). I like the Given, When, Then method naming because:

  • Generally the Given will map directly to the Arrange, The When will map directly to the Act and the Then will map directly to the Assert – this provides a way of quickly cross checking that a test does what the name suggests it should by comparing the name to the implementation
    • I find this really useful when reviewing pull requests to quickly understand that the author of a test didn’t make a mistake and structured their test correctly
    • e.g. it’s probably wrong if there is a Given in the name, but no clear Arrange section
  • It requires that you put more thought into the test name, which forces you to start by thinking about what scenario you are trying to test rather than “phoning in” an arbitrary test name and diving into the implementation and focussing on how you are testing it
    • My gut feel along with anecdotal evidence from the last few projects I’ve worked on – with developers of varying skill levels – suggests this leads to better written, more understandable tests
    • Side note: I really like the idea discussed by Grzegorz Gałęzowski in his TDD e-book about the order in which you should write a TDD test (from the section “Start From The End”) – he suggests you start with the Then section and then work backwards from there
      • I can’t honestly say I usually do this though, but rather just that it sounds like a good idea; generally I know exactly what I want to write when writing a test (yes, it’s possible I’m just naive and/or arrogant ;))

To clarify, I still use Arrange, Act, Assert in my tests. I could write a whole blog post on how I structure the AAA sections in my tests, but luckily I don’t have to because Mark Seemann has already written a really good post that mimics my thoughts exactly.

Windows Azure Whitepapers

My latest whitepaper for my employer, Readify, has been published. That makes two whitepapers I’ve written now.

  • Cloud and Windows Azure Introduction
    Cloud Computing is an often talked about and often misunderstood phenomenon. While many people disagree about what it is, most people agree that it is a revolutionary shift in the way computing resources are consumed and delivered and will have a big impact on the future. This paper consists of two parts; firstly, it discusses what cloud computing is, what benefits cloud computing can deliver to your business and what things to look out for when making a decision about cloud computing. The second part of the paper gives an overview of the Microsoft Windows Azure cloud computing platform and outlines the benefits that it can deliver to your business.
  • Azure Web Apps: Developing and Deploying Web Applications in Windows Azure
    When deploying web applications in Windows Azure there are a number of options available to you. It’s not always immediately clear from the documentation which of the options are the right ones to use in any given situation. This whitepaper will explain the three main options that are available in Windows Azure to deploy web applications and will cover the similarities and differences between them in order to make the decision of which to use easier. It will also cover the different deployment strategies available to you after you have made the decision.
