Acceptance Tests Structure [Automated Testing Series]

This is part of my ongoing Automated Testing blog series:

  • Acceptance Tests Structure

When writing high-level acceptance tests (as opposed to unit tests) I always try to use separate methods for the Given, When and Then, since the Given, and possibly the When, are usually complex enough to need multiple methods each. My favourite framework for writing tests with this structure is by far BDDfy (disclaimer: I am a core contributor of the TestStack organisation). By high-level acceptance tests I usually mean automated UI / full-system tests, though they could also be complex subcutaneous tests (where subcutaneous tests fit in is part of my thinking I'm not quite sure on yet).

I've said before, and still maintain, that consistency is the most important aspect of keeping a software application maintainable. So, within a particular set of tests, if you are writing single-method Arrange, Act, Assert tests then you shouldn't mix them with something like BDDfy, since it's wildly different. The techniques I described for structuring tests using test-per-class in the last post in the series are fine to mix with AAA tests though, as I discussed in that post.

The above two paragraphs have led me to the following thoughts:

  • I keep my high-level acceptance tests in a separate project from my unit/integration/etc. tests since:
    • They are specified inconsistently with those tests, as discussed above
    • As seen below, the way I structure the tests into namespaces / folders and the way I name the test classes are very different too
    • They have a different purpose and intent – one checks that your implementation is sound / helps design the technical implementation, and the other specifies/checks the business requirements of the system, i.e. the concept of an executable specification
  • If possible, use something like BDDfy and a Specification base class (see below) that allows you to specify the implementation of your scenario
    • Yes, I know about SpecFlow, but I don't think the maintenance overhead is worth it unless you actually have your product owner writing the specifications; by all accounts I've heard (and based on my experiences) it's tricky for them to get right and the developers end up writing the scenarios anyway – do yourself a favour: use a framework that is built for developers and get the devs to sit with the product owner to write the test – that's what they are good at!
    • One of the many cool things about BDDfy is its reporting functionality – out of the box it generates an HTML report, but it's also flexible enough to allow you to define your own output; I think this fits in nicely with the idea of an executable specification
  • I've used the base class shown below to make it really easy to define BDDfy tests (you then simply need to specify methods according to the BDDfy conventions and it will automatically pick them up)
    • If you want to reuse Givens, Whens or Thens then simply use inheritance (e.g. you might define an AuthenticationUserSpecification that extends Specification and provides a GivenUserIsAuthenticated method)
    • If you need a data-driven test then see below for an example of how to do it

Basic Specification base class

You can add any setup / teardown that is required for your purposes to this class, or wrap the Run method code if needed (e.g. for automated UI tests I catch any exception from the this.BDDfy() call and take a screenshot – a sketch of this follows the base class below).

using NUnit.Framework;
using TestStack.BDDfy;

public abstract class Specification
{
    [Test]
    public virtual void Run()
    {
        // BDDfy picks up the Given / When / Then methods on this class by convention
        this.BDDfy();
    }
}
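
For example, a UI test variation might look something like the following sketch (TakeScreenshot is a hypothetical helper you would implement against your UI automation library):

using NUnit.Framework;
using TestStack.BDDfy;

public abstract class UiSpecification
{
    [Test]
    public virtual void Run()
    {
        try
        {
            this.BDDfy();
        }
        catch
        {
            // Hypothetical helper: capture the browser state before rethrowing
            TakeScreenshot();
            throw;
        }
    }

    protected abstract void TakeScreenshot();
}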

For a more advanced example of this Specification base class, including a generic version that identifies the SUT and provides auto-mocking (in this case the tests are unit tests rather than acceptance tests), check out the TestStack.Seleno tests.

Example of extracting common parts of scenarios

Here is an example of how you might pull out common logic into a customised specification base class. In this case it demonstrates what a base class might look like for scenarios that start with the user not logged in, using the TestStack.Seleno library.

public abstract class UnauthenticatedSpecification : Specification
{
    protected MyAppPage InitialPage;

    public void GivenNotLoggedIn()
    {
        // Host.Instance comes from TestStack.Seleno; MyAppPage is an application-specific page object
        InitialPage = Host.Instance
            .NavigateToInitialPage<MyAppPage>()
            .Logout();
    }
}

Example test class

Here is an example of what an actual test class might look like. In this case it tests that a user can register for an account on the site, and it extends the example UnauthenticatedSpecification above. Some code is missing here about how the page objects are implemented, but it's fairly readable so you should be able to follow what's happening:

public class ValidRegistration : UnauthenticatedSpecification
{
    private RegistrationViewModel _model;
    private MyAppPage _page;
    private Member _member;

    public void AndGivenValidSubmissionData()
    {
        _model = GetValidViewModel();
    }

    public void AndGivenUserHasntAlreadySignedUp()
    {
        // In this instance the Specification base class has helpfully exposed an NHibernate Session object,
        //  which is connected to the same database as the application being tested
        var existingMember = Session.Query<Member>().SingleOrDefault(m => m.Email == _model.Email);
        if (existingMember != null)
            Session.Delete(existingMember);
    }

    public void WhenVisitingTheJoinPageFillingInTheFormAndSubmitting()
    {
        _page = InitialPage
            .Register()
            .Complete(_model);
    }

    public void ThenTheMemberWasCreated()
    {
        _member = Session.Query<Member>()
            .Single(m => m.Email == _model.Email);
    }

    public void AndTheMembersNameIsCorrect()
    {
        _member.FirstName.ShouldBe(_model.FirstName);
        _member.LastName.ShouldBe(_model.LastName);
    }

    public void AndTheMemberCanLogIn()
    {
        _page.Login()
            .Successfully(_model.Email, _model.Password);
    }

    private RegistrationViewModel GetValidViewModel()
    {
        // This is using the NBuilder library, which automatically populates public properties with values
        // I'm overriding the email and password to make them valid for the form submission
        return Builder<RegistrationViewModel>.CreateNew()
            .With(x => x.Email = "email@domain.tld")
            .With(x => x.Password = "P@ssword!")
            .Build();
    }
}

Demonstration of data-driven test

I mentioned above that it was possible to have a data-driven test with the Specification base class. You can do this using a hack that Jake Ginnivan showed me for xUnit when we were creating acceptance tests for GitHubFlowVersion, and a technique I have found works for NUnit. I've put a full code example in a gist, but see below for an abridged sample.

I should note I've only tried these techniques with the ReSharper test runner, so it's possible they behave differently with other runners. Another option is to not inherit the base class for the situations where you need data-driven tests, but I typically find I would then lose a bunch of useful setup / functionality; YMMV.

NUnit

You might have noticed I made the Run method in the Specification base class virtual. If you do this then you can do the following in your test class and it will ignore the test method in the base class and use the data-driven one in this class instead. It does show the base test as being ignored though – alternatively you could assign a category to the base test and not run that category:

    [Test]
    [TestCase(1)]
    [TestCase(2)]
    [TestCase(5)]
    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
        base.Run();
    }

    // The override must remain public to match the visibility of the base method
    [Ignore("Superseded by the data-driven RunSpecification above")]
    public override void Run() { }

xUnit

The base Run method doesn't have to be virtual for this one (but it doesn't hurt if it is). Also, the test from the base class comes up as Inconclusive in the ReSharper test runner (just ignore it):

    [Theory]
    [InlineData(1)]
    [InlineData(2)]
    [InlineData(5)]
    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
        base.Run();
    }

    public new void Run() {}
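
For this to work, the xUnit flavour of the base class would presumably use xUnit's [Fact] in place of NUnit's [Test]; a minimal sketch:

using TestStack.BDDfy;
using Xunit;

public abstract class Specification
{
    [Fact]
    public void Run()
    {
        this.BDDfy();
    }
}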

Namespaces / folder structure

When structuring the acceptance test project I like to have a folder/namespace called Specifications and, underneath that, a folder for each feature being tested. Then there can be a file within each feature folder for each scenario being tested (I generally give the scenario a short name so it's easy to identify).

So, for instance the valid registration test above might be found at Specifications.Registration.ValidRegistration.
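
As a folder structure that might look like the following (the other feature and scenario names here are hypothetical):

Specifications/
    Registration/
        ValidRegistration.cs
        InvalidSubmission.cs
    Login/
        ValidLogin.cs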

Test Data Generation the right way: Object Mother + Test Data Builders + NSubstitute + NBuilder

Generating test data for automated testing is an area that I have noticed can get very, very tedious as your application gets more and more complex. More importantly, it's an area that I have noticed results in hard-to-maintain test logic. If you are a proponent of treating test code with the same importance as production code, like I am, then you will want to refactor your test logic to make it simpler, more maintainable and DRY.

In my last two projects I have been working with my team to come up with a good solution to this problem and I'm pretty stoked with what we've got so far. While I certainly had my hand in the end result, I can't claim all of the credit – far from it! Big props to my teammates Matt Kocaj and Poya Manouchehri for their significant contributions.

Newing up objects everywhere

If you organically grow your automated test suite without thinking about how you generate your test data, it's likely you will end up with a lot of tests whose arrange sections are full of object instantiation, and these sections will get more and more complex as your object model does (what if you need a customer that has 3 orders, each with 3 products?).
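
For example, you might see arrange sections like this sketch (the domain types here are hypothetical):

// This kind of setup tends to be duplicated, with small variations, across many tests
var customer = new Customer("Rob", "Moore", 2013);
var order = new Order(customer, new DateTime(2013, 1, 1));
order.AddItem(new Product("Widget", 10m), quantity: 3);
order.AddItem(new Product("Gadget", 20m), quantity: 1);
customer.AddOrder(order);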

If you are diligent with refactoring your tests then you will likely start pulling out common instantiations to private methods (or similar), but even then you are likely to notice the following patterns start to emerge:

  • If you need to change the constructor of one of your classes there will be many, many instantiations using that constructor across your test project and so that refactor becomes a pain (and you are likely to do hacks like add default values that you wouldn’t otherwise when adding new parameters to make it easier).
  • There is likely to be a lot of repetition across different test classes in the objects that are created.
  • Building lists is really verbose and will likely feature a lot of repetition / hard-to-read code.
    • You can bring in libraries like NBuilder to make this easier and more readable.
    • You can't use the best parts of NBuilder if you avoid public property setters, which you may well do to enforce domain invariants.
  • Your approach to generating data will be largely inconsistent across test classes – some will have private methods, some will do it inline, some might delegate to factories or helper classes etc. – this makes the code hard to understand and maintain.
  • It's not immediately clear which of the parameters you are passing into the constructor are there to meet the requirements of the call and which are there because the test dictates them – this means the intent of your test is somewhat obscured.

Object Mother

A good approach to start to address the problem is to implement the Object Mother pattern. This allows you to quickly and tersely create pre-canned objects and give them a descriptive name. Now instead of having complex chains of object instantiations you have one line of code (e.g. ObjectMother.CustomerWith3FilledOrders), which is a lot more descriptive and your tests become simpler as a result. This also helps ensure that your approach for data generation is consistent, and thus more maintainable.
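
A classic Object Mother might look something like this sketch (reusing the hypothetical domain types from above; the construction details are assumptions):

public static class ObjectMother
{
    public static Customer CustomerWith3FilledOrders
    {
        get
        {
            // The descriptive name is the point; the details are pre-canned
            var customer = new Customer("Rob", "Moore", 2013);
            for (var i = 1; i <= 3; i++)
            {
                var order = new Order(customer, new DateTime(2013, i, 1));
                order.AddItem(new Product("Product " + i, 10m), quantity: 3);
                customer.AddOrder(order);
            }
            return customer;
        }
    }
}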

If you just use Object Mother though, you will likely observe the following problems:

  • While your constructor calls are likely to be in one file now, there are still potentially a lot of them, so constructor changes remain tricky.
  • As soon as you have a small tweak needed for the data returned there is a tendency to simply create a new Object Mother property.
  • Because of this, the Object Mother class can quickly get unwieldy and a nightmare to maintain and understand – it becomes a god object of sorts.

Test Data Builder

An approach that seems to have stemmed from the Object Mother pattern is applying the builder pattern to the problem, resulting in the Test Data Builder pattern. This involves creating a builder class responsible for creating a particular type of object via a fluent interface; the builder keeps track of the desired state of the object being built before being asked to actually build it.
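
In practice that looks something like this (using the CustomerBuilder class shown at the end of this post):

var customer = new CustomerBuilder()
    .WithFirstName("Rob")
    .WhoJoinedIn(2001)
    .Build();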

The pattern provides a number of advantages:

  • Your constructor calls will appear only once in your test project, making them easy to change.
  • You have a readable, discoverable, fluent interface to generate your objects making your tests easy to read/write/maintain.
  • You are consistent in how you generate your test data.
  • It works with objects that you don’t have public property setters for since there is an explicit build action, which can invoke the constructor with the parameters you have set.
  • It's flexible enough to let you express and perform actions on your object after it's constructed – e.g. specify that an order is paid (which might be a Pay() method rather than a constructor parameter; you wouldn't expect to be able to create an order in the paid state!).
  • Because of this your builder objects provide a blueprint / documentation for how to interact with your domain objects.
  • The test data builder can contain sensible defaults for your object instantiation, so your test only needs to specify the values that need to be arranged, making the intent of your test very clear.
  • We have added a method to our builders (.AsProxy()) that allows the object returned to be an NSubstitute substitute (a proxy / mock object if you aren’t familiar with that library) rather than a real object (with the public properties set to automatically return the values specified in the builder). See below for a code snippet.
    • This means that when we need substitutes to check whether certain methods are called or to ensure that certain method calls return predetermined values we can generate those substitutes consistently with how we generate the real objects.
    • This is very powerful and improves understandability and maintainability of the tests.
    • In general we try to avoid using substitutes because they make it possible to get objects that violate domain invariants; however, on occasion it's very useful to be able to properly unit test a domain action when it interacts with other domain objects.

Here is a (somewhat contrived) example of the .AsProxy() method described above.

[Test]
public void Given10YearMembership_WhenCalculatingDiscount_ThenApply15PercentLongMembershipDiscount()
{
    // Assumptions: the builder is instantiated before calling AsProxy() and
    // GetYearsOfMembership takes the date to calculate the membership length against
    var member = new MemberBuilder().AsProxy().Build();
    member.GetYearsOfMembership(Arg.Any<DateTime>()).Returns(10);

    var discount = _discountCalculator.CalculateDiscountFor(member, new DateTimeProvider());

    Assert.That(discount, Is.EqualTo(0.15));
}

Combining NBuilder and the Test Data Builders

Another cool thing we have done is to combine our test data builders with NBuilder to allow for terse, expressive generation of lists of objects while still supporting objects that don’t have public property setters. We do this by generating a list of builders with NBuilder and then iterating that list to build each builder and get the actual object. Here is an example:

var members = Builder<MemberBuilder>.CreateListOfSize(4)
    .TheFirst(1).With(b => b.WithFirstName("Rob"))
    .TheNext(2).With(b => b.WithFirstName("Poya"))
    .TheNext(1).With(b => b.WithFirstName("Matt"))
    .Build()
    .Select(b => b.Build());

We have made the above code even terser by adding some extension methods to allow for this:

var members = MemberBuilder.CreateList(4)
    .TheFirst(1).With(b => b.WithFirstName("Rob"))
    .TheNext(2).With(b => b.WithFirstName("Poya"))
    .TheNext(1).With(b => b.WithFirstName("Matt"))
    .BuildList();
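
The extension methods themselves aren't shown in this post, but the BuildList part might look something like this sketch (assuming the builders share a hypothetical IBuilder<T> interface; the real helpers live in NTestDataBuilder):

using System.Collections.Generic;
using System.Linq;

public interface IBuilder<out TObject>
{
    TObject Build();
}

public static class BuilderListExtensions
{
    // Builds each builder in an NBuilder-produced list and returns the built objects
    public static IList<TObject> BuildList<TObject>(this IEnumerable<IBuilder<TObject>> builders)
    {
        return builders.Select(b => b.Build()).ToList();
    }
}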

Adding back Object Mother

One of the really nice things about the Object Mother pattern is that you reduce a lot of repetition in your tests by re-using pre-canned objects. It also means the tests are a lot terser, since you just specify the name of the pre-canned object you want and, assuming that name describes the object well, your test is very readable.

What we have done is combined the maintainability and flexibility that is afforded by the Test Data Builder pattern with the terseness afforded by the Object Mother by using Object Mothers that return Test Data Builders. This overcomes all of the disadvantages noted above with the Object Mother pattern.

In order to make it easy to find objects we also use a static partial class for the Object Mother – this is one of the few really nice uses I've seen for partial classes. Basically, we have an ObjectMother folder / namespace in our test project that contains a file for every entity we are generating, each declaring a static class nested within the root ObjectMother class. This ensures that all test data is obtained by first typing "ObjectMother." – it's really consistent and easily discoverable, without being an unmaintainable mess of a god object.

For instance we might have (simple example):

// Helpers\ObjectMother\Customers.cs
using MyProject.Tests.Helpers.Builders;

namespace MyProject.Tests.Helpers.ObjectMother
{
    public static partial class ObjectMother
    {
        public static class Customers
        {
            public static CustomerBuilder Robert
            {
                get { return new CustomerBuilder().WithFirstName("Robert"); }
            }
            public static CustomerBuilder Matt
            {
                get { return new CustomerBuilder().WithFirstName("Matt"); }
            }
        }
    }
}

If our test requires any small tweaks to the predefined object then you don't necessarily need to add another property to your Object Mother – you can use the methods on the builder, e.g. ObjectMother.Customers.Robert.WithLastName("Moore").Build();

We generally stick to using properties rather than methods in the object mothers, since the builders mean it's not necessary to pass any parameters to get your pre-canned object out. Occasionally we do use methods, when there is something meaningful to configure that makes sense in the object mother, e.g. ObjectMother.Members.MemberWithXProducts(int numProducts).

Making the Test Data Builders terse

One argument against this approach, particularly when you first start out your test project (or indeed if you decide to refactor it to use this technique after the fact), is that you are writing a lot of code to get it all up and running. We decided to avoid that by using a number of techniques:

  • We set any reasonable defaults in our builder constructor so new CustomerBuilder().Build() will give a reasonable object (unless, for that particular type of object, there are properties that always make sense to specify, in which case we don't add a default for them).
  • We have created a base class that lets us set/get property values in a dictionary, using a lambda expression that identifies the property whose value is being set/retrieved – this eliminates the need for most of the private variables in the builder (sometimes you still need to keep state in the builder for something not expressed by one of the properties); a minimal sketch of this follows below.
  • We only add fluent methods for the properties we are actually using in our tests at that point in time – this means we don't have dead methods lying around in the builders and initially they are very terse.
  • We have a base class that defines a lot of the common infrastructure behind defining a builder (including the ability to return a proxy object and the ability to create a list of builders using NBuilder – I've open sourced the code as NTestDataBuilder).

This generally means we can be up and running with an object mother and builder for a new entity within a minute, and it's well worth doing it from the start with all our objects to keep consistency in the codebase and ensure all tests are maintainable, terse and readable.
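
Here is a minimal sketch of how such a Set/Get base class might work (the real implementation in NTestDataBuilder is more fully featured – e.g. proxy and list support – so treat the details here as illustrative; it also reuses the hypothetical IBuilder<T> interface from the earlier sketch):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public abstract class TestDataBuilder<TObject, TBuilder> : IBuilder<TObject>
    where TObject : class
    where TBuilder : TestDataBuilder<TObject, TBuilder>
{
    // Property values are stored against the property name rather than in private fields
    private readonly Dictionary<string, object> _properties = new Dictionary<string, object>();

    public TObject Build()
    {
        return BuildObject();
    }

    protected abstract TObject BuildObject();

    protected void Set<TProperty>(Expression<Func<TObject, TProperty>> property, TProperty value)
    {
        _properties[GetPropertyName(property)] = value;
    }

    // Returns the stored value, or the type's default if it was never set
    protected TProperty Get<TProperty>(Expression<Func<TObject, TProperty>> property)
    {
        object value;
        return _properties.TryGetValue(GetPropertyName(property), out value)
            ? (TProperty)value
            : default(TProperty);
    }

    private static string GetPropertyName<TProperty>(Expression<Func<TObject, TProperty>> property)
    {
        return ((MemberExpression)property.Body).Member.Name;
    }
}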

Here is an example of what I mean from the NTestDataBuilder library I have released:

public class CustomerBuilder : TestDataBuilder<Customer, CustomerBuilder>
{
    public CustomerBuilder()
    {
        WithFirstName("Rob");
        WithLastName("Moore");
        WhoJoinedIn(2013);
    }

    public CustomerBuilder WithFirstName(string firstName)
    {
        Set(x => x.FirstName, firstName);
        return this;
    }

    public CustomerBuilder WithLastName(string lastName)
    {
        Set(x => x.LastName, lastName);
        return this;
    }

    public CustomerBuilder WhoJoinedIn(int yearJoined)
    {
        Set(x => x.YearJoined, yearJoined);
        return this;
    }

    protected override Customer BuildObject()
    {
        return new Customer(
            Get(x => x.FirstName),
            Get(x => x.LastName),
            Get(x => x.YearJoined)
        );
    }
}

Consistency == Maintainability

I think I can categorically say that the most important thing I've learnt about software engineering is this: consistency is the single biggest factor in making software maintainable. It's something I feel very strongly about.

If you have awful code or non-standard concepts in your code then, as long as you apply them consistently across your application, anyone who needs to maintain that code can do so effectively once they get their head around what's going on. It comes down to readability and comprehension: the quicker and easier it is for someone to read a code snippet and understand what is going on (no matter how badly it makes their eyes bleed), the easier it is for them to identify problems and introduce a fix.

I'm not saying that writing best-practice code and using principles like DI, DRY and YAGNI won't make something maintainable – they do, and they are critically important when you are developing software! What I am saying is that they don't have as big an impact on maintainability as consistency. Furthermore, from a pragmatic perspective, sometimes you inherit code that wasn't written with best-practice approaches and you need to maintain it; yes, you could spend a whole heap of time rewriting it, but at the end of the day, as long as it's consistent it's maintainable, and there may well be more valuable uses of your time.

It does raise an interesting question about refactoring though: if you have inherited a crappy codebase and decide a major refactor will deliver value, it's important to refactor in a way that leaves the codebase in a consistent state!

Coding Naming Conventions: Australian (or British) vs. American English

I really dislike American spelling of words. Nothing personal, but growing up knowing how words are spelt and then seeing them spelt "wrong" is frustrating (particularly when you can't turn off a US spell-checker and it keeps "auto-correcting" your words to incorrect spellings).

Something my team discussed this week was what our naming convention should be for words that are spelt differently in US English vs. Australian English (the most common one to come up lately being finalise vs. finalize).