Acceptance Tests Structure [Automated Testing Series]

This is part of my ongoing Automated Testing blog series:

Acceptance Tests Structure

When writing high-level acceptance tests (as opposed to unit tests) I will always try to use separate methods for the Given, When and Then, since the Given (and possibly the When) is usually complex enough that each may need multiple methods of its own. My favourite framework by far for writing tests with this structure is Bddfy (disclaimer: I am a core contributor to the TestStack organisation). When I say high-level acceptance tests I am usually referring to automated UI / full-system tests, but they could also be complex subcutaneous tests (where subcutaneous tests fit in is a part of my thinking I’m not quite sure of yet).

I’ve said before, and still maintain, that consistency is the most important aspect of keeping a software application maintainable. So, within a particular set of tests, if you are writing single-method Arrange, Act, Assert tests then you shouldn’t mix them with something like Bddfy, since the two styles are wildly different. I do feel that the test-per-class technique for structuring tests that I described in the last post in the series is OK to mix with AAA tests though, as I discussed in that post.

The above two paragraphs have led me to the following thoughts:

  • I keep my high-level acceptance tests in a separate project from my unit/integration/etc. tests because:
    • They are specified inconsistently with the other tests, as discussed above
    • As seen below, the way I structure the tests into namespaces / folders and the way I name the test classes is very different too
    • They have a different purpose and intent – one checks that your implementation is sound / helps design the technical implementation, while the other specifies/checks the business requirements of the system, i.e. the concept of an executable specification
  • If possible use something like Bddfy and a Specification base class (see below) that allows you to specify the implementation of your scenario
    • Yes, I know about SpecFlow, but I don’t think the maintenance overhead is worth it unless your product owner is actually writing the specifications – and by all accounts I’ve heard (and based on my own experience) it’s tricky for them to get right and the developers end up writing the scenarios anyway. Do yourself a favour: use a framework that is built for developers and get the devs to sit with the product owner to write the test – that’s what each is good at!
    • One of the many cool things about Bddfy is its reporting functionality – out of the box it generates an HTML report, but it’s also flexible enough to allow you to define your own output; I think this fits in nicely with the idea of an executable specification
  • I’ve used the base class shown below to make it really easy to define Bddfy tests – you then simply need to define methods according to the Bddfy conventions and it will automatically pick them up
    • If you want to reuse Givens, Whens or Thens then simply use inheritance (e.g. you might define an AuthenticationUserSpecification that extends Specification and provides a GivenUserIsAuthenticated method)
    • If you need to use a data-driven test then see below for an example of how to do it

Basic Specification base class

You can add any setup / teardown that is required for your purposes, or wrap the Run method code if needed – e.g. for automated UI tests I catch any exception from the this.BDDfy() call and take a screenshot (see the sketch after the class below).

public abstract class Specification
{
    [Test]
    public virtual void Run()
    {
        this.BDDfy();
    }
}
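
As a rough sketch only (the ScreenshotCapture helper is hypothetical – substitute whatever screenshot support your UI automation library provides), the wrapped version for automated UI tests might look something like this:

public abstract class UiSpecification
{
    [Test]
    public virtual void Run()
    {
        try
        {
            this.BDDfy();
        }
        catch (Exception)
        {
            // ScreenshotCapture is a made-up helper for illustration;
            // the current test name gives the screenshot a meaningful file name
            ScreenshotCapture.TakeScreenshot(TestContext.CurrentContext.Test.FullName);
            throw;
        }
    }
}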

For a more advanced example of this base class including a generic version that identifies the SUT and provides auto-mocking (in this case the tests are unit tests rather than acceptance tests) check out the TestStack.Seleno tests.

Example of extracting common parts of scenarios

Here is an example of how you might pull common logic out into a customised specification base class. In this case it demonstrates what a base class might look like for scenarios that start with the user not being logged in, using the TestStack.Seleno library.

public abstract class UnauthenticatedSpecification : Specification
{
    protected MyAppPage InitialPage;

    public void GivenNotLoggedIn()
    {
        InitialPage = Host.Instance
            .NavigateToInitialPage<MyAppPage>()
            .Logout();
    }
}

Example test class

Here is an example of what an actual test class might look like. In this case it is a test that ensures a user can register for an account on the site and extends the example UnauthenticatedSpecification above. There is some code missing here about how the page objects are implemented, but it’s fairly readable so you should be able to follow what’s happening:

public class ValidRegistration : UnauthenticatedSpecification
{
    private RegistrationViewModel _model;
    private MyAppPage _page;
    private Member _member;

    public void AndGivenValidSubmissionData()
    {
        _model = GetValidViewModel();
    }

    public void AndGivenUserHasntAlreadySignedUp()
    {
        // In this instance the Specification base class has helpfully exposed an NHibernate Session object,
        //  which is connected to the same database as the application being tested
        Session.Delete(Session.Query<Member>().SingleOrDefault(m => m.Email == _model.Email));
    }

    public void WhenVisitingTheJoinPageFillingInTheFormAndSubmitting()
    {
        _page = InitialPage
            .Register()
            .Complete(_model);
    }

    public void ThenTheMemberWasCreated()
    {
        _member = Session.Query<Member>()
            .Single(m => m.Email == _model.Email);
    }

    public void AndTheMembersNameIsCorrect()
    {
        _member.FirstName.ShouldBe(_model.FirstName);
        _member.LastName.ShouldBe(_model.LastName);
    }

    public void AndTheMemberCanLogIn()
    {
        _page.Login()
            .Successfully(_model.Email, _model.Password);
    }

    private RegistrationViewModel GetValidViewModel()
    {
        // This is using the NBuilder library, which automatically populates public properties with values
        // I'm overriding the email and password to make them valid for the form submission
        return Builder<RegistrationViewModel>.CreateNew()
            .With(x => x.Email = "email@domain.tld")
            .With(x => x.Password = "P@ssword!")
            .Build();
    }
}

Demonstration of data-driven test

I mentioned above that it is possible to have a data-driven test with the Specification base class. You can do this using a hack that Jake Ginnivan showed me (when we were creating acceptance tests for GitHubFlowVersion) for XUnit, and a technique I have found that works for NUnit. I’ve put a full code example in a gist, but see below for an abridged sample.

I should note that I’ve only tried these techniques with the ReSharper test runner, so it’s possible they behave differently with other runners. Another option would be to not inherit the base class in the situations where you need data-driven tests, but I typically find that I would then lose a bunch of useful setup / functionality by doing that; YMMV.

NUnit

You might have noticed I made the Run method in the Specification base class virtual. If you do that then you can add the following to your test class and the data-driven test here will run instead of the test method in the base class. The base test does show up as ignored though – alternatively you could assign a category to it and exclude that category from your test runs:

    [Test]
    [TestCase(1)]
    [TestCase(2)]
    [TestCase(5)]
    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
        base.Run();
    }

    [Ignore]
    public override void Run() { }

XUnit

You don’t have to make the base Run method virtual for this one (but it doesn’t hurt if it is). Note that for XUnit the base class’s Run method would be decorated with [Fact] rather than NUnit’s [Test]; also, the test from the base class comes up as Inconclusive in the ReSharper test runner (just ignore it):

    [Theory]
    [InlineData(1)]
    [InlineData(2)]
    [InlineData(5)]
    public void RunSpecification(int someValue)
    {
        _privatePropertyTheTestMethodsUse = someValue;
        base.Run();
    }

    public new void Run() {}

Namespaces / folder structure

When structuring the acceptance test project I like to have a folder/namespace called Specifications and underneath that a folder for each feature being tested. Then there can be a file within each feature for each scenario being tested (I generally name the scenario with a short name to easily identify it).

So, for instance the valid registration test above might be found at Specifications.Registration.ValidRegistration.

General Test Structure [Automated Testing Series]

This is part of my ongoing Automated Testing blog series:

General Test Structure

Recently I found myself experimenting with demon-coder, fellow Readifarian, and all-round-nice-guy Ben Scott (o/ – that’s for Ben; he will understand) on extracting shared Given, When and/or Then sections of tests by using a pattern whereby the Given and possibly the When of a test happen in the test setup (or the constructor if using XUnit). This allows each assertion (or Then) that you want to make to live in a separate test method, which avoids both the problem of multiple somewhat/completely unrelated asserts in a single test and massive duplication of test setup.

I realise there are other ways of dealing with the duplication such as abstracting common logic into private methods (this is the technique I have used in the past), but I’ve found the above solution to be much nicer / cleaner and clearer in intent. It’s also worth noting that there are frameworks that directly support testing with the Given, When, Then structure and I cover that further in a later post in this series (as well as why I don’t just use them all the time).

I’ve created a set of code samples to illustrate the different techniques at https://gist.github.com/robdmoore/8634204:

  • 01_Implementation.cs contains an example mapper class to be tested – it is mapping from some sort of measurement of a value that has been taken (say, from an API or a hardware device) with a confidence and some kind of identifier to an object that represents the identity (broken down into component parts), the measurement value and an adjusted value (based on the confidence). It’s a fairly contrived example, but hopefully it’s understandable enough that it illustrates my point.
  • 02_MultipleAsserts.cs shows a single Arrange Act Assert style test that contains multiple asserts for each of the mapped properties and has description strings for each assert so that you can tell which assert was the one that failed.
  • 03_RelatedMultipleAsserts.cs partially addresses the main problem in the first test (multiple somewhat unrelated asserts) by grouping the related asserts together – this introduces repetition in the tests though.
  • 04_AbstractedCommonCode reduces the duplication by abstracting out the common logic into private methods and provides a separate test for each assert – because of this the assertions don’t need a description (since there is one assert per test), but the test class is a lot more verbose and there is duplication both across test names and test content.
  • 05_GivenAndWhenInSetup.cs demonstrates the technique I describe above; in this case the constructor of the class (it could also have been a [SetUp] method) contains both the Arrange/Given and Act/When parts of the test, and each test method is very slim (containing just the assertion) and is named with just the Then – see the sketch just after this list
    • In NUnit you can use [SetUp] and in XUnit the constructor, but note that this means the code will run for every test (which is usually fine); if the code you are testing takes a reasonable amount of time to run then you can run it once per fixture instead – in NUnit use the fixture constructor or [TestFixtureSetUp], and in XUnit use IUseFixture<TFixture>, or a static constructor if you also need the instance constructor (say, to perform the When or to allow slightly different behaviour across inherited classes) since the fixtures get injected after the constructor is called
    • It’s worth noting this class is still a lot bigger than the single AAA test, but for a situation like this I feel it’s a lot more understandable and better encapsulates the behaviour being tested; this technique also comes into its own when there is duplication of Given and/or When logic across multiple AAA tests (you can use inheritance to reuse or extend the Given/When code)
  • 06_ConcernForBaseClass.cs shows a variation of the above strategy that provides a common base class that can be used in tests to help make the Given and When code more explicit as well as identifying what the subject under test is.
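
To give a feel for the 05_GivenAndWhenInSetup.cs style, here is a minimal sketch only – the MeasurementMapper / Measurement types and values are made up for illustration and won’t match the gist exactly:

public class WhenMappingAMeasurement
{
    private MappedMeasurement _result;

    [SetUp]
    public void Setup()
    {
        // Given a measurement with a known value and confidence
        var measurement = new Measurement("device-1/reading-2", value: 10m, confidence: 0.5m);
        var mapper = new MeasurementMapper();

        // When mapping it
        _result = mapper.Map(measurement);
    }

    [Test]
    public void ThenTheMeasurementValueIsMapped()
    {
        Assert.That(_result.Value, Is.EqualTo(10m));
    }

    [Test]
    public void ThenTheAdjustedValueTakesTheConfidenceIntoAccount()
    {
        Assert.That(_result.AdjustedValue, Is.EqualTo(5m));
    }
}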

Another technique to reduce the setup of tests is to use AutoFixture’s auto data attributes (XUnit and NUnit are supported) to inject any variables that you need into the test. I’m not going to cover that in this post at this stage because a) I haven’t used it in anger and b) I suspect it’s too complex for teams that aren’t all experienced (i.e. it’s a bit too “magic”). It is very cool though so I highly encourage you to at least evaluate it as an option.

There are a number of things to note about what I’ve been doing with this technique so far:

  • We still managed to keep the convention of having a test described by Given (where appropriate; sometimes you don’t need the Given), When and Then by ensuring that the combination of the class name and the test name cater for them
    • e.g. a class called GivenX_WhenY or just WhenY with a bunch of test methods called ThenZ or class called GivenX with a bunch of test methods called WhenY_ThenZ
  • You can reuse and extend common parts of your test logic by using inheritance e.g. define a base class with common Given and/or When and extend it for different variations of the Given and either reuse Then’s from the base class and/or define new ones for each inheriting class
    • I’ve seen some people use this technique with 5 levels deep of nested classes with one line of code being reused in some of the hierarchy; while I’m a massive fan of DRY I think that there is a point where you can go overboard and in the process lose understandability – 5 level deep nested test classes is in my opinion completely overboard
  • An interesting side-effect of this approach is that rather than having one test class per thing being tested you end up with multiple classes
    • I have always followed a convention of putting a test file in the same namespace as the class being tested (except in the Test project and post-fixing with Tests) e.g. MyApp.Commands.CreateInvoiceCommand would be tested by MyApp.Tests.Commands.CreateInvoiceCommandTests, but that doesn’t work when you have multiple test files so what I’ve taken to doing is making a folder at MyApp.Tests\Commands\CreateInvoiceCommandTests, which contains all the test classes
    • This allows me to mix and match folders (where needed) and single test classes (sometimes a single class is enough so I don’t bother creating a folder) while keeping it clear and consistent how and where to find the tests
  • Not all tests need to be written in this way – sometimes a simple, single AAA test (or a bunch of them) is enough
    • As you can see from the above code samples the AAA tests tend to be terser and quicker to write so when there isn’t much duplicated logic across tests and you only need one assertion or the assertions are one logical assertion (i.e. belong in a single test) then there is no reason to stray from single AAA tests in my opinion
    • I don’t feel that a combination of AAA tests and the common setup tests cause a consistency issue because it’s fairly easy to trace what’s happening and the common setup logic falls within the normal bounds of what you might have for AAA tests anyway
    • I find it’s easier to create data-driven tests using single method tests because you have so much more flexibility to do that using NUnit and XUnit – if you find a good way to do it with test-per-class rather than test-per-method let me know!
      • NUnit’s TestFixture attribute goes some of the way, but it’s not as powerful as something like TestCase or TestCaseSource
  • There is a base class (ConcernFor) that we were using for some of the tests, which I included in the last code sample above – see below for a rough sketch of the idea
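
This is only my sketch of the general shape of a ConcernFor-style base class (see the gist for the real 06_ConcernForBaseClass.cs); the point is that it makes the Given and When explicit and identifies the subject under test:

public abstract class ConcernFor<TSubject>
{
    protected TSubject Sut;

    [SetUp]
    public void GivenWhenSetUp()
    {
        Given();
        Sut = CreateSubject();
        When();
    }

    protected virtual void Given() { }
    protected abstract TSubject CreateSubject();
    protected abstract void When();
}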

Test Naming [Automated Testing Series]

For the last 6 months I’ve been thinking and reading a lot about how best to write automated tests for applications – including data generation, structure, naming, etc. This blog series is a foray into my current thinking (which will probably change over time; for instance I’m looking back at tests I wrote 6 months ago that I thought were awesome and now thinking that I hate them :P). I’m inviting you to join me on this journey by publishing this series and I encourage thoughtful comments to provoke further thinking and discussion.

Test Naming

I have always been a fan of the Arrange Act Assert structure within a test and started by naming my test classes like <ThingBeingTested>Should with each test method named like A_statement_indicating_what_the_thing_being_tested_should_do. You can see examples of this type of approach in some of my older repositories such as TestStack.FluentMVCTesting.

More recently I’ve started naming my tests with a Given, When, Then description (be it an acceptance test or a unit test). I like the Given, When, Then method naming because:

  • Generally the Given will map directly to the Arrange, the When will map directly to the Act and the Then will map directly to the Assert – this provides a way of quickly cross-checking that a test does what the name suggests by comparing the name to the implementation (see the small example after this list)
    • I find this really useful when reviewing pull requests to quickly understand that the author of a test didn’t make a mistake and structured their test correctly
    • e.g. it’s probably wrong if there is a Given in the name, but no clear Arrange section
  • It requires that you put more thought into the test name, which forces you to start by thinking about what scenario you are trying to test rather than “phoning in” an arbitrary test name and diving into the implementation and focussing on how you are testing it
    • My gut feel along with anecdotal evidence from the last few projects I’ve worked on – with developers of varying skill levels – suggests this leads to better written, more understandable tests
    • Side note: I really like the idea discussed by Grzegorz Gałęzowski in his TDD e-book about the order in which you should write a TDD test (from the section “Start From The End”) – he suggests you start with the Then section and then work backwards from there
      • I can’t honestly say I usually do this though, but rather just that it sounds like a good idea; generally I know exactly what I want to write when writing a test (yes, it’s possible I’m just naive and/or arrogant ;))
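
To make the mapping concrete, here is a small made-up example (the Invoice class is hypothetical) showing how each part of the name lines up with a section of the test:

[Test]
public void GivenTwoLineItems_WhenCalculatingTheTotal_ThenTheTotalIsTheSumOfTheLineItems()
{
    // Arrange (Given): two line items
    var invoice = new Invoice();
    invoice.AddLineItem(10m);
    invoice.AddLineItem(15m);

    // Act (When): calculating the total
    var total = invoice.CalculateTotal();

    // Assert (Then): the total is the sum of the line items
    Assert.That(total, Is.EqualTo(25m));
}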

To clarify, I still use Arrange, Act, Assert in my tests. I could write a whole blog post on how I structure the AAA sections in my tests, but luckily I don’t have to because Mark Seemann has already written a really good post that mimics my thoughts exactly.

NQUnit update

I’ve just pushed a new version of NQUnit and NQUnit.NUnit (1.0.5). This release updates the libraries to use the latest versions of WatiN, NUnit, jQuery and QUnit. I’ve also made some documentation improvements. You can see the latest code and documentation on GitHub.

It’s worth noting that while NQUnit still works perfectly well, there is a better option that I usually recommend in preference: Chutzpah. I’ve updated NQUnit as per pull requests that the library received – while there are still people using the library I’m happy to keep it up to date.

Test Harness for NuGet install PowerShell scripts (init.ps1, install.ps1, uninstall.ps1)

One thing that I find frustrating when creating NuGet packages is the debug experience when it comes to creating the PowerShell install scripts (init.ps1, install.ps1, uninstall.ps1).

In order to make it easier to do the debugging I’ve created a test harness Visual Studio solution that allows you to make changes to a file, compile the solution, run a single command in the Package Manager Console and then have the package uninstall and install again. That way you can change a line of code, hit a few keystrokes and then see the result straight away.

To see the code you can head to the GitHub repository. The basic instructions are on the readme:

  1. [Once off] Checkout the code
  2. [Once off] Create a NuGet package source pointing at the checkout directory
  3. Repeat in a loop:
    1. Write the code (the structure of the solution is the structure of your nuget package, so put the appropriately named .ps1 scripts in the tools folder)
    2. Compile the solution (<F6> or <Ctrl+Shift+B>) – this creates a NuGet package (Package.{yymm}.{ddHH}.{mmss}.nupkg) in the root of the solution; because the package version increases with time, installing from that directory will always install the latest build
    3. Switch to the Package Manager Console <Ctrl+P, Ctrl+M>
    4. [First time] Uninstall-Package Package; Install-Package Package <enter> / [Subsequent times] <Up arrow> <enter>
  4. When done simply copy the relevant files out and reset master to get a clean slate


Presentation and example code for test fixture data generation

This week I gave a presentation to the Perth .NET user group about the content in my test fixture data generation post. As part of that I created some sample code that illustrates what the outcome of each of the different techniques I outline in that post might look like – if you are interested in exploring these techniques or seeing some somewhat realistic examples of using the NTestDataBuilder library then I encourage you to take a look.

Furthermore, I’ve just released version 1.0 of the NTestDataBuilder library – my team has been successfully using it for a number of weeks now and I’m happy with the functionality in it.

Enjoy!

Announcing NTestDataBuilder library

Further to the blog post I published yesterday about the approach to generating test fixture data for automated tests that my team has adopted, I have open sourced a base class and other supporting code infrastructure to quickly, tersely and maintainably generate fixture data in a consistent way.

I have released this library to GitHub and NuGet under the name NTestDataBuilder.

The GitHub page has a bunch of code snippets that will get you started quickly. Read my previous blog post to get a feel for how I use the code and in particular how I combine the approach with the Object Mother pattern for best effect.

Enjoy!

Test Data Generation the right way: Object Mother + Test Data Builders + NSubstitute + NBuilder

Generating test data for automated testing is an area that I have noticed can get very, very tedious as your application gets more and more complex. More importantly, it’s an area that I have noticed results in hard-to-maintain test logic. If, like me, you are a proponent of treating test code with the same importance as production code, then you will want to refactor your test logic to make it simpler, more maintainable and DRY.

In my last two projects I have been working with my team to come up with a good solution for this problem and I’m pretty stoked with what we’ve got so far. While I certainly had my hand in the end result, I can’t claim all of the credit – far from it! Big props to my team mates Matt Kocaj and Poya Manouchehri for significant contributions to this.

Newing up objects everywhere

If you organically grow your automated test suite without thinking about how you are generating your test data then it’s likely you will end up with a lot of tests whose arrange sections are full of object instantiation, and these sections are likely to get more and more complex as your object model does (what if you need a customer that has 3 orders, each with 3 products? – see the sketch below).
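
For example, with hypothetical Customer / Order / Product classes (not from any real project), the arrange section for the “customer with 3 orders, each with 3 products” case can easily end up looking something like this, repeated with small variations across many tests:

// Arrange
var customer = new Customer("Rob", "Moore", 2013);
for (var orderNumber = 1; orderNumber <= 3; orderNumber++)
{
    var order = new Order(customer, DateTime.UtcNow);
    for (var productNumber = 1; productNumber <= 3; productNumber++)
    {
        order.AddProduct(new Product("Product " + productNumber, price: 10m));
    }
    customer.AddOrder(order);
}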

If you are diligent with refactoring your tests then you will likely start pulling out common instantiations to private methods (or similar), but even then you are likely to notice the following patterns start to emerge:

  • If you need to change the constructor of one of your classes there will be many, many instantiations using that constructor across your test project and so that refactor becomes a pain (and you are likely to do hacks like add default values that you wouldn’t otherwise when adding new parameters to make it easier).
  • There is likely to be a lot of repetition across different test classes in the objects that are created.
  • Building lists is really verbose and will likely feature a lot of repetition / hard-to-read code.
    • You can bring in libraries like NBuilder to make this easier and more readable.
    • You can’t use the best parts of NBuilder if you avoid public property setters (which you may well do so that you can enforce domain invariants).
  • Your approach to generating data will be largely inconsistent across test classes – some will have private methods, some will do it inline, some might delegate to factories or helper classes etc. – this makes the code hard to understand and maintain.
  • It’s not immediately clear which of the parameters you are passing into the constructor are there to meet the requirements of the call and which are there because the test dictates it – this means the intent of your test is somewhat obscured

Object Mother

A good approach to start to address the problem is to implement the Object Mother pattern. This allows you to quickly and tersely create pre-canned objects and give them a descriptive name. Now instead of having complex chains of object instantiations you have one line of code (e.g. ObjectMother.CustomerWith3FilledOrders), which is a lot more descriptive and your tests become simpler as a result. This also helps ensure that your approach for data generation is consistent, and thus more maintainable.
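
As a sketch (reusing the hypothetical classes from above), a basic Object Mother might look something like this:

public static class ObjectMother
{
    public static Customer CustomerWith3FilledOrders
    {
        get
        {
            var customer = new Customer("Rob", "Moore", 2013);
            for (var orderNumber = 1; orderNumber <= 3; orderNumber++)
            {
                var order = new Order(customer, DateTime.UtcNow);
                order.AddProduct(new Product("Product 1", price: 10m));
                order.AddProduct(new Product("Product 2", price: 20m));
                order.AddProduct(new Product("Product 3", price: 30m));
                customer.AddOrder(order);
            }
            return customer;
        }
    }
}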

If you just use Object Mother though, you will likely observe the following problems:

  • While your constructor calls are likely to be in one file now, there is still potentially a lot of calls so it’s still tricky to make constructor changes.
  • As soon as you have a small tweak needed for the data returned there is a tendency to simply create a new Object Mother property.
  • Because of this, the Object Mother class can quickly get unwieldy and a nightmare to maintain and understand – it becomes a god object of sorts.

Test Data Builder

An approach that seems to have stemmed from the Object Mother pattern is the application of the builder pattern to result in the Test Data Builder pattern. This involves creating a builder class responsible for creating a particular type of object via a fluent interface that keeps track of the desired state of the object being built before being asked to actually build the object.
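
To give a feel for the shape of the fluent interface, usage of such a builder might look like the following (this uses the CustomerBuilder that appears in full towards the end of this post):

var customer = new CustomerBuilder()
    .WithFirstName("Rob")
    .WhoJoinedIn(2012)
    .Build();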

This provides a number of advantages:

  • Your constructor calls will be in your test project once only making them easy to change.
  • You have a readable, discoverable, fluent interface to generate your objects making your tests easy to read/write/maintain.
  • You are consistent in how you generate your test data.
  • It works with objects that you don’t have public property setters for since there is an explicit build action, which can invoke the constructor with the parameters you have set.
  • It’s flexible enough to give you the ability to express and perform actions on your object after it’s constructed – e.g. specify that an order is paid (which might be a Pay() method rather than a constructor parameter; you wouldn’t expect to be able to create an order in the paid state!).
  • Because of this your builder objects provide a blueprint / documentation for how to interact with your domain objects.
  • The test data builder can contain sensible defaults for your object instantiation and thus your test only needs to specify the values that need to be arranged thus making the intent of your test very clear
  • We have added a method to our builders (.AsProxy()) that allows the object returned to be an NSubstitute substitute (a proxy / mock object if you aren’t familiar with that library) rather than a real object (with the public properties set to automatically return the values specified in the builder). See below for a code snippet.
    • This means that when we need substitutes to check whether certain methods are called or to ensure that certain method calls return predetermined values we can generate those substitutes consistently with how we generate the real objects.
    • This is very powerful and improves understandability and maintainability of the tests.
    • In general we try and avoid using substitutes because it means that it’s possible to get objects that violate domain invariants, however, on occasion it’s very useful to properly unit test a domain action when it interacts with other domain objects

Here is a (somewhat contrived) example of the .AsProxy() method described above.

[Test]
public void Given10YearMembership_WhenCalculatingDiscount_ThenApply15PercentLongMembershipDiscount()
{
    // AsProxy means the builder returns an NSubstitute substitute rather than a real Member
    var member = new MemberBuilder().AsProxy().Build();
    // Assuming GetYearsOfMembership takes a date to calculate the membership length against
    member.GetYearsOfMembership(Arg.Any<DateTime>()).Returns(10);

    var discount = _discountCalculator.CalculateDiscountFor(member, new DateTimeProvider());

    Assert.That(discount, Is.EqualTo(0.15));
}

Combining NBuilder and the Test Data Builders

Another cool thing we have done is to combine our test data builders with NBuilder to allow for terse, expressive generation of lists of objects while still supporting objects that don’t have public property setters. We do this by generating a list of builders with NBuilder and then iterating that list to build each builder and get the actual object. Here is an example:

var members = Builder<MemberBuilder>.CreateListOfSize(4)
    .TheFirst(1).With(b => b.WithFirstName("Rob"))
    .TheNext(2).With(b => b.WithFirstName("Poya"))
    .TheNext(1).With(b => b.WithFirstName("Matt"))
    .Build()
    .Select(b => b.Build());

We have made the above code even terser by adding some extension methods to allow for this:

var members = MemberBuilder.CreateList(4)
    .TheFirst(1).With(b => b.WithFirstName("Rob"))
    .TheNext(2).With(b => b.WithFirstName("Poya"))
    .TheNext(1).With(b => b.WithFirstName("Matt"))
    .BuildList();

Adding back Object Mother

One of the really nice things about the Object Mother pattern is that you reduce a lot of repetition on your tests by re-using pre-canned objects. It also means the tests are a lot terser since you just specify the name of the pre-canned object that you want and assuming that name describes the pre-canned object well then your test is very readable.

What we have done is combined the maintainability and flexibility that is afforded by the Test Data Builder pattern with the terseness afforded by the Object Mother by using Object Mothers that return Test Data Builders. This overcomes all of the disadvantages noted above with the Object Mother pattern.

In order to make it easy to find objects we also use a static partial class for the Object Mother – this is one of the few really nice uses I’ve seen for partial classes. Basically, we have an ObjectMother folder / namespace in our test project containing a file for every entity we generate, and each file adds a static nested class to the root ObjectMother class. This ensures that all test data is reached by first typing “ObjectMother.” – this makes it really consistent and easily discoverable, while avoiding an unmaintainable mess of a god object.

For instance we might have (simple example):

// Helpers\ObjectMother\Customers.cs
using MyProject.Tests.Helpers.Builders;

namespace MyProject.Tests.Helpers.ObjectMother
{
    public static partial class ObjectMother
    {
        public static class Customers
        {
            public static CustomerBuilder Robert
            {
                get { return new CustomerBuilder().WithFirstName("Robert"); }
            }
            public static CustomerBuilder Matt
            {
                get { return new CustomerBuilder().WithFirstName("Matt"); }
            }
        }
    }
}

If our test requires any small tweaks to the predefined object then you don’t necessarily need to add another property to your object mother – you can use the methods on the builder, e.g. ObjectMother.Customers.Robert.WithLastName("Moore").Build();

We generally stick to using properties rather than methods in the object mothers since the builders mean that it’s not necessary to pass any parameters to get your pre-canned object out, but occasionally we do have some that have methods when there is something meaningful to configure that makes sense in the object mother e.g. ObjectMother.Members.MemberWithXProducts(int numProducts).

Making the Test Data Builders terse

One argument against this approach, particularly when you first start out your test project (or indeed if you decide to refactor it to use this technique after the fact) is that you are writing a lot of code to get it all up and running. We decided to avoid that by using a number of techniques:

  • We set any reasonable defaults in our builder constructor so new CustomerBuilder().Build() will give a reasonable object (unless for that particular type of object there are any properties that make sense to always have to specify, in which case we don’t add a default for that property).
  • We have created a base class that allows us to set/get the property values into a dictionary using a lambda expression that identifies the property whose value is being set/retrieved – this reduces the code in the builder by eliminating the need for most (sometimes there is still a need to keep state in the builder where you are storing something not expressed by one of the properties) of the private variables in the builder.
  • We only add fluent methods for the properties we are actually using in our tests that point in time – this means we don’t have dead methods lying around in the builders and initially they are very terse.
  • We have a base class that defines a lot of the common infrastructure behind defining a builder (including the ability to return a proxy object and the ability to create a list of builders using NBuilder – I’ve open sourced the code as NTestDataBuilder).

This generally means that we can be up and running with an object mother and builder for a new entity within a minute and it’s well worth doing it from the start with all our objects to keep consistency in the codebase and ensure all tests are maintainable, terse and readable from the start.

Here is an example of what I mean from the NTestDataBuilder library I have released:

class CustomerBuilder : TestDataBuilder<Customer, CustomerBuilder>
{
    public CustomerBuilder()
    {
        WithFirstName("Rob");
        WithLastName("Moore");
        WhoJoinedIn(2013);
    }

    public CustomerBuilder WithFirstName(string firstName)
    {
        Set(x => x.FirstName, firstName);
        return this;
    }

    public CustomerBuilder WithLastName(string lastName)
    {
        Set(x => x.LastName, lastName);
        return this;
    }

    public CustomerBuilder WhoJoinedIn(int yearJoined)
    {
        Set(x => x.YearJoined, yearJoined);
        return this;
    }

    protected override Customer BuildObject()
    {
        return new Customer(
            Get(x => x.FirstName),
            Get(x => x.LastName),
            Get(x => x.YearJoined)
        );
    }
}

TestStack.Seleno 0.4 released

Lately I’ve been spending a fair bit of spare time working hard on getting the TestStack.Seleno project ready for a (rather massive!) 0.4 release. It’s been a lot of fun and I’m quite proud of the impact the core TestStack team and other contributors have made on the library. We feel that it is “production ready” now and will be moving towards a 1.0 release in the somewhat near future.

I won’t bother going into a huge amount of detail on the release because Michael Whelan has done that already, but I’ll list the major changes here as a tl;dr:

  • Added a bunch of global configuration options:
    • Ability to specify a Castle.Core logger factory to intercept our internal logging
    • Ability to more easily specify a non-Firefox web browser
    • Ability to specify a deployed web application out-of-the-box
    • Ability to more explicitly specify your MVC routes
    • Ability to override the minimum (implicit) wait timeout inside of Selenium
  • Ability to explicitly specify the initial page for a test and initialise the first page object in one line of code
  • Continuous Integration support (now runs in TeamCity)
  • A new HTML Control model that provides a nice API on top of Selenium WebDriver to interact with HTML controls (including easy extensibility for your own controls)
  • A clearer public API
  • Improved test coverage and extensive refactoring of the core library code

You can get the latest version of Seleno on NuGet, or check out our GitHub repository for the latest source code and the getting started documentation. Let us know what you think, or if there are any features that you would like to see. Feel free to add an issue or pull request – the more community interaction we get the better we can make Seleno!