Announcing release of ChameleonForms 2.0.0 and new documentation site

I’m somewhat more subdued with my excitement for announcing this than I was for 1.0. In fact I just had a chuckle to myself in re-reading that post 🙂 (oh and if you were wondering – did Matt and I enjoy Borderlands 2? Yes we very much did, it’s a great game).

Nonetheless, there is some really cool stuff in ChameleonForms 2.0 and I’m particularly excited about the new PartialFor functionality, which I will describe below. My peak excitement about PartialFor was months ago when the code was actually written, but Matt and I have had a particularly busy second half of the year with our work roles expanding in scope and a healthy prioritisation of our personal lives so it took a while to get our act together and get the code merged and released.

A range of point releases added a bunch of functionality to ChameleonForms between the 1.0 release and this 2.0 release. You can peruse the releases list to see the features.

New docs site

I’ve taken the lead (as well as a bunch of advice – thanks mate) from Jake Ginnivan and moved the documentation for ChameleonForms to Read the Docs. The new documentation site is now generated from files in the source repository’s docs folder. This is awesome because it means that the documentation is tied to the current state of the software – no more documentation that is ahead of or behind the code, and pull requests can now contain documentation changes corresponding to the code changes.

For those who are curious, the process I followed to migrate from the GitHub wiki to Read the Docs was:

  1. Clone the wiki
  2. Move all the files into the docs folder of the repository
  3. Add a mkdocs.yml file to the root of the repository with all of the files listed in it – see the example after this list (this means I need to keep a list of the files in there, but I don’t mind since it gives me control of the menu; you can omit the mkdocs.yml file if you want and all of the files will be placed in the menu alphabetically)
  4. Sign up for Read the Docs and create a new project linked to the GitHub repository
  5. Enable the fenced code markdown extension
  6. Change all internal documentation links to reference the .md file (in my case I had to search for all links to wiki/* and remove the wiki/ and add in the .md)
  7. Change any occurrences of ```c# to ```csharp (GitHub supports using c# for the fenced code snippet, but mkdocs doesn’t)
  8. Check all of the pages, since some of them might render weirdly – I had to add some extra blank lines between paragraphs and code blocks / bullet lists, for instance, since the markdown parser is slightly different
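
For illustration, here’s a cut-down sketch of what the mkdocs.yml can look like (the page list is abbreviated and illustrative rather than the actual ChameleonForms menu, and the exact syntax varies a little between mkdocs versions):

```yaml
site_name: ChameleonForms
# Listing the pages explicitly gives control over the menu order
pages:
- Home: index.md
- The Form: the-form.md
- Form Sections: form-sections.md
# One way to enable the fenced code markdown extension (step 5)
markdown_extensions:
- fenced_code
```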

Read the Docs supports a bunch of other formats that give more flexibility (e.g. reStructuredText), but I’m very happy with the markdown support.

2.0 minor features and bug fixes

Check out the release notes for the 2.0 release to see a bunch of minor new features and bug fixes that have been contributed by a bunch of different people – thanks to everyone that contributed! It always gives Matt and me a rush when we receive a pull request from someone :).

PartialFor feature

This is the big feature. A few breaking changes went into the 2.0 release in order to make this possible. This is the first of the extensibility features we have added to ChameleonForms.

Essentially, it allows us to put part of a form into a partial view, with full type-safety and intellisense. The partial can be included directly against the form or inside a form section. This makes things like sharing common parts of forms between create and edit screens possible. It allows you to remove even more repetition in your forms, while keeping a clean separation between forms that are actually separate.
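
Here’s a rough sketch of the shape it takes from the calling side (the view model property and partial names are made up, and the exact syntax – including what goes inside the partial – is best gleaned from the acceptance test and docs page linked below):

```csharp
@* Parent view: the page view model has BillingAddress and DeliveryAddress
   sub-model properties and _AddressFields.cshtml is a partial view that is
   strongly-typed to the address sub-model (all names here are made up) *@
@using (var f = Html.BeginChameleonForm()) {
    @* Included directly against the form... *@
    @f.PartialFor(m => m.BillingAddress, "_AddressFields")

    using (var s = f.BeginSection("Delivery")) {
        @* ...or inside a form section *@
        @s.PartialFor(m => m.DeliveryAddress, "_AddressFields")
    }
}
```

The partial itself is a strongly-typed view against the sub-model, so you keep type-safety and intellisense across the boundary.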

The best way to see the power of the feature in its full glory is by glancing over the acceptance test for it. The output should be fairly self-explanatory.

There is also a documentation page on the feature.

Is ChameleonForms still relevant?

We were very lucky to be included in Scott Hanselman’s NuGet package of the week earlier this year. The comments of Scott’s post are very interesting because it seems our library is somewhat controversial. A lot of people are saying that single page applications and the increasing prevalence of JavaScript make creating forms in ASP.NET MVC redundant.

Matt and I have spent a lot more time in JavaScript land than MVC of late and we concede that there are certainly a lot more scenarios now where it doesn’t make sense to break out MVC. That means ChameleonForms isn’t as relevant as when we first started developing it.

In saying that, we still firmly believe that there is a range of scenarios for which MVC is very much appropriate. Where you don’t need the flexibility of an API, and/or you need pure speed of development (in particular when developing prototypes), and/or you’re building CRUD applications or heavily forms-based applications (especially where you need consistency of your forms), we believe MVC + ChameleonForms is very much a good choice and often the best choice.

Recent talks

I recently gave a couple of conference talks:

  • 2015 Yow! West conference: Joint presentation with Matt Davies on Microtesting
    • Do you want to write fewer tests for the same amount of confidence?

      Do you want to print out the testing pyramid on a dot matrix printer, take it outside and set fire to it?

      How confident are you that you can survive the refactoring apocalypse without breaking your tests?

      As consultants, we get to see how testing is performed across many different organisations and we have a chance to experiment with different testing strategies across multiple projects. Through this experience, we have developed a pragmatic process for setting an initial testing strategy that is as simple as possible and iterating on that strategy over time to evolve it based on how it performs. We have also settled on a style of testing that has proved to be very effective at reducing testing effort while maintaining (or even improving) confidence from our tests.

      This talk will focus on some of our learnings and we will cover the different types of testing and how they interact, breaking apart the usual practice of testing all applications in the same way, the mysterious relationship between speed and confidence, how we were able to throw away the testing pyramid and a number of techniques that have worked well for us when testing our applications.

    • Slides published to GitHub
    • There will be a video; I’ll link to it from the GitHub repository when it’s published
  • 2015 ANZ Coders virtual conference: Presentation on Applying useful testing patterns using TestStack.Dossier
    • The Object Mother, Test Data Builder, Anonymous Variable/Value, equivalence class and constrained non-determinism patterns/concepts can help you make your tests more readable/meaningful, more terse and more maintainable when used in the right way.

      This talk will explain why and where the aforementioned patterns are useful and the advantages they can bring, and show examples in code using a library I recently released called TestStack.Dossier (a hand-rolled sketch of the Test Data Builder pattern follows this list).

    • Slides published to GitHub
    • Video
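
To give a taste of what the talk covers, here is a hand-rolled sketch of the Test Data Builder pattern (illustrative only – this is not TestStack.Dossier’s actual API, and Customer is a hypothetical domain type; Dossier removes most of this boilerplate and can supply anonymous values for you):

```csharp
public class CustomerBuilder
{
    // Default values the test doesn't care about; with constrained
    // non-determinism these would be random-but-valid rather than fixed
    private string _firstName = "Rob";
    private string _lastName = "Moore";
    private int _yearJoined = 2013;

    public CustomerBuilder WhoJoinedIn(int yearJoined)
    {
        _yearJoined = yearJoined;
        return this;
    }

    public Customer Build()
    {
        return new Customer(_firstName, _lastName, _yearJoined);
    }
}

// Usage in a test: only the value that matters to the test is specified,
// which keeps the test terse and its intent obvious
// var customer = new CustomerBuilder().WhoJoinedIn(2001).Build();
```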

Azure Resource Manager intro presentation and workshop

I attended the Azure Saturday event here in Perth last weekend. Matt and I did a basic intro presentation on Azure Resource Manager and ran an associated workshop, which we have published to our GitHub organisation.

Azure Resource Manager is one of the most important things to understand about Azure if you plan on using it since it’s the platform that underpins the provisioning and management of all resources in Azure going forward.

Azure Saturday Perth 2015 presentation

Whitepaper: Managing Database Schemas in a Continuous Delivery World

A whitepaper I wrote for my employer, Readify, just got published. Feel free to check it out. I’ve included the abstract below.

One of the trickier technical problems to address when moving to a continuous delivery development model is managing database schema changes. It’s much harder to roll back or roll forward database changes compared to software changes, since a database by definition has state. Typically, organisations address this problem by having database administrators (DBAs) manually apply changes so they can manually correct any problems, but this has the downside of being a bottleneck to deploying changes to production and also introduces human error as a factor.

A large part of continuous delivery involves the setup of a largely automated deployment pipeline that increases confidence and reduces risk by ensuring that software changes are deployed consistently to each environment (e.g. dev, test, prod).

To fit in with that model it’s important to automate database changes so that they are applied automatically and consistently to each environment thus increasing the likelihood of problems being found early and reducing the risk associated with database changes.

This report outlines an approach to managing database schema changes that is compatible with a continuous delivery development model, the reasons why the approach is important and some of the considerations that need to be made when taking this approach.

The approaches discussed in this document aren’t specific to continuous delivery and in fact should be considered regardless of your development model.
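
The whitepaper is about the approach rather than any particular tool, but as a concrete illustration of what an automated migration runner can look like, here is a minimal sketch using the open source DbUp library (the connection string and project structure are placeholders):

```csharp
using System.Reflection;
using DbUp;

class MigrationRunner
{
    static int Main()
    {
        // Placeholder - in a real pipeline this comes from the target
        // environment's configuration
        var connectionString = "Server=.;Database=MyApp;Trusted_Connection=True;";

        // Applies any not-yet-run SQL scripts embedded in this assembly,
        // in order, journalling each one so the same runner can be executed
        // consistently against dev, test and prod
        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();

        // Fail the deployment if a script fails so problems surface early
        return result.Successful ? 0 : 1;
    }
}
```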

Unit, integration, subcutaneous, UI, fast, slow, mocks, TDD, isolation and scams… What is this? I don’t even!

As outlined in the first post of my Automated Testing blog series, I’ve been on a journey of self-reflection and discovery about how best to write, structure and maintain automated tests.

The most confusing and profound realisations that I’ve had relate to how best to cover a codebase in tests and what type and speed those tests should be. The sorts of questions and considerations that come to mind about this are:

  • Should I be writing unit, subcutaneous, integration, etc. tests to cover a particular piece of code?
  • What is a unit test anyway? Everyone seems to have a different definition!
  • How do I get feedback as fast as possible – reducing feedback loops is incredibly important.
  • How much time/effort are we prepared to spend testing our software and what level of coverage do we need in return?
  • How do I keep my tests maintainable and how do I reduce the number of tests that break when I need to make a change to the codebase?
  • How do I make sure that my tests give me the maximum confidence that when the code is shipped to production it will work?
  • When should I be mocking the database, filesystem, etc.?
  • How do I ensure that my application is tested consistently?

In order to answer these questions and more I’ve watched a range of videos and read a number of blog posts from prominent people, spent time experimenting and reflecting on the techniques via the projects I work on (both professionally and with my Open Source Software work) and tried to draw my own conclusions.

There are some notable videos that I’ve come across that, in particular, have helped me with my learning and realisations so I’ve created a series of posts around them (and might add to it over time if I find other posts). I’ve tried to summarise the main points I found interesting from the material as well as injecting my own thoughts and experience where relevant.

There is a great talk by Gary Bernhardt called Boundaries. For completeness, it is worth looking at in relation to the topics discussed in the above articles. I don’t have much to say about this yet (I’m still getting my head around where it fits in) apart from the fact that code that maps input(s) to output(s) without side effects is obviously very easy to test, and I’ve found that where I have used immutable value objects in my domain model it has made testing easier.
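
To illustrate that point with a trivial (hypothetical) example – a pure operation on an immutable value object needs no mocks or setup at all, and the test is a straight input-to-output assertion:

```csharp
using NUnit.Framework;

// A trivial immutable value object from a hypothetical domain model
public sealed class Money
{
    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public decimal Amount { get; }
    public string Currency { get; }

    // No side effects: Add returns a new value instead of mutating state
    public Money Add(Money other)
    {
        return new Money(Amount + other.Amount, Currency);
    }
}

public class MoneyTests
{
    [Test]
    public void Adding_money_sums_the_amounts()
    {
        var result = new Money(10m, "AUD").Add(new Money(5m, "AUD"));

        Assert.AreEqual(15m, result.Amount);
    }
}
```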

Summary

I will summarise my current thoughts (this might change over time) by revisiting the questions I posed above:

  • Should I be writing unit, subcutaneous, integration, etc. tests to cover a particular piece of code?
    • Typical consultant answer: it depends. In general I’d say write the fastest possible test you can that gives you the minimum required confidence and bakes in the minimum amount of implementation details.
    • I’ve had a lot of luck covering line-of-business web apps with mostly subcutaneous tests against the MVC controllers (see the sketch after this list), with a smattering of unit tests to check conventions and test really complex logic. I typically see how far I can get without writing UI tests, but when I do write them I test high-value scenarios or complex UIs.
  • What is a unit test anyway? Everyone seems to have a different definition!
  • How do I get feedback as fast as possible – reducing feedback loops is incredibly important.
    • Follow Jimmy’s advice and focus on writing tests that are as fast as possible rather than worrying about whether a test is a unit test or an integration test.
    • Be pragmatic though – you might get adequate speed, but a higher level of confidence, by integrating your tests with the database for instance (this has worked well for me)
  • How much time/effort are we prepared to spend testing our software and what level of coverage do we need in return?
    • I think it depends on the application – the product owner, users and business in general will all have different tolerances for risk of something going wrong. Do the minimum amount that’s needed to get the amount of confidence that is required.
    • In general I try to follow the mantra of “challenge yourself to start simple then inspect and adapt” (thanks Jess for helping refine that). Start off with the simplest testing approach that will work and if you find you are spending too long writing tests or the tests don’t give you the right confidence then adjust from there.
  • How do I keep my tests maintainable and how do I reduce the number of tests that break when I need to make a change to the codebase?
    • Focus on removing implementation details from tests. Be comfortable testing multiple classes in a single test (use your production DI container!).
    • Structure the tests according to user behaviour – they are less likely to have implementation details and they form better documentation of the system.
  • How do I make sure that my tests give me the maximum confidence that when the code is shipped to production it will work?
    • Reduce the amount of mocking you use to the bare minimum – hopefully just things external to your application so that you are testing production-like code paths.
    • Subcutaneous tests are a very good middle ground between low-level implementation-focused unit tests and slow and brittle UI tests.
  • When should I be mocking the database, filesystem, etc.?
    • When you need the speed and are happy to forgo the lower confidence.
    • Also, if they are external to your application or not completely under your application’s control e.g. a database that is touched by multiple apps and your app doesn’t run migrations on it and control the schema.
  • How do I ensure that my application is tested consistently?
    • Come up with a testing strategy and stick with it. Adjust it over time as you learn new things though.
    • Don’t be afraid to use different styles of test as appropriate – e.g. the bulk of tests might be subcutaneous, but you might decide to write lower-level unit tests for complex logic.
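
To make the subcutaneous style above concrete, here is a rough sketch of such a test against an MVC controller (the controller, view model and container bootstrap are all hypothetical names, and the container stands in for whatever production DI container the app uses):

```csharp
using NUnit.Framework;
using System.Web.Mvc;

public class TodoControllerTests
{
    [Test]
    public void Creating_a_todo_redirects_to_the_list()
    {
        // Resolve the controller from the production DI container so the
        // test exercises production-like code paths (in my case this would
        // also hit a real test database rather than a mock)
        var container = ContainerConfig.CreateContainer(); // hypothetical bootstrap
        var controller = container.Resolve<TodoController>();

        var result = controller.Create(new CreateTodoViewModel { Title = "Buy milk" });

        // Assert on user-observable behaviour rather than implementation details
        Assert.IsInstanceOf<RedirectToRouteResult>(result);
    }
}
```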

In closing, I wanted to show a couple of quotes that I think are relevant:

Fellow Readifarian Kahne Raja recently said this on an internal Yammer discussion and I really identify with it:

“We should think about our test projects like we think about our solution projects. They involve complex design patterns and regular refactoring.”

Another Readifarian, Pawel Pabich, made the important point that:

“[The tests you write] depend[s] on the app you are writing. [A] CRUD Web app might require different tests than a calculator.”

I also like this quote from Kent Beck:

“I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence.”

Developing ASP.NET web applications with IIS

When you File -> New Project an ASP.NET application in Visual Studio and then hit F5, by default it will spin up IIS Express and navigate to the site for you.

IIS Express is pretty cool – it runs under your user account so there’s no need to mess around with elevated privileges, it has most of the power of IIS (think web.config) and it “just works” out of the box without extra configuration needed across all dev machines 🙂

For projects that I work on every day though, I really dislike using IIS Express as a development server for the following reasons:

  • It will randomly crash
  • When it crashes I have to F5 or Ctrl+F5 in Visual Studio to restart it – I can’t just go to the last URL it was deployed to (e.g. http://localhost:port/)
  • If your code has an uncaught exception then a crash dialog pops up in your taskbar in a way that isn’t obvious and requires you to click a button before the code continues running (this can be very confusing)
  • Setting up a custom domain is tricky, a tedious manual process, and it can’t run on port 80 side-by-side with proper IIS
    • Using a custom domain is often essential too – think sharing cookies between domains or integrating with third parties where you need to provide a URL other than localhost

Part of the reason IIS Express exists is because setting up IIS with a site is not a trivial process. However, when you do eventually get it set up I usually find it works great from then on:

  • It’s stable
  • The URL is always available – you don’t have to use Visual Studio at all
  • Uncaught exceptions behave as expected
  • Custom domains are easy to set up, both in the IIS Manager GUI and via a variety of command-line options

In order to reduce the pain involved with setting up IIS I do two things:

  • I modify my Visual Studio taskbar icon to always run as admin (necessary to open a project bound to IIS)
  • I add a Developer Setup script to the project that developers must run once when they first clone the repository, which sets everything up for them in a matter of seconds (hopefully giving the same Open Solution -> F5 and start developing experience)
    • I’ve added an example of such a script to a Gist – the script also includes setting up the hosts file and a SQL Express database
    • I can’t claim full credit for the script – it’s been a collaborative effort over a number of projects by all of the Readifarians I’ve worked with 😀

GitVersion TeamCity MetaRunner

I’ve blogged previously about using GitHubFlowVersion for versioning and how I created a TeamCity meta-runner for it.

A lot has happened since then in that space and that has been nicely summarised by my friend Jake Ginnivan. tl;dr GitHubFlowVersion has been merged with the GitFlowVersion project to form GitVersion.

This project is totally awesome and I highly recommend that you use it. In short:

GitVersion uses your git repository branching conventions to determine the current Semantic Version of your application. It supports GitFlow and the much simpler GitHubFlow.

I’ve gone ahead and developed a much more comprehensive TeamCity meta-runner for GitVersion and I’ve submitted it to the TeamCity meta-runner PowerPack. This meta-runner allows you to use GitVersion without needing to install any binaries on your build server or your source repository – it automatically downloads it from Chocolatey 🙂

Happy building!

Announcing repave.psm1

So after 18 months of not repaving my machine and occasionally (especially lately) having to deal with the machine filling up and slowing down, I’m finally at the point where it’s time to repave. I wanted to do it ages ago, but I avoided it because of how painful it is to do.

This time around I’ve decided to bite the bullet and finally do something I’ve been meaning to do all along – create a script to make the process much easier and quicker, as well as to document what programs and setup I want on my machine.

I’ve been interested in Chocolatey and Boxstarter for ages to do this very thing. In this instance I didn’t bother using Boxstarter since I didn’t have any restarts in there, but I encourage people to look into it particularly if doing VM install scripts – it’s AMAZING.

I started writing this crazy PowerShell script to automate all the installs and settings I wanted and eventually I refactored it until it became like this. I think it’s really readable and maintainable and acts really well as documentation for myself.

While developing it I initially had a bunch of cinst calls, but the problem with that is that each call incurs a 2s startup cost for some reason – this made developing it painful. In order to develop the script incrementally (I was doing it inside a VM so I could trash it and start from the beginning again) I wanted three things:

  • Speed (if something is already installed I want it to skip it instantly, not wait for cinst to spin up for 2s)
  • Idempotency (I want to run and re-run the script again and again and again after making small changes to see their effect)
  • Fail fast (if something is wrong I want it to just fail and print an error so I can see what happened – I don’t want it to continue trying to install other things that might be dependent on the thing that failed)

I managed to achieve all of that, and the other advantage I see in this approach is that it makes it really easy for me to reuse the script as an update mechanism if I decide to change things between repaves. This is awesome and I think makes the script way more useful.

Long story short: I’ve abstracted all of the main functionality into a PowerShell module and open-sourced it as repave.psm1 on GitHub. Check it out and feel free to fork it to create your own scripts, and submit back a pull request with any fixes or additions.

It’s a bit rough around the edges since I’ve knocked it up in a hurry this weekend, but I did put in some initial documentation to describe all the functionality and there are two example scripts in there that use it.

Enjoy!