Automating Azure Resource Manager

I’ve recently been (finally) getting up to speed with Azure Resource Manager (ARM). It’s the management layer that drives the new Azure Portal and also powers features like Resource Groups and Role-Based Access Control.

You can interact with ARM in a number of ways: via the Portal, via PowerShell, or by calling the REST API directly (e.g. with ARMClient).

To authenticate to the ARM API you need to use an Azure AD credential. This is all well and good if you are logged into the Portal, or running a script on your computer (where a web browser login prompt to Azure AD will pop up), but when automating your API calls that’s not available.

Luckily there is a post by David Ebbo that describes how to generate a Service Principal (the Azure AD equivalent of an Active Directory service account) attached to an Azure AD application.

The only problem with this post is that there are a few manual steps and it’s quite fiddly to do (by David’s own admission). I’ve developed a PowerShell module that you can use to idempotently create a Service Principal against either an entire Azure subscription or against a specific Resource Group that you can then use to automate your ARM code.

I’ve published the code to GitHub.

In order to use it you need to:

  1. Ensure you have the Windows Azure PowerShell commandlets installed
  2. Download the Set-ARMServicePrincipalCredential.psm1 file from my GitHub repository
  3. Download the Azure Key Vault PowerShell commandlets and put the AADGraph.ps1 file next to the file from GitHub
  4. Execute the Set-ARMServicePrincipalCredential command as per the examples on GitHub
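
For illustration, the end result is a one-liner along the lines of the sketch below – the parameter names here are hypothetical, so follow the actual examples in the GitHub readme:

    # Hypothetical call - parameter names are illustrative, not the module's actual interface
    Import-Module .\Set-ARMServicePrincipalCredential.psm1
    Set-ARMServicePrincipalCredential -SubscriptionName "MySubscription" -ResourceGroupName "MyResourceGroup"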

This will pop up a web browser prompt to authenticate (this will happen twice since I’m using two separate libraries – hopefully this will get resolved if the Azure AD commandlets end up becoming integrated with the Azure commandlets) and will then give you the following information:

  • Tenant ID
  • Client ID
  • Password

From there you have all the information you need to authenticate your automated script with ARM.

If using PowerShell then this will look like:

    # Turn the plain-text password into a PSCredential for the Service Principal (the Client ID acts as the username)
    $securePassword = ConvertTo-SecureString $Password -AsPlainText -Force
    $servicePrincipalCredentials = New-Object System.Management.Automation.PSCredential ($ClientId, $securePassword)
    # Authenticate against the Azure AD tenant as the Service Principal rather than as an interactive user
    Add-AzureAccount -ServicePrincipal -Tenant $TenantId -Credential $servicePrincipalCredentials | Out-Null

If using ARMClient then this will look like:

    armclient spn $TenantId $ClientId $Password | Out-Null

One last note: make sure you store the password securely when automating the script, e.g. TeamCity password, Bamboo password or Octopus sensitive variable.

Deployment and Development using PhoneGap Build for a Cordova/PhoneGap app

I recently worked on a project to build a mobile app in iOS and Android. The team decided to use Cordova and Ionic Framework to build it. I highly recommend Ionic – it’s an amazingly productive and feature-rich framework with a really responsive development team! It enabled us to use AngularJS (and the application structure and maintenance advantages that affords) and really sped up the development time in a number of ways. We were also able to get an almost-native performance experience for the application using features like collection repeat (virtualised list) as well as the in-built performance tuning the Ionic team have performed.

When it was time to set up an automated build pipeline we settled on using PhoneGap Build and its API which we then called from OctopusDeploy as part of a NuGet package we created.

PhoneGap Build is Adobe’s cloud build service that can build iOS, Android, Windows Phone etc. apps for you based on a zip file with your Cordova/PhoneGap files.

We decided on PhoneGap Build because:

  • It has been around a while (i.e. it’s relatively mature)
  • It’s backed by a big name (Adobe)
  • It has a decent API
  • It meant we didn’t have to set up a Mac server, which would have been a pain
  • It’s free for a single project 🙂

This post covers how we set up our automated deployment pipeline for our native apps using OctopusDeploy and PhoneGap Build. I have also published a companion post that covers everything you need to know about PhoneGap Build (read: what I wish I knew before I started using PhoneGap Build because it’s darn confusing!).

Deployment pipeline

This is the deployment pipeline we ended up with:

  1. Developer commits a change, submits a pull request and that pull request gets merged
  2. Our TeamCity continuous integration server compiles the server-side code, runs server-side code tests and runs client-side (app) tests
  3. If all is well then a NuGet package is created for the server-side code and a NuGet package is created for the app – both are pushed to OctopusDeploy and the server-side code is deployed to a staging environment (including a web server version of the app)
    • Native functionality wouldn’t work, but you could access and use the bulk of the app to test it out
    • To test native functionality developers could easily manually provision a staging version of the app if they plugged a phone into their computer via a USB cable and executed a particular script
  4. The product owner logs into OctopusDeploy and promotes the app to production from staging whereupon OctopusDeploy would:
    • Deploy the server-side code to the production environment
    • Use the PhoneGap Build API to build an iOS version with an over-the-air compatible provisioning profile
    • Download the over-the-air .ipa
      • Upload this .ipa to Azure blob storage
      • Beta testers could then download the latest version by visiting an HTML page linking to the package (see below for details)
    • Build the production Android and iOS versions of the app using the PhoneGap Build API
  5. The product owner then logs on to the PhoneGap Build site, downloads the .ipa / .apk files and uploads them to the respective app stores

End result

I personally wasn’t completely satisfied with this because I didn’t like the manual step at the end, however, apparently app store deployments are stuck in the 90s so unless you want to screen-scrape the respective app stores you have to manually log in and upload the deployment file. Regardless, the end result was good:

  • Production deployments were the same binaries/content that were tested on the CI server and that had been successfully deployed and manually tested in the staging environment
  • We could go from developer commit to production in a few minutes (ignoring app store approval/roll-out times; remember – stuck in the 90s)
  • The product owner had complete control to deploy any version of the app he wanted to and it was easy for him to do so

How we did it

I have published the NodeJS script we used to automate the PhoneGap Build API to a Gist. It contains a very reusable wrapper around a PhoneGap Build API NodeJS library that makes it dead easy to write your own workflow in a very succinct manner.

To generate the zip file that the NodeJS script used, we simply grabbed the www directory (including config.xml) from our repository and zipped it up. That zip file and the NodeJS scripts went into the NuGet package that OctopusDeploy deployed when the app was promoted to production (we made this happen on a Tentacle residing on the OctopusDeploy server). I have published how we generated that NuGet package to a Gist as well (including how to set up over-the-air).
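
If you want to replicate the zipping step yourself, a minimal PowerShell sketch looks something like the following (this isn’t the exact script we used and the path is an assumption about your repository layout):

    # Zip up the Cordova www directory (config.xml included) ready for upload to PhoneGap Build
    # Requires PowerShell 5+ for Compress-Archive; adjust the path to suit your repository
    Compress-Archive -Path .\src\App\www -DestinationPath .\app.zip -Force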

Further notes

Some notes about our approach:

  • Because we were able to make our backend APIs deploy to production at the same time as generating our production app packages, we were assured that the correct version of the backend was always available when the apps were deployed to the respective app stores.
  • It was important for us to make sure that our backend API didn’t have any breaking changes otherwise existing app deployments (and any lingering ones where people hadn’t updated) would break – usual API management/versioning practices apply.
  • Whenever we wanted to add new Beta testers the provisioning profile for over-the-air needed to be re-uploaded to PhoneGap Build
    • Unfortunately, PhoneGap Build requires you to upload this alongside the private key – thus requiring someone with access to PhoneGap Build, the private key and the iOS developer account – it’s actually quite a convoluted process so I’ve also published the instructions we provided to the person responsible for managing the Beta testing group to a Gist to illustrate
  • We stored the passwords for the Android Keystore and iOS private keys as sensitive variables in OctopusDeploy so nobody needed to ever see them after they had been generated and stored in a password safe
    • This meant the only secret that was ever exposed on a regular basis was the password-protected over-the-air private key whenever a Beta tester was added

TestFlight

While we didn’t use it, it’s worth pointing out TestFlight. It’s a free service that makes it easier to keep track of a list of testers and their UUIDs, inform them of updates, and automatically create and host the necessary files for over-the-air distribution. It has an API that you can upload your .ipa to, so it should be simple enough to take the scripts I published above and upload the .ipa to TestFlight rather than Azure Blob Storage if you are interested in doing that.

Final Thoughts

PhoneGap Build is a useful and powerful service. As outlined in my companion post, it’s very different to using Cordova/PhoneGap locally and this, combined with confusing documentation, resulted in me growing a strong dislike of PhoneGap Build while using it.

In writing my companion post however, I’ve been forced to understand it a lot more and I now think that as long as you have weighed up the pros and cons and still feel it’s the right fit then it’s a useful service and shouldn’t be discounted outright.

Developing ASP.NET web applications with IIS

When you File -> New Project an ASP.NET application in Visual Studio and then hit F5, by default it will spin up IIS Express and navigate to the site for you.

IIS Express is pretty cool – it runs under your user account so no need to mess around with elevated privileges, it has most of the power of IIS (think web.config) and it “just works” out of the box without extra configuration needed across all dev machines 🙂

For projects that I work on every day though, I really dislike using IIS Express as a development server for the following reasons:

  • It will randomly crash
  • When it crashes I have to F5 or Ctrl+F5 in Visual Studio to restart it – I can’t just go to the last url it was deployed to (e.g. http://localhost:port/)
  • If your code has an uncaught exception then a crash dialog pops up in your taskbar in a way that isn’t obvious and requires you to click a button before the code continues running (this can be very confusing)
  • Setting up a custom domain is tricky and a tedious manual process, and IIS Express can’t run on port 80 side-by-side with proper IIS
    • Using a custom domain is often essential too – think sharing cookies between domains or performing something like integrating with third parties where you need to provide a URL other than localhost

Part of the reason IIS Express exists is because setting up IIS with a site is not a trivial process. However, when you do eventually get it set up I usually find it works great from then on:

  • It’s stable
  • The URL is always available – you don’t have to use Visual Studio at all
  • Uncaught exceptions behave as expected
  • Custom domains are easy in both the IIS Manager GUI and via a variety of commandline options

In order to reduce the pain involved with setting up IIS I do two things:

  • I modify my Visual Studio taskbar icon to always run as admin (necessary to open a project bound to IIS)
  • I add a Developer Setup script to the project that developers must run once when they first clone the repository, and that sets everything up for them in a matter of seconds (hopefully giving the same Open Solution -> F5 and start developing experience)
    • I’ve added an example of such a script to a Gist – the script also includes setting up the hosts file and a SQL Express database (a cut-down sketch of the IIS and hosts file portion is shown after this list)
    • I can’t claim full credit for the script – it’s been a collaborative effort over a number of projects by all of the Readifarians I’ve worked with 😀
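
As a rough guide, the IIS and hosts file portion of such a script boils down to something like this (the site name, domain and path are placeholders – see the Gist for the full version):

    # Requires an elevated prompt and the IIS PowerShell module
    Import-Module WebAdministration

    $siteName = "MyApp"
    $domain = "myapp.local.dev"
    $sitePath = "$PSScriptRoot\MyApp.Web"

    # Create an app pool and a site bound to the custom domain on port 80 (only if they don't already exist)
    if (-not (Test-Path "IIS:\AppPools\$siteName")) { New-WebAppPool $siteName | Out-Null }
    if (-not (Test-Path "IIS:\Sites\$siteName")) {
        New-Website -Name $siteName -Port 80 -HostHeader $domain -PhysicalPath $sitePath -ApplicationPool $siteName | Out-Null
    }

    # Point the custom domain at localhost via the hosts file (skip if it's already there)
    $hostsFile = "$env:windir\System32\drivers\etc\hosts"
    if (-not (Select-String -Path $hostsFile -Pattern $domain -Quiet)) {
        Add-Content -Path $hostsFile -Value "`r`n127.0.0.1 $domain"
    }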

Scripted/Automated installation script to set up Cordova/PhoneGap and Android on Windows

I recently worked on a Cordova project and one of the things we found is that it’s an absolute pain to set up a development environment since there is a whole bunch of tools that need to be downloaded and installed and configured in specific ways.

We ended up creating a page in our project’s OneNote notebook with developer setup instructions, but even though we were using Chocolatey it was still a tedious process with numerous console restarts to refresh environment variables (that had to be manually set).

In the process of writing a post on Cordova I wanted to check something and realised I had repaved my machine since the last time I installed the Android SDK / Cordova etc.

I consulted the OneNote page we had created and looked in despair at the instructions. What a PITA! So what did I do?

I spun up a Windows Azure VM and stumbled through creating a PowerShell script to automate the setup. Then I spun up a second VM to check that the script worked :). Then I deleted both of them – probably cost a few cents and the servers had a really fast download speed so the installations were really quick. God I love the cloud 😀

I’ve uploaded it to a Gist. If you are setting up a PhoneGap/Cordova & Android development environment then I’m sure it will be useful to you.
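
To give a flavour of what the script does, the guts of it are a series of steps along these lines (the Chocolatey package IDs and the SDK path are assumptions – check the Gist for the real thing):

    # Install the tool chain via Chocolatey (package IDs are illustrative)
    cinst nodejs -y
    cinst jdk8 -y
    cinst ant -y
    cinst android-sdk -y

    # Point builds at the Android SDK (a new console is needed to pick up the change)
    [Environment]::SetEnvironmentVariable("ANDROID_HOME", "C:\Android\android-sdk", "Machine")

    # Install the Cordova CLI globally via npm
    npm install -g cordova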

Enjoy!

GitVersion TeamCity MetaRunner

I’ve blogged previously about using GitHubFlowVersion for versioning and how I created a TeamCity meta-runner for it.

A lot has happened since then in that space and that has been nicely summarised by my friend Jake Ginnivan. tl;dr GitHubFlowVersion has been merged with the GitFlowVersion project to form GitVersion.

This project is totally awesome and I highly recommend that you use it. In short:

GitVersion uses your git repository branching conventions to determine the current Semantic Version of your application. It supports GitFlow and the much simpler GitHubFlow.

I’ve gone ahead and developed a much more comprehensive TeamCity meta-runner for GitVersion and I’ve submitted it to the TeamCity meta-runner PowerPack. This meta-runner allows you to use GitVersion without needing to install any binaries on your build server or your source repository – it automatically downloads it from Chocolatey 🙂
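
Under the covers the meta-runner essentially does the following (the package ID and command-line switch below are based on the current GitVersion documentation, so treat them as assumptions):

    # Grab GitVersion from Chocolatey on the build agent, then let it emit TeamCity service
    # messages (e.g. setting the build number) based on the state of the git repository
    cinst gitversion.portable -y
    GitVersion /output buildserver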

Happy building!

Announcing repave.psm1

So after 18 months of not repaving my machine and occasionally (especially lately) having to deal with the machine filling up and slowing down I’m finally at the point where it’s time to repave. I wanted to do it ages ago, but I avoided it because of how painful it is to do.

This time around I’ve decided to bite the bullet and finally do something I’ve been meaning to do all along – create a script to make it much easier / quicker as well as form documentation about what programs / what setup I want for my machine.

I’ve been interested in Chocolatey and Boxstarter for ages to do this very thing. In this instance I didn’t bother using Boxstarter since I didn’t have any restarts in there, but I encourage people to look into it particularly if doing VM install scripts – it’s AMAZING.

I started writing this crazy PowerShell script to automate all the installs and settings I wanted and eventually I refactored it until it became like this. I think it’s really readable and maintainable and acts really well as documentation for myself.

While developing it I initially had a bunch of cinst calls, but the problem with that is each call incurs a 2s startup cost for some reason – this made developing it painful. In order to develop the script incrementally (I was doing it inside of a VM so I could trash it and start from the beginning again) I wanted three things:

  • Speed (if something is already installed I want it to skip it instantly, not wait for cinst to spin up for 2s)
  • Idempotency (I want to run and re-run the script again and again and again after making small changes to see their effect)
  • Fail fast (if something is wrong I want it to just fail and print an error so I can see what happened – I don’t want it to continue trying to install other things that might be dependent on the thing that failed)

I managed to achieve all of that and the other advantage I see in this approach is that it makes it really easy for me to reuse the script as an update mechanism if I decide to change things between re-paves. This is awesome and I think makes the script way more useful.
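
The general shape of what that ends up looking like is captured by this little sketch (the function name, package and path are illustrative rather than the actual repave.psm1 API):

    function Install-IfMissing($packageId, $installedMarkerPath) {
        # Speed + idempotency: skip instantly if the expected install artifact already exists
        if (Test-Path $installedMarkerPath) {
            Write-Host "$packageId already installed; skipping"
            return
        }
        # Otherwise install via Chocolatey and fail fast if anything goes wrong
        cinst $packageId -y
        if ($LASTEXITCODE -ne 0) {
            throw "Installation of $packageId failed"
        }
    }

    Install-IfMissing "notepadplusplus" "$env:ProgramFiles\Notepad++\notepad++.exe"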

Long story short: I’ve abstracted all of the main functionality into a PowerShell module and open-sourced it as repave.psm1 on GitHub. Check it out and feel free to fork it to create your own scripts and submit back a pull request with any fixes or additions.

It’s a bit rough around the edges since I’ve knocked it up in a hurry this weekend, but I did put in some initial documentation to describe all the functionality and there are two example scripts in there that use it.

Enjoy!

Announcing AzureWebFarm.OctopusDeploy

I’m very proud to announce the public 1.0 release of a new project that I’ve been working on with Matt Davies over the last couple of weeks – AzureWebFarm.OctopusDeploy.

This project allows you to easily create an infinitely-scalable farm of IIS 8 / Windows Server 2012 web servers using Windows Azure Web Roles that are deployed to by an OctopusDeploy server.

If you haven’t used OctopusDeploy before then head over to the homepage and check it out now because it’s AMAZING.

The installation instructions are on the AzureWebFarm.OctopusDeploy homepage (including a screencast!), but in short it amounts to:

  1. Configure a standard Web Role project in Visual Studio
  2. Install-Package AzureWebFarm.OctopusDeploy
  3. Configure 4 cloud service variables – OctopusServer, OctopusApiKey, TentacleEnvironment and TentacleRole
  4. Deploy to Azure and watch the magic happen!

We also have a really cool logo that a friend of ours, Aoife Doyle, drew and graciously let us use!

It’s been a hell of a lot of fun developing this project as it’s not only been very technically challenging, but the end result is just plain cool! In particular the install.ps1 file for the NuGet package was very fun to write and results in a seamless installation experience!

Also, a big thanks to Nicholas Blumhardt, who helped me with a few difficulties I had with Octopus and implemented a new feature I needed really quickly!

Test Harness for NuGet install PowerShell scripts (init.ps1, install.ps1, uninstall.ps1)

One thing that I find frustrating when creating NuGet packages is the debug experience when it comes to creating the PowerShell install scripts (init.ps1, install.ps1, uninstall.ps1).

In order to make it easier to do the debugging I’ve created a test harness Visual Studio solution that allows you to make changes to a file, compile the solution, run a single command in the package manager and then have the package uninstall and then install again. That way you can change a line of code, do a few keystrokes and then see the result straight away.
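
As a refresher, NuGet calls each of these scripts with the same four arguments, so the scripts you end up debugging in the harness typically start like this:

    # Standard parameter block NuGet passes to init.ps1 / install.ps1 / uninstall.ps1
    # ($project is the Visual Studio EnvDTE project for install/uninstall and null for init.ps1)
    param($installPath, $toolsPath, $package, $project)

    Write-Host "Running against $($package.Id) with tools at $toolsPath"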

To see the code you can head to the GitHub repository. The basic instructions are on the readme:

  1. [Once off] Checkout the code
  2. [Once off] Create a NuGet source directory in the checkout directory
  3. Repeat in a loop:
    1. Write the code (the structure of the solution is the structure of your nuget package, so put the appropriately named .ps1 scripts in the tools folder)
    2. Compile the solution <F6> or <Ctrl+Shift+B> (this creates a NuGet package in the root of the solution with the name Package and version {yymm}.{ddHH}.{mmss}.nupkg – this means that the package version will increase with time so if you install from that directory it will always install the latest build)
    3. Switch to the Package Manager Console <Ctrl+P, Ctrl+M>
    4. [First time] Uninstall-Package Package; Install-Package Package <enter> / [Subsequent times] <Up arrow> <enter>
  4. When done simply copy the relevant files out and reset master to get a clean slate

Other handy hints

Maintainable, large-scale continuous delivery with TeamCity Blog Series

I’ve been an ardent supporter of continuous delivery since I first learnt about it from a presentation by Martin Fowler and Jez Humble. At the time I loved how it encouraged less risky deployments and changed the decision of when/what to deploy from being a technical decision to being a business decision.

I personally think that embracing continuous delivery is an important intermediate step on the journey towards moving from technical agility to strategic agility.

This post was first written in August 2012, but has since been largely rewritten in February 2014 to keep it up to date.

This post forms the introduction to a blog series that is jointly written by myself and Matt Davies.

  1. Intro
  2. TeamCity deployment pipeline
  3. Deploying Web Applications
    • MsDeploy (onprem and Azure Web Sites)
    • OctopusDeploy (nuget)
    • Git push (Windows Azure Web Sites)
  4. Deploying Windows Services
    • MsDeploy
    • OctopusDeploy
    • Git push (Windows Azure Web Sites Web Jobs)
  5. Deploying Windows Azure Cloud Services
    • OctopusDeploy
    • PowerShell
  6. How to choose your deployment technology

Continuous Delivery with TeamCity

One of the key concepts in continuous delivery is the creation of a deployment pipeline that provides a clear set of sequential steps or stages to move software from a developer’s commits to being deployed in production.

We have been using TeamCity to develop deployment pipelines that facilitate continuous delivery for the last few years. While TeamCity is principally a continuous integration tool, it is more than adequate for creating a deployment pipeline and comes with the advantage that you are then using a single tool for build and deployment.

We have also tried combining TeamCity with other tools that are more dedicated to deployments, such as OctopusDeploy. Those tools provide better deployment-focussed features such as visualisation of the versions of your application deployed to each environment. This approach does create the disadvantage of needing to rely on configuring and using two separate tools rather than just one, but can still be useful depending on your situation.

There are a number of articles that you will quickly come across in this space that give some really great advice on how to set up a continuous delivery pipeline with TeamCity and complement our blog series.

Purpose of this blog series

The purpose of this series is three-fold:

  • Document anything that Matt Davies and I have found from implementing continuous delivery pipelines using TeamCity that differs from the articles above;
  • Outline the techniques we have developed to specifically set up the TeamCity installation in a way that is maintainable for a large number of projects;
  • Cover the deployment of Windows Services and Azure Roles as well as IIS websites.

Our intention is to build on the work of our predecessors rather than provide a stand-alone series on how to set up continuous delivery pipelines with TeamCity.

Other options

TeamCity is by no means the only choice when it comes to creating a deployment pipeline, so feel free to explore some of the other options out there.

… A year later

It seems rather funny that it’s been exactly one year since I last did a post on my blog. Usual story of course – I started out with good intentions to regularly blog about all the cool stuff I discover along my journey, but time got the better of me. I guess they were the same intentions that I originally had to skin this blog :S.

One thing I have learnt over the last year is that prioritisation is one of the most important things you can do and abide by, both personally and professionally. No matter what, there will never be enough time to do all the things that you need and want to do, so you just have to prioritise and get done all you can – what more can you ask of yourself? With that in mind I guess I haven’t prioritised my blog 😛

I really respect people that manage to keep up with regular blog posts as well as full-time work and other activities. I find that writing blog posts is really time consuming because the pedantic perfectionist in me strives to get every relevant little detail in there and ensure it’s all formatted correctly. Combining that with the insane number of things I seem to find myself doing, and trying to get some relaxation time in somewhere, isn’t terribly conducive to blogging. It’s a pity really because I enjoy writing posts and hopefully I contribute some useful information here and there.

So, that aside, what have I been doing for the last year? If you are interested feel free to peruse the list below, which has some of what I’ve been doing and is written in no particular order; it’s really just a brain dump ^^. There are a few posts that I have been intending on writing along the way with particularly interesting (to me at least) topics so I’ll try and write some posts over the next few days 🙂

  • Worked with our Project Management Office at work to come up with a way to use PRINCE 2 to provide high-level project management to our Agile projects without impacting on the daily work that the teams perform under Scrum. Despite my early scepticism about PRINCE 2 it’s actually a really impressive and flexible project management framework and has worked well.
  • Learnt PowerShell – it’s amazing!
  • Wrote some interesting / powerful NuGet packages (not public I’m afraid) using PowerShell install scripts
  • Attended a really great conference
  • Discovered and started living and breathing (and evangelising) continuous delivery and dev ops
  • Started thinking about the concept of continuous design as presented by Mary Poppendieck at Yow
  • Created a continuous delivery pipeline for a side-project with a final prod deployment to Windows Azure controlled by the product owner at the click of a button with a 30-45s deployment time!
  • Started learning about the value of Lean thinking, in particular with operational teams
  • Started evangelising lean thinking to management and other teams at work (both software and non-software)
  • Started using Trello to organise pretty much everything (for my team at work, for myself personally, and for various projects I’m working on in and out of work) – it’s AMAZING.
  • Delivered a number of interesting / technically challenging projects
  • Became a manager
  • Assisted my team to embark on the biggest project we’ve done to date
  • Joined a start-up company based in Melbourne in my spare time
  • Joined LinkedIn (lol; I guess it had to finally happen)
  • Gave a number of presentations
  • Became somewhat proficient in MSBuild (*shudders*) and XDT
  • Facilitated countless retrospectives including a few virtual retrospectives (ahh Trello, what would I do without you)
  • Consolidated my love for pretty much everything JetBrains produce for .NET (in particular TeamCity 7 and ReSharper 6 are insanely good; I’ll forgive them for dotCover)
  • Met Martin Fowler and Mary and Tom Poppendieck
  • Participated in the global day of code retreat and then ran one for my team (along with a couple of Fedex days)
  • Got really frustrated with 2GB of RAM on my 3 year old computer at home after I started doing serious development on it (with the start-up) and upgraded to 6GB (soooo much better, thanks Evan!)
  • Participated on a couple of panels for my local Agile meetup group
  • Got an iPhone 4S 🙂 (my 3GS was heavily on the blink :S)
  • Took over as chairman of the young professionals committee for the local branch of the Institution of Engineering and Technology
  • Deepened my experience with Microsoft Azure and thoroughly enjoyed all the enhancements they have made – they have come a long way since I first started in 2010!

Of course there is heaps more, but this will do for now.