Deployment and Development using PhoneGap Build for a Cordova/PhoneGap app

I recently worked on a project to build a mobile app in iOS and Android. The team decided to use Cordova and Ionic Framework to build it. I highly recommend Ionic – it’s an amazingly productive and feature-rich framework with a really responsive development team! It enabled us to use AngularJS (and the application structure and maintenance advantages that affords) and really sped up the development time in a number of ways. We were also able to get an almost-native performance experience for the application using features like collection repeat (virtualised list) as well as the in-built performance tuning the Ionic team have performed.

When it was time to set up an automated build pipeline we settled on using PhoneGap Build and its API, which we then called from OctopusDeploy as part of a NuGet package we created.

PhoneGap Build is Adobe’s cloud build service that can build iOS, Android, Windows Phone etc. apps for you based on a zip file with your Cordova/PhoneGap files.

We decided on PhoneGap Build because:

  • It has been around a while (i.e. it’s relatively mature)
  • It’s backed by a big name (Adobe)
  • It has a decent API
  • It meant we didn’t have to set up a Mac server, which would have been a pain
  • It’s free for a single project :)

This post covers how we set up our automated deployment pipeline for our native apps using OctopusDeploy and PhoneGap Build. I have also published a companion post that covers everything you need to know about PhoneGap Build (read: what I wish I knew before I started using PhoneGap Build because it’s darn confusing!).

Deployment pipeline

This is the deployment pipeline we ended up with:

  1. Developer commits a change, submits a pull request and that pull request gets merged
  2. Our TeamCity continuous integration server compiles the server-side code, runs server-side code tests and runs client-side (app) tests
  3. If all is well then a NuGet package is created for the server-side code and a NuGet package is created for the app – both are pushed to OctopusDeploy and the server-side code is deployed to a staging environment (including a web server version of the app)
    • Native functionality wouldn’t work, but you could access and use the bulk of the app to test it out
    • To test native functionality developers could easily manually provision a staging version of the app if they plugged a phone into their computer via a USB cable and executed a particular script
  4. The product owner logs into OctopusDeploy and promotes the app to production from staging whereupon OctopusDeploy would:
    • Deploy the server-side code to the production environment
    • Use the PhoneGap Build API to build an iOS version with an over-the-air compatible provisioning profile
    • Download the over-the-air .ipa
      • Upload this .ipa to Azure blob storage (a rough sketch of this step follows the list)
      • Beta testers could then download the latest version by visiting an HTML page linking to the package (see below for details)
    • Build the production Android and iOS versions of the app using the PhoneGap Build API
  5. The product owner then logs on to the PhoneGap Build site, downloads the .ipa / .apk files and uploads them to the respective app stores
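
For illustration, here's a rough sketch of the "upload the .ipa to Azure blob storage" step using the Azure Storage .NET SDK – our actual pipeline did this from the NodeJS scripts described below, and the connection string, container name and file path here are made-up placeholders:

    // Rough sketch only: upload the over-the-air .ipa to a blob container so testers can download it.
    // Assumes the Microsoft.WindowsAzure.Storage NuGet package; names/paths are placeholders.
    using System.IO;
    using Microsoft.WindowsAzure.Storage;

    class OtaUploader
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("{STORAGE_CONNECTION_STRING}");
            var container = account.CreateCloudBlobClient().GetContainerReference("ota");
            container.CreateIfNotExists();

            var blob = container.GetBlockBlobReference("MyApp.ipa");
            using (var stream = File.OpenRead(@"build\MyApp.ipa"))
            {
                blob.UploadFromStream(stream);
            }
        }
    }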

End result

I personally wasn’t completely satisfied with this because I didn’t like the manual step at the end, however, apparently app store deployments are stuck in the 90s so unless you want to screen-scrape the respective app stores you have to manually log in and upload the deployment file. Regardless, the end result was good:

  • Production deployments were the same binaries/content that were tested on the CI server and that had been successfully deployed and manually tested in the staging environment
  • We could go from developer commit to production in a few minutes (ignoring app store approval/roll-out times; remember – stuck in the 90s)
  • The product owner had complete control to deploy any version of the app he wanted to and it was easy for him to do so

How we did it

I have published the NodeJS script we used to automate the PhoneGap Build API to a Gist. It contains a very reusable wrapper around a PhoneGap Build API NodeJS library that makes it dead easy to write your own workflow in a very succinct manner.

To generate the zip file the NodeJS script used we simply grabbed the www directory (including config.xml) from our repository and zipped it up. That zip file and the NodeJS scripts went into the NuGet package that OctopusDeploy deployed when the app was promoted to production (we made this happen on a Tentacle residing on the OctopusDeploy server). I have published how we generated that NuGet package to a Gist as well (including how to set up over-the-air).
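
The zip step itself is tiny – here's a minimal sketch (the paths are made up; our build did the equivalent inside the packaging scripts linked above). The resulting zip just contains the contents of www, with config.xml at its root:

    // Minimal sketch of producing the zip that gets sent to PhoneGap Build.
    // Requires a reference to System.IO.Compression.FileSystem (.NET 4.5+); paths are placeholders.
    using System.IO;
    using System.IO.Compression;

    class CreateAppZip
    {
        static void Main()
        {
            const string wwwDirectory = @"src\www";      // includes config.xml
            const string zipPath = @"artifacts\app.zip";

            Directory.CreateDirectory(Path.GetDirectoryName(zipPath));
            if (File.Exists(zipPath))
                File.Delete(zipPath);

            ZipFile.CreateFromDirectory(wwwDirectory, zipPath);
        }
    }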

Further notes

Some notes about our approach:

  • Because we were able to make our backend APIs deploy to production at the same time as generating our production app packages, we were assured that the correct version of the backend was always available when the apps were deployed to the respective app stores.
  • It was important for us to make sure that our backend API didn’t have any breaking changes, otherwise existing app deployments (and any lingering ones where people hadn’t updated) would break – usual API management/versioning practices apply.
  • Whenever we wanted to add new Beta testers the provisioning profile for over-the-air needed to be re-uploaded to PhoneGap Build
    • Unfortunately, PhoneGap Build requires you upload this alongside the private key – thus requiring someone with access to PhoneGap Build, the private key and the iOS developer account – it’s actually quite a convoluted process so I’ve also published the instructions we provided to the person responsible for managing the Beta testing group to a Gist to illustrate
  • We stored the passwords for the Android Keystore and iOS private keys as sensitive variables in OctopusDeploy so nobody needed to ever see them after they had been generated and stored in a password safe
    • This meant the only secret that was ever exposed on a regular basis was the password-protected over-the-air private key whenever a Beta tester was added

TestFlight

While we didn’t use it, it’s worth pointing out TestFlight. It’s a free service that makes it easier to keep track of a list of testers and their device UDIDs, inform them of updates and automatically create and host the necessary files for over-the-air installs. It has an API that you can upload your .ipa to, so it should be simple enough to take the scripts I published above and upload the .ipa to TestFlight rather than Azure Blob Storage if you are interested in doing that.

Final Thoughts

PhoneGap Build is a useful and powerful service. As outlined in my companion post, it’s very different from using Cordova/PhoneGap locally and this, combined with confusing documentation, resulted in me growing a strong dislike of PhoneGap Build while using it.

In writing my companion post however, I’ve been forced to understand it a lot more and I now think that as long as you have weighed up the pros and cons and still feel it’s the right fit then it’s a useful service and shouldn’t be discounted outright.

Developing ASP.NET web applications with IIS

When you File -> New Project an ASP.NET application in Visual Studio and then hit F5, by default it will spin up IIS Express and navigate to the site for you.

IIS Express is pretty cool – it runs under your user account so no need to mess around with elevated privileges, it has most of the power of IIS (think web.config) and it “just works” out of the box without extra configuration needed across all dev machines :)

For projects that I work on every day though, I really dislike using IIS Express as a development server for the following reasons:

  • It will randomly crash
  • When it crashes I have to F5 or Ctrl+F5 in Visual Studio to restart it – I can’t just go to the last URL it was deployed to (e.g. http://localhost:port/)
  • If your code has an uncaught exception then a crash dialog pops up in your taskbar in a way that isn’t obvious and requires you to click a button before the code continues running (this can be very confusing)
  • Setting up a custom domain is tricky – it’s a tedious manual process and IIS Express can’t run on port 80 side-by-side with full IIS
    • Using a custom domain is often essential too – think sharing cookies between domains or integrating with third parties where you need to provide a URL other than localhost

Part of the reason IIS Express exists is because setting up IIS with a site is not a trivial process. However, when you do eventually get it set up I usually find it works great from then on:

  • It’s stable
  • The URL is always available – you don’t have to use Visual Studio at all
  • Uncaught exceptions behave as expected
  • Custom domains are easy in both the IIS Manager GUI and via a variety of command-line options

In order to reduce the pain involved with setting up IIS I do two things:

  • I modify my Visual Studio taskbar icon to always run as admin (necessary to open a project bound to IIS)
  • I add a Developer Setup script to the project that developers must run once when they first clone the repository and that sets everything up for them in a matter of seconds (hopefully giving the same Open Solution -> F5 and start developing experience)
    • I’ve added an example of such a script to a Gist – the script also includes setting up the hosts file and a SQL Express database (a rough sketch of the IIS portion follows this list)
    • I can’t claim full credit for the script – it’s been a collaborative effort over a number of projects by all of the Readifarians I’ve worked with :D
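
The Gist itself is a PowerShell script; purely to illustrate the kind of thing it automates, here is a rough sketch of the IIS and hosts-file parts expressed against Microsoft.Web.Administration (the site name, host name and paths are made up, and it needs to run elevated):

    // Rough sketch of what the developer setup script automates: create an app pool and an
    // IIS site bound to a custom domain, then add a hosts file entry. Names/paths are placeholders.
    using System;
    using System.IO;
    using Microsoft.Web.Administration;

    class DevSetup
    {
        static void Main()
        {
            const string siteName = "MyProject";
            const string hostName = "myproject.local";
            const string sitePath = @"C:\code\MyProject\src\MyProject.Web";

            using (var serverManager = new ServerManager())
            {
                if (serverManager.ApplicationPools[siteName] == null)
                    serverManager.ApplicationPools.Add(siteName);

                if (serverManager.Sites[siteName] == null)
                {
                    var site = serverManager.Sites.Add(siteName, "http", "*:80:" + hostName, sitePath);
                    site.Applications["/"].ApplicationPoolName = siteName;
                }

                serverManager.CommitChanges();
            }

            // Point the custom domain at localhost
            var hostsFile = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.System), @"drivers\etc\hosts");
            if (!File.ReadAllText(hostsFile).Contains(hostName))
                File.AppendAllText(hostsFile, Environment.NewLine + "127.0.0.1 " + hostName);
        }
    }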

Automated installation script to set up Cordova/PhoneGap and Android on Windows

I recently worked on a Cordova project and one of the things we found is that it’s an absolute pain to set up a development environment, since there are a whole bunch of tools that need to be downloaded, installed and configured in specific ways.

We ended up creating a page in our project’s OneNote notebook with developer setup instructions, but even though we were using Chocolatey it was still a tedious process with numerous console restarts to refresh environment variables (that had to be manually set).

In the process of writing a post on Cordova I wanted to check something and realised I had repaved my machine since the last time I installed the Android SDK / Cordova etc.

I consulted the OneNote page we had created and looked in despair at the instructions. What a PITA! So what did I do?

I spun up a Windows Azure VM and stumbled through creating a PowerShell script to automate the setup. Then I spun up a second VM to check that the script worked :). Then I deleted both of them – probably cost a few cents and the servers had a really fast download speed so the installations were really quick. God I love the cloud :D

I’ve uploaded it to a Gist. If you are setting up a PhoneGap/Cordova & Android development environment then I’m sure it will be useful to you.

Enjoy!

My stance on Azure Worker Roles

tl;dr: 99% of the time a Worker Role is not the right solution. Read on for more info.

Worker Role Deployments

I quite often get asked by people about the best way to deploy Worker Roles because it is a pain – as an Azure Cloud Service the deployment time of a Worker Role is 8-15+ minutes. In the age of continuous delivery and short feedback loops this is unacceptable (as I have said all along).

On the surface though, Worker Roles are the most appropriate and robust way to deploy heavy background processing workloads in Azure. So what do we do?

Web Jobs

The advice I generally give people when deploying websites to Azure is to use Azure Web Sites unless there is something that requires them to use Web Roles (and use Virtual Machines as a last resort). That way you are left with the best possible development, deployment, debugging and support experience possible for your application.

Now that Web Jobs have been released for a while and have a level of maturity and stability I have been giving the same sort of advice when it comes to background processing: if you have a workload that can run on the Azure Web Sites platform (e.g. doesn’t need registry/COM+/GDI+/elevated privileges/custom software installed/mounted drives/Virtual Network/custom certificates etc.) and it doesn’t have intensive CPU or memory resource usage then use Web Jobs.
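
To give a feel for what that looks like, here's a minimal sketch of a queue-triggered Web Job using the WebJobs SDK (the queue name is made up and it assumes the standard AzureWebJobsStorage / AzureWebJobsDashboard connection strings are configured) – though you can equally deploy a plain console app as a Web Job without the SDK:

    // Minimal queue-triggered Web Job sketch using the WebJobs SDK (Microsoft.Azure.WebJobs).
    // The "orders" queue is a made-up example.
    using System.IO;
    using Microsoft.Azure.WebJobs;

    public class Program
    {
        public static void Main()
        {
            var host = new JobHost();
            host.RunAndBlock();
        }

        // Invoked whenever a message appears on the "orders" storage queue
        public static void ProcessOrder([QueueTrigger("orders")] string message, TextWriter log)
        {
            log.WriteLine("Processing order: " + message);
        }
    }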

I should note that when deploying Web Jobs you can deploy them automatically using the WebJobsVs Visual Studio extension.

As a side note: some of my colleagues at Readify have recently started using Web Jobs as a platform for deploying Microservices in asynchronous messaging based systems. It’s actually quite a nice combination because you can put any configuration / management / monitoring information associated with the micro-service in the web site portion of the deployment and it’s intrinsically linked to the Web Job in both the source code and the deployment.

Worker Roles

If you are in a situation where you have an intense workload, you need to scale the workload independently of your Web Sites instances, or your workload isn’t supported by the Azure Web Sites platform (and thus can’t be run as a Web Job), then you need to start looking at Worker Roles or some other form of background processing.

Treat Worker Roles as infrastructure

One thing that I’ve been trying to push for a number of years now (particularly via my AzureWebFarm and AzureWebFarm.OctopusDeploy projects) is for people to think of Cloud Services deployments as infrastructure rather than applications.

With that mindset shift, Cloud Services becomes amazing rather than a deployment pain:

  • Within 8-15+ minutes a number of customised, RDP-accessible Virtual Machines are provisioned for you behind a static IP address; those machines can be scaled up or down at any time, have health monitoring and diagnostics capabilities built in, sit behind a powerful load balancer and let you arbitrarily install software or perform configuration with elevated privileges!
  • To reiterate: waiting 8-15+ minutes for a VM to be provisioned is amazing; waiting 8-15+ minutes for the latest version of your software application to be deployed is unacceptably slow!

By treating Cloud Services as stateless, scalable infrastructure you will rarely perform deployments and the deployment time is then a non-issue – you will only perform deployments when scaling up or rolling out infrastructure updates (which should be a rare event and if it rolls out seamlessly then it doesn’t matter how long it takes).

Advantages of Web/Worker Roles as infrastructure

  • As above, slow deployments don’t matter since they are rare events that should be able to happen seamlessly without taking out the applications hosted on them.
  • As above, you can use all of the capabilities available in Cloud Services.
  • Your applications don’t have to have a separate Azure project in them, making the Visual Studio solution simpler / faster to load etc.
  • Your applications don’t have any Azure-specific code in them (e.g. CloudConfigurationManager, RoleEnvironment, RoleEntryPoint, etc.) anymore
    • This makes your apps simpler and also means that you aren’t coding anything in them that indicates how/where they should be deployed – this is important and how it should be!
    • It also means you can deploy the same code on-premise and in Azure seamlessly and easily

How do I deploy a background processing workload to Worker Role as infrastructure?

So how does this work you might ask? Well, apart from rolling your own code in the Worker Role to detect, deploy and run your application(s) (say, from blob storage) you have two main options that I know of (both of which are open source projects I own along with Matt Davies):

  • AzureWebFarm and its background worker functionality
    • This would see you deploying the background work as part of MSDeploying a web application and it works quite similarly to (but is admittedly probably less robust than) Web Jobs – this is suitable for light workloads
  • AzureWebFarm.OctopusDeploy and using OctopusDeploy to deploy a Windows Service
    • In general I recommend using Topshelf to develop Windows Services because it allows a nicer development experience (a single console app project that you can F5) and deployment experience (you pass an install argument to install it) – see the sketch below this list
    • You should be able to deploy heavyweight workloads using this approach (just make sure your role size is suitable)
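
To illustrate the Topshelf suggestion, a bare-bones service looks something like the following (the service names and the work being done are made up) – you get a console app you can F5, and installing it is just a matter of running the exe with an install argument:

    // Bare-bones Topshelf Windows Service sketch; names and the actual work are placeholders.
    using Topshelf;

    public class OrderProcessingService
    {
        public void Start() { /* start polling / message pump here */ }
        public void Stop() { /* shut down cleanly here */ }
    }

    public class Program
    {
        public static void Main()
        {
            HostFactory.Run(x =>
            {
                x.Service<OrderProcessingService>(s =>
                {
                    s.ConstructUsing(name => new OrderProcessingService());
                    s.WhenStarted(svc => svc.Start());
                    s.WhenStopped(svc => svc.Stop());
                });
                x.RunAsLocalSystem();
                x.SetServiceName("OrderProcessing");
                x.SetDisplayName("Order Processing Service");
                x.SetDescription("Background processing deployed via OctopusDeploy.");
            });
        }
    }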

The thing to note about both of these approaches is that you are actually using Web Roles, not Worker Roles! This is fine because there isn’t actually any difference between them apart from the fact that Web Roles have IIS installed and configured. If you don’t want anyone to access the servers over HTTP because they are only used for background processing then simply don’t expose a public endpoint.

So, when should I actually use Worker Roles (aka you said they aren’t applicable 99% of the time – what about the other 1%)?

OK, so there are definitely some situations I can think of, and have occasionally come across, that warrant the application actually being coded as a Worker Role – remember to be pragmatic and use the right tool for the job! Here are some examples (but it’s by no means an exhaustive list):

  • You need the role to restart if there are any uncaught exceptions
  • You need the ability to control the server as part of the processing – e.g. request the server start / stop
  • You want to connect to internal endpoints in a cloud service deployment or do other complex things that require you to use RoleEnvironment
  • There isn’t really an application-component (or it’s tiny) – e.g. you need to install a custom application when the role starts up and then you invoke that application in some way
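
For reference, the skeleton of a hand-coded Worker Role looks roughly like this (the work inside Run() is whatever your scenario needs) – note that if Run() returns or throws, the role instance is recycled, which is the restart behaviour mentioned in the first point above:

    // Skeleton Worker Role; requires Microsoft.WindowsAzure.ServiceRuntime (Azure SDK).
    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        private readonly CancellationTokenSource _cancellationSource = new CancellationTokenSource();

        public override bool OnStart()
        {
            // e.g. read RoleEnvironment settings, install custom software, wire up internal endpoints
            return base.OnStart();
        }

        public override void Run()
        {
            // If this method returns (or an exception escapes) the role instance is recycled
            while (!_cancellationSource.IsCancellationRequested)
            {
                // Do the background work here
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }

        public override void OnStop()
        {
            _cancellationSource.Cancel();
            base.OnStop();
        }
    }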

What about Virtual Machines?

Sometimes Cloud Services aren’t going to work either – in a scenario where you need persistent storage and can’t code your background processing to be stateless via RoleEntryPoint, you might need to consider standing up one or more Virtual Machines. If you can avoid this at all then I highly recommend doing so, since otherwise you have to maintain the VMs yourself rather than using a managed service.

Other workloads

This post is targeted at the types of background processing workloads you would likely deploy to a Worker Role. There are other background processing technologies in Azure that I have deliberately not covered in this post such as Hadoop.

GitVersion TeamCity MetaRunner

I’ve blogged previously about using GitHubFlowVersion for versioning and how I created a TeamCity meta-runner for it.

A lot has happened since then in that space and that has been nicely summarised by my friend Jake Ginnivan. tl;dr: GitHubFlowVersion has been merged with the GitFlowVersion project to form GitVersion.

This project is totally awesome and I highly recommend that you use it. In short:

GitVersion uses your git repository branching conventions to determine the current Semantic Version of your application. It supports GitFlow and the much simpler GitHubFlow.

I’ve gone ahead and developed a much more comprehensive TeamCity meta-runner for GitVersion and I’ve submitted it to the TeamCity meta-runner PowerPack. This meta-runner allows you to use GitVersion without needing to install any binaries on your build server or your source repository – it automatically downloads it from Chocolatey :)

Happy building!

Announcing 1.0.0 of ReliableDbProvider library

I’d like to announce that today I’ve released v1.0.0 of the ReliableDbProvider library. It’s been kicking around for a while, has a reasonable number of downloads on NuGet and has just received a number of bug fixes from the community so I feel it’s ready for the 1.0.0 badge :).

ReliableDbProvider is a library that allows you to unobtrusively handle transient errors when connecting to Azure SQL Database when using ADO.NET, LINQ to SQL, Entity Framework < 6 (EF6 has similar functionality built in) or any library that uses ADO.NET (e.g. Massive).

Check it out on GitHub.

Authenticating an ASP.NET MVC 5 application with Microsoft Azure Active Directory

This post outlines how to easily add Azure AD authentication to an existing (or new) ASP.NET MVC 5 (or 3 or 4) application. It also explains what all of the code does.

Practical Microsoft Azure Active Directory Blog Series

This post is part of the Practical Microsoft Azure Active Directory Blog Series.

Add Azure AD Authentication

Note: Ignore the ...‘s and replace the {SPECIFIED_VALUES} with the correct values.

  1. Create an Azure Active Directory tenant; note: AD tenants are not associated with your Azure Subscription, they are “floating”, so add the Live IDs of any people you want to administer it as Global Administrators
  2. Create an Application in your AD tenant with audience URL and realm being your website homepage (minus the slash at the end)
    • Record the name of your AD tenant e.g. {name}.onmicrosoft.com
    • Record the GUID of your AD tenant by looking at the FEDERATION METADATA DOCUMENT URL under View Endpoints
  3. Create a user account in your tenant that you can use to log in with
  4. Install-Package Microsoft.Owin.Security.ActiveDirectory
  5. Install-Package System.IdentityModel.Tokens.ValidatingIssuerNameRegistry
  6. Add a reference to System.IdentityModel
  7. Add a reference to System.IdentityModel.Services
  8. Add a Startup.cs file (if it doesn’t already exist) and configure OWIN to use Azure Active Directory
    using System.Configuration;
    using Microsoft.Owin.Security.ActiveDirectory;
    using Owin;
    
    namespace {YOUR_NAMESPACE}
    {
        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                app.UseWindowsAzureActiveDirectoryBearerAuthentication(
                    new WindowsAzureActiveDirectoryBearerAuthenticationOptions
                    {
                        Audience = ConfigurationManager.AppSettings["ida:AudienceUri"],
                        Tenant = ConfigurationManager.AppSettings["AzureADTenant"]
                    });
            }
        }
    }
    
  9. Add the correct configuration to your web.config file; change requireSsl and requireHttps to true if using a https:// site (absolutely required for production scenarios)
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <section name="system.identityModel" type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
        <section name="system.identityModel.services" type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
      </configSections>
    ...
      <appSettings>
        ...
        <add key="ida:AudienceUri" value="{YOUR_WEBSITE_HOMEPAGE_WITHOUT_TRAILING_SLASH}" />
        <add key="ida:FederationMetadataLocation" value="https://login.windows.net/{YOUR_AD_TENANT_NAME}.onmicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml" />
        <add key="AzureADTenant" value="{YOUR_AD_TENANT_NAME}.onmicrosoft.com" />
      </appSettings>
    ...
      <system.webServer>
        ...
        <modules>
          <add name="WSFederationAuthenticationModule" type="System.IdentityModel.Services.WSFederationAuthenticationModule, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="managedHandler" />
          <add name="SessionAuthenticationModule" type="System.IdentityModel.Services.SessionAuthenticationModule, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="managedHandler" />
        </modules>
      </system.webServer>
    ...
      <system.identityModel>
        <identityConfiguration>
          <issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
            <authority name="https://sts.windows.net/{YOUR_AD_TENANT_GUID}/">
              <keys>
                <add thumbprint="0000000000000000000000000000000000000000" />
              </keys>
              <validIssuers>
                <add name="https://sts.windows.net/{YOUR_AD_TENANT_GUID}/" />
              </validIssuers>
            </authority>
          </issuerNameRegistry>
          <audienceUris>
            <add value="{YOUR_WEBSITE_HOMEPAGE_WITHOUT_TRAILING_SLASH}" />
          </audienceUris>
          <securityTokenHandlers>
            <add type="System.IdentityModel.Services.Tokens.MachineKeySessionSecurityTokenHandler, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            <remove type="System.IdentityModel.Tokens.SessionSecurityTokenHandler, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          </securityTokenHandlers>
          <certificateValidation certificateValidationMode="None" />
        </identityConfiguration>
      </system.identityModel>
      <system.identityModel.services>
        <federationConfiguration>
          <cookieHandler requireSsl="false" />
          <wsFederation passiveRedirectEnabled="true" issuer="https://login.windows.net/{YOUR_AD_TENANT_NAME}.onmicrosoft.com/wsfed" realm="{YOUR_WEBSITE_HOMEPAGE_WITHOUT_TRAILING_SLASH}" requireHttps="false" />
        </federationConfiguration>
      </system.identityModel.services>
    </configuration>
    
  10. Configure AntiForgery to use the correct claim type to uniquely identify users
    Global.asax.cs
    
              protected void Application_Start()
              {
                  ...
                  IdentityConfig.ConfigureIdentity();
              }
    
    App_Start\IdentityConfig.cs
    
    using System.IdentityModel.Claims;
    using System.Web.Helpers;
    
    namespace {YOUR_NAMESPACE}
    {
        public static class IdentityConfig
        {
            public static void ConfigureIdentity()
            {
                AntiForgeryConfig.UniqueClaimTypeIdentifier = ClaimTypes.Name;
            }
        }
    }
    
  11. Configure the application to refresh issuer keys when they change
            public static void ConfigureIdentity()
            {
                ...
                RefreshIssuerKeys();
            }
    
            // Requires: using System; using System.Configuration; using System.IdentityModel.Tokens;
            private static void RefreshIssuerKeys()
            {
                // http://msdn.microsoft.com/en-us/library/azure/dn641920.aspx
                var configPath = AppDomain.CurrentDomain.BaseDirectory + "\\" + "Web.config";
                var metadataAddress = ConfigurationManager.AppSettings["ida:FederationMetadataLocation"];
                ValidatingIssuerNameRegistry.WriteToConfig(metadataAddress, configPath);
            }
    
  12. Add LogoutController
    Controllers\LogoutController.cs
    
    using System;
    using System.IdentityModel.Services;
    using System.Web.Mvc;
    
    namespace {YOUR_NAMESPACE}.Controllers
    {
        public class LogoutController : Controller
        {
            public ActionResult Index()
            {
                var config = FederatedAuthentication.FederationConfiguration.WsFederationConfiguration;
    
                var callbackUrl = Url.Action("Callback", "Logout", null, Request.Url.Scheme);
                var signoutMessage = new SignOutRequestMessage(new Uri(config.Issuer), callbackUrl);
                signoutMessage.SetParameter("wtrealm", config.Realm);
                FederatedAuthentication.SessionAuthenticationModule.SignOut();
    
                return new RedirectResult(signoutMessage.WriteQueryString());
            }
    
            [AllowAnonymous]
            public ActionResult Callback()
            {
                if (Request.IsAuthenticated)
                    return RedirectToAction("Index", "Home");
    
                return View();
            }
        }
    }
    
    Views\Logout\Callback.cshtml
    
    @{
        ViewBag.Title = "Logged out";
    }
    
    <h1>Logged out</h1>
    
    <p>You have successfully logged out of this site. @Html.ActionLink("Log back in", "Index", "Home").</p>
    
  13. Add a logout link somewhere: @Html.ActionLink("Logout", "Index", "Logout")
  14. Add authentication to the app; do this as you normally would by applying [Authorize] to specific controller(s) or action(s), or globally by adding GlobalFilters.Filters.Add(new AuthorizeAttribute());
  15. Load the site and navigate to one of the authenticated pages – it should redirect you to your Azure AD tenant login page whereupon you need to log in as one of the users you created and it should take you back to that page, logged in
  16. The usual User.Identity.Name and User.Identity.IsAuthenticated properties should be populated and if you want access to the claims to get the user’s given name etc. then use something like ClaimsPrincipal.Current.FindFirst(ClaimTypes.GivenName).Value
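
As a quick illustration of steps 14 and 16, a protected controller action that reads the signed-in user's claims might look something like this (the controller name is arbitrary):

    using System.Security.Claims;
    using System.Web.Mvc;

    namespace {YOUR_NAMESPACE}.Controllers
    {
        [Authorize]
        public class ProfileController : Controller
        {
            public ActionResult Index()
            {
                // User.Identity.Name comes from the token; other claims are on the current ClaimsPrincipal
                var givenName = ClaimsPrincipal.Current.FindFirst(ClaimTypes.GivenName);
                ViewBag.Greeting = givenName != null ? givenName.Value : User.Identity.Name;
                return View();
            }
        }
    }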

Practical Microsoft Azure Active Directory Blog Series

I finally had a chance to play with Microsoft Azure Active Directory in a recent project. I found the experience to be very interesting – Azure AD itself is an amazing, powerful product with a lot of potential. It certainly has a few rough edges here and there, but it’s pretty clear Microsoft are putting a lot of effort into it as it’s forming the cornerstone of how they authenticate all of their services, including Office 365.

Azure AD gives you the ability to securely manage a set of users and also gives the added benefits of two-factor authentication (2FA), single sign-on across applications, multi-tenancy support and the ability to allow external organisations to authenticate against your application.

This blog series will outline the minimum set of steps that you need to perform to quickly and easily add Azure AD authentication to an existing ASP.NET MVC 5 (or 3 or 4) site (or a new one if you select the No Authentication option when creating it) as well as configure things like API authentication, role authorisation, programmatic logins and deployments to different environments.

There are already tools and libraries out there for this – why are you writing this series?

Microsoft have made it fairly easy to integrate Azure AD authentication with your applications by providing NuGet packages with most of the code you need and also tooling support to configure your project in Visual Studio. This is combined with a slew of MSDN and technet posts covering most of it.

When it comes to trying to understand the code that is added to your solution however, things become a bit tricky as the documentation is hard to navigate through unless you want to spend a lot of time. Also, if you have Visual Studio 2013 rather than Visual Studio 2012 you can only add authentication to a new app as part of the File -> New Project workflow by choosing the Organizational Authentication option:

[Screenshot: Visual Studio – File > New Project > ASP.NET > Change Authentication > Organizational Authentication]

If you have an existing ASP.NET web application and you are using Visual Studio 2013 then you are out of luck.

Furthermore, the default code you get requires you to have Entity Framework and a database set up, despite the fact this is only really required if you are using multiple Azure AD tenants (unlikely unless you are creating a fairly hardcore multi-tenant application).

If you then want to add role-based authentication based on membership in Azure AD groups then there is no direction for this either.

For these reasons I’m developing a reference application that contains the simplest possible implementation of adding these features in an easy to follow commit-by-commit manner as a quick reference. I will also provide explanations of what all the code means in this blog series so you can understand how it all works if you want to.

You can see the source code of this application here and an example deployment here. The GitHub page outlines information such as example user logins and what infrastructure I set up in Azure.

What are you planning on covering?

This will be the rough structure of the posts I am planning in no particular order (I’ll update this list with links to the posts over time):

  • Authenticating an ASP.NET MVC 5 application with Microsoft Azure Active Directory
  • Explaining the code behind authenticating MVC5 app with Azure AD
  • Deep-dive into how to use the Azure Portal to create the relevant parts of the infrastructure (with screenshots)
  • Add role-based authorisation based on Azure AD group membership
  • How to get a list of all users in a particular group
  • How to use a different application / AD tenant in different environments with web.config transformations
  • How to authenticate against ASP.NET Web API using Azure AD
  • How to authenticate programmatically in a console app (or other non-human application)
  • How to use the default multi-tenant support using Entity Framework

I’m notoriously bad at finishing blog series that I start, so no promises on when I will complete this, but I have all of the code figured out in one way or another and the GitHub repository should at least contain commits with all of the above before I finish the accompanying posts so *fingers crossed*! Feel free to comment below if you want me to expedite a particular post.

More resources

I came across some great posts that have helped me so far so I thought I’d link to them here to provide further reading if you are interested in digging deeper:

Announcing repave.psm1

So after 18 months of not repaving my machine and occasionally (especially lately) having to deal with the machine filling up and slowing down I’m finally at the point where it’s time to repave. I wanted to do it ages ago, but I avoided it because of how painful it is to do.

This time around I’ve decided to bite the bullet and finally do something I’ve been meaning to do all along – create a script to make it much easier / quicker as well as form documentation about what programs / what setup I want for my machine.

I’ve been interested in Chocolatey and Boxstarter for ages to do this very thing. In this instance I didn’t bother using Boxstarter since I didn’t have any restarts in there, but I encourage people to look into it particularly if doing VM install scripts – it’s AMAZING.

I started writing this crazy PowerShell script to automate all the installs and settings I wanted and eventually I refactored it until it became like this. I think it’s really readable and maintainable and acts really well as documentation for myself.

While developing it I initially had a bunch of cinst calls, but the problem with that is each call incurs a 2s startup cost for some reason – this made developing it painful. In order to develop the script incrementally (I was doing it inside of a VM so I could trash it and start from the beginning again) I wanted three things:

  • Speed (if something is already installed I want it to skip it instantly, not wait for cinst to spin up for 2s)
  • Idempotency (I want to run and re-run the script again and again and again after making small changes to see their effect)
  • Fail fast (if something is wrong I want it to just fail and print an error so I can see what happened – I don’t want it to continue trying to install other things that might be dependent on the thing that failed)

I managed to achieve all of that and the other advantage I see in this approach is that it makes it really easy for me to reuse the script as an update mechanism if I decide to change things between re-paves. This is awesome and I think makes the script way more useful.

Long story short: I’ve abstracted all of the main functionality into a PowerShell module and open-sourced it as repave.psm1 on GitHub. Check it out and feel free to fork it to create your own scripts and submit back a pull request with any fixes or additions.

It’s a bit rough around the edges since I’ve knocked it up in a hurry this weekend, but I did put in some initial documentation to describe all the functionality and there are two example scripts in there that use it.

Enjoy!
