Posts - Page 10 of 22
NQUnit update
I've just pushed a new version of NQUnit and NQUnit.NUnit (1.0.5). This release brings the libraries up to the latest versions of WatiN, NUnit, jQuery and QUnit, and includes some documentation improvements. You can see the latest code and documentation on GitHub.
It's worth noting that while NQUnit still works perfectly well, there is a better option that I usually recommend in preference: Chutzpah. That said, I've updated NQUnit as per the pull requests the library received - while there are still people using the library I'm happy to keep it up to date.
Unit Testing Model Binders in ASP.NET MVC 4 (and 3)
My friend Matt Davies has done some awesome work exploring the best way to unit test model binders, including how to get the Default Model Binder working. This is a blog post I was thinking of doing myself, so it’s awesome Matt has done the hard work for me :) Check it out!
Test Harness for NuGet install PowerShell scripts (init.ps1, install.ps1, uninstall.ps1)
One thing that I find frustrating when creating NuGet packages is the debug experience when it comes to creating the PowerShell install scripts (init.ps1, install.ps1, uninstall.ps1).
To make the debugging easier I've created a test harness Visual Studio solution that allows you to make a change to a file, compile the solution, run a single command in the Package Manager Console and have the package uninstall and then install again. That way you can change a line of code, hit a few keystrokes and see the result straight away.
To see the code you can head to the GitHub repository. The basic instructions are on the readme:
- [Once off] Check out the code
- [Once off] Create a NuGet source directory in the checkout directory
- Repeat in a loop:
- Write the code (the structure of the solution is the structure of your NuGet package, so put the appropriately named .ps1 scripts in the tools folder)
- Compile the solution <Ctrl+Shift+B> (this creates a NuGet package in the root of the solution with the name Package and version {yymm}.{ddHH}.{mmss}.nupkg - the package version increases with time, so installing from that directory will always install the latest build)
- Switch to the Package Manager Console <Ctrl+P, Ctrl+M>
- Run Install-Package Package [first time] / Uninstall-Package Package; Install-Package Package [subsequent times]
- When done, simply copy the relevant files out and reset master to get a clean slate
Other handy hints
- Get-Project in the NuGet console returns the current project as a DTE object (the same as the $project parameter that is passed to the NuGet scripts)
- Look up the relevant MSDN documentation on the Project DTE item
- If you want to add commands to the NuGet Package Manager Console when people install your package, put the commands in a .psm1 file and load the module from init.ps1 (which is loaded every time your solution loads)
The Idempotency issue when retrying commands with Azure SQL Database (SQL Azure)
There is a lot of information available about dealing with the transient errors that occur when using Azure SQL Database. If you are using it then it's really important to take transient errors into account, since they are one of the main differences Azure SQL has compared to SQL Server.
If you are using .NET then you are in luck, because Microsoft have provided an open source library to detect these transient errors and retry (the Transient Fault Handling Application Block). I have blogged previously about how the guidance that comes with the application block (along with most of the posts, tutorials and forum posts about it) indicates that you need to completely re-architect your system to wrap every single database statement with a retry.
I wasn't satisfied with that advice and hence I created NHibernate.SqlAzure and more recently ReliableDbProvider (which works with ADO.NET, Entity Framework, LINQ to SQL, etc.). These libraries allow you to drop in a couple of lines of configuration in one place in your application and unobtrusively get transient fault handling throughout it.
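To give a feel for what "a couple of lines of configuration" means in practice, here is a minimal sketch of the NHibernate.SqlAzure wiring, shown with NHibernate's loquacious configuration API (see the project's README for the canonical setup and the exact driver class name for your version):

```csharp
using NHibernate;
using NHibernate.Cfg;
using NHibernate.SqlAzure;

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build(string connectionString)
    {
        var configuration = new Configuration();
        configuration.DataBaseIntegration(db =>
        {
            // Swapping in the retry-aware driver is the key change; every
            // command issued through NHibernate then gets transient fault
            // handling without further changes to application code.
            db.Driver<SqlAzureClientDriver>();
            db.ConnectionString = connectionString;
        });
        // Add your mappings as usual, then:
        return configuration.BuildSessionFactory();
    }
}
```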
Easy right? A silver bullet even? Unfortunately, no.
The Idempotency issue
Today I was made aware of a post from a few months ago by a Senior Program Manager on the SQL Server team about the idempotency issue with Azure SQL Database. Unfortunately, I haven't been able to find any more information about it - if you know anything please leave a comment.
The crux of the problem is that it is possible for a transient error to be experienced by the application when in fact the command that was sent to the server was successfully processed. Obviously, that won’t have any ill-effect for a SELECT statement, and if the SELECT is retried then there is no problem. When you have write operations (e.g. INSERTs, UPDATEs and DELETEs) then you can start running into trouble unless those commands are repeatable (i.e. idempotent).
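To make the failure mode concrete, here is a contrived sketch of how a naive retry wrapper around plain ADO.NET can end up executing an INSERT twice (the table, column and connection string are made up for illustration):

```csharp
using System;
using System.Data.SqlClient;

static class NaiveRetryDemo
{
    // Retries the action whenever a SqlException surfaces; the client has
    // no way of knowing whether the failed attempt actually committed.
    static void ExecuteWithRetry(Action action, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try { action(); return; }
            catch (SqlException)
            {
                if (attempt >= maxAttempts) throw;
                // The fault may have arrived *after* the server processed
                // the command - retrying now replays it.
            }
        }
    }

    static void Main()
    {
        ExecuteWithRetry(() =>
        {
            using (var connection = new SqlConnection("<connection string>"))
            using (var command = new SqlCommand(
                "INSERT INTO AuditLog (Message) VALUES (@message)", connection))
            {
                command.Parameters.AddWithValue("@message", "member logged in");
                connection.Open();
                // If the transient fault hits between the server committing
                // and the client receiving the response, the retry above
                // inserts a duplicate row.
                command.ExecuteNonQuery();
            }
        });
    }
}
```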
This behaviour is particularly troubling (although in retrospect not terribly surprising) and the frustration of one of the commenters on the post sums up the situation fairly well (and in particular calls out how impractical the suggested workaround given in the post is):
How exactly would this work with higher abstraction ORMs such as Entity Framework? The updates to a whole entity graph are saved as a whole, along with complex relationships between entities. Can entity updates be mapped to stored procedures such as this in EF? I completely appreciate this post from an academic perspective, but it seems like an insane amount of work (and extremely error-prone) to map every single update/delete operation to a stored procedure such as this.
Approaches
After giving it some consideration and conferring with some of my colleagues, I can see a number of ways to deal with this (you could do something like what was suggested in the post linked to above, but frankly I don’t think it’s practical so I’m not including it). If you have any other ideas then please leave a comment below.
- Do nothing: transient faults (if you aren't loading the database heavily) are pretty rare, and even then the likelihood of hitting the idempotency issue is very low
- In this case you would be making a decision that the potential for “corrupt” data is of a lower concern than application complexity / overhead / effort to re-architect
- If you do go down this approach I’d consider if there is some way you can monitor / check the data to try and detect if any corruption has occurred
- Unique keys are your friend (e.g. if you had a Member table with an identity primary key and business logic that says emails must be unique per member, then a unique key on Member.Email will protect against duplicate entries)
- Architect your system so that all work against the database is abstracted behind some sort of unit of work pattern and the central code that executes each unit of work contains your retry logic (see the first sketch below this list)
- For instance, if using NHibernate you could throw away the session on a transient error, get another one and retry the unit of work
- While this ensures the integrity of your transactions, it has the potential side-effect of slowing things down, since any transient error will cause the whole unit of work to be retried
- Ensure all of your commands are idempotent
- While on the surface this doesn't sound much better than having to wrap all commands with transient retry logic, it can be quite straightforward depending on the application, because most update and delete commands are probably idempotent already
- Instead of using database-generated identities for new records, use application-generated identities (for instance, generate a GUID and assign it to the id before inserting an entity) and then your insert statements will also be idempotent, assuming the database has a primary key on the id column (see the second sketch below this list)
- NHibernate can generate GUIDs for you automatically and supports the comb GUID algorithm, which avoids index fragmentation issues within the database storage
- Alternatively, you can use strategies that generate unique integers, like HiLo in NHibernate or SnowMaker
- If you are doing delete or update statements then you simply need to ensure they can be executed multiple times with the same result - e.g. updating a column based on its current value (e.g. UPDATE [table] SET [column] = [column] + 1 WHERE Id = [Id]) would be a problem if it executed twice
- Retry for connections only, but not commands
- Retry for select statements only, but not others (e.g. INSERT, UPDATE, DELETE)
- Don’t use Azure SQL, but instead use SQL Server on an Azure Virtual Machine
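To illustrate the unit of work option, here is a rough sketch of what a retrying executor might look like with NHibernate; the transient error detection is stubbed out, and in real code you would delegate to something like the detection strategy from the Transient Fault Handling Application Block:

```csharp
using System;
using NHibernate;

public class RetryingUnitOfWorkExecutor
{
    private readonly ISessionFactory _sessionFactory;

    public RetryingUnitOfWorkExecutor(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void Execute(Action<ISession> unitOfWork, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                // A fresh session and transaction per attempt: a session
                // that has thrown an exception is in an undefined state
                // and must not be reused.
                using (var session = _sessionFactory.OpenSession())
                using (var transaction = session.BeginTransaction())
                {
                    unitOfWork(session);
                    transaction.Commit();
                    return;
                }
            }
            catch (Exception ex)
            {
                if (attempt >= maxAttempts || !IsTransient(ex))
                    throw;
                // Otherwise loop around and replay the whole unit of work.
            }
        }
    }

    private static bool IsTransient(Exception ex)
    {
        // Stub: substitute a real detection strategy here, e.g. the one
        // from the Transient Fault Handling Application Block.
        return ex is System.Data.SqlClient.SqlException;
    }
}
```

Everything passed in the lambda is replayed as a whole, e.g. executor.Execute(session => session.Save(newMember));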
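And to illustrate application-generated ids, here is a minimal sketch using NHibernate's mapping-by-code with its built-in comb GUID generator (the entity and property names are just for illustration):

```csharp
using System;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Member
{
    public virtual Guid Id { get; set; }
    public virtual string Email { get; set; }
}

public class MemberMap : ClassMapping<Member>
{
    public MemberMap()
    {
        // guid.comb generates the id in the application before the INSERT
        // is issued, so a retried INSERT can't create a second row (the
        // primary key rejects the duplicate), and the roughly-sequential
        // GUIDs avoid index fragmentation.
        Id(x => x.Id, m => m.Generator(Generators.GuidComb));
        Property(x => x.Email);
    }
}
```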
Recommendations
With all that in mind, here are my recommendations:
- Don't shy away from protecting against transient errors - it's still important, and transient errors are far more likely to occur than the idempotency issue
- Use application-generated ids for table identifiers
- Consider what approach you will take to deal with the idempotency issue (as per above list)
Presentation and example code for test fixture data generation
This week I made a presentation to the Perth .NET user group about the content in my test fixture data generation post. As part of that I created some sample code that illustrates what each of the different techniques I outline in that post might look like - if you are interested in exploring these techniques, or in seeing some somewhat realistic examples of using the NTestDataBuilder library, then I encourage you to take a look.
Furthermore, I’ve just released version 1.0 of the NTestDataBuilder library - my team has been successfully using it for a number of weeks now and I’m happy with the functionality in it.
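If you want a feel for the underlying pattern before diving into the samples, here is a hand-rolled version of the Test Data Builder technique that NTestDataBuilder streamlines (the class and property names are invented for illustration):

```csharp
using System;

public class Customer
{
    public Customer(string firstName, string lastName, int yearJoined)
    {
        FirstName = firstName;
        LastName = lastName;
        YearJoined = yearJoined;
    }

    public string FirstName { get; private set; }
    public string LastName { get; private set; }
    public int YearJoined { get; private set; }
}

public class CustomerBuilder
{
    // Anonymous-but-valid defaults so each test only specifies the
    // values it actually cares about.
    private string _firstName = "Rob";
    private string _lastName = "Moore";
    private int _yearJoined = 2013;

    public CustomerBuilder WithFirstName(string firstName)
    {
        _firstName = firstName;
        return this;
    }

    public CustomerBuilder WhoJoinedIn(int yearJoined)
    {
        _yearJoined = yearJoined;
        return this;
    }

    public Customer Build()
    {
        return new Customer(_firstName, _lastName, _yearJoined);
    }
}

// In a test: var customer = new CustomerBuilder().WhoJoinedIn(2001).Build();
```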
Enjoy!