
Delphi Shelf Life Reaches a New Low

I am going to try to keep this post short and to the point. I don’t want to rant (too much) or say anything I regret (too much), but something has to be said.

When it comes to iOS development, Delphi XE4, a major product released by Embarcadero five months ago, is now obsolete. If you want support for iOS 7 you must buy their new product, Delphi XE5.

Let’s take a step back and look at the facts when it comes to Delphi and Embarcadero chasing the mobile landscape:

  • Delphi XE2 – released September 2011. Claims support for iOS development but, by all reports, fails to deliver. iOS development makes use of the FreePascal compiler and cannot use traditional Delphi code.
  • Delphi XE3 – released September 2012. Support for iOS development completely removed. Anyone who built their iOS app on the foundation of XE2 was left out in the cold.
  • Delphi XE4 – released April 2013. Claims support for iOS development (again). Anyone who wants the iOS development support promised in XE2 must now buy XE4, released as a new product only seven months after XE3.

And now Delphi XE5 has been released only five months after Delphi XE4. It’s another new version and another paid upgrade.

Here’s the real rub though. iOS 7 was just released by Apple. It features a new UI and new navigation elements for apps. Anyone using Xcode (the traditional tool for iOS development which is free) could simply recompile their apps and resubmit them to support iOS 7.

What about Delphi XE4 customers? The ones who just put down their hard-earned money for iOS support, again, five months ago? They are left out in the cold. Again. If a Delphi XE4 customer wants to support iOS 7 they must now purchase Delphi XE5. I confirmed this myself in a back-and-forth on Twitter with Jim McKeeth from Embarcadero.

Jim goes on to point out that, if you forgo using the bundled UI kit in your iOS app, you can still leverage the new iOS 7 elements using Delphi XE4.

However, this amounts to suggesting the customer abandon the very UI technology that has been the heart of Embarcadero’s marketing for several releases now: FireMonkey.

To be clear, I am not upset by a six-month release cycle. A lot of companies do that and it’s a smart move for some of them. However, Embarcadero is releasing Delphi like a subscription while selling it like a packaged product. And while they offer “Software Assurance,” that is a far cry from a subscription: it’s an added fee on top of your purchase that covers any major upgrades that happen to be released within a year. It’s insurance. It’s the type of thing most of us pass up when checking out at Best Buy.

All in all, this has left a horrible taste in my mouth and the mouths of many other developers. My advice? If Delphi has turned into a subscription, then charge a subscription. Stop packaging it as something that will be obsolete in five months without another purchase.

Resources for the Self Employed Software Developer

After a year of working for myself as a software consultant, this Monday I begin a new position at IDMWORKS. And, while I’ve had a blast being self-employed, I’m very excited to start this new chapter in my career with a lot of really cool ladies and gentlemen.

As I wind down my consulting I thought I’d do a blog post describing some of the resources I’ve used for the past few years in order to work with my customers as a software consultant and freelance developer. One of the fun parts of venturing into this was learning about all of the really awesome services there are out there – and at amazing prices – to help the solo consultant really hit the ground running.

Time Tracking & Invoicing


For tracking time and invoicing customers, I really dig Harvest. It does everything I need and then some, and it’s priced right. Harvest is very easy to use and lets you manage:

  • Clients
  • Projects
  • Time Sheets
  • Invoices
  • Retainers
  • Payments

It also lets you accept payments via PayPal, Stripe, or Authorize.Net, sends out automatic invoice reminders, and more. When it comes to time tracking, they have a very nice HTML5 page for that, or you can use mobile apps and desktop widgets.

And, you can use it for free until you need more than two projects or more than four clients. After that, if you are the only user you are looking at a whopping $12/month.



Contracts

In order to get projects under way you’ll eventually need to draw up some contracts and get them signed. I’m a fan of Contractually for getting this done. They have a library of contract templates available to customize, and you can save your customized templates for re-use later. From there you can invite folks to review and, optionally, edit the contract online with full version control. Once both parties accept the contract, both can sign the contract digitally. With the latest changes from the team at Contractually, the party you invite to review and sign no longer has to create a Contractually account.

Like Harvest and the rest of the resources on this list, Contractually is priced right. The price has gone up since they launched, but you can still get a solo account for $49/year, which is a bargain for getting this level of ease when it comes to the contract process.

Project/Task Management


So everyone’s all “Trello”! Honestly, I really like Asana for project/task management. It’s a very straightforward, “traditional” task management system that lets you break things down into workspaces, then projects, and finally tasks. Tasks can have sub-task lists, and it’s very easy to invite customers to participate in individual workspaces.

Asana is completely free for teams up to 15 people. After that their pricing model scales up nicely.

Hosted Source Control


As with project management, there’s already a dominant player in this category: GitHub. And I love GitHub, especially for working on collaborative, open source projects. The workflow is just superb. But it costs money to host private repositories, and you must pay more as you add them. To me this discourages putting every project under version control and keeping it offsite.

Bitbucket is a really wonderful product. The only real weakness is that it’s not GitHub. And everyone uses GitHub for collaborative projects. But if you need somewhere to store your private projects with great features and the ability to easily invite your customers, Bitbucket gives you that and is free – including unlimited private repositories – for up to 5 users.

And while we’re on the topic, Atlassian also provides a wonderful Git client for OS X (and Windows) called SourceTree.

Hosted Servers/Services

Windows Azure

If you follow my blog or my Twitter account, you’ll know I’m a fan (if sometimes a critic) of the Windows Azure services. To me, there is no single stronger tool a self-employed software consultant can have under her belt. Eventually you are going to need to host things somewhere that isn’t your machine. In my experience, the hosting options out there come in two flavors: cheap and horrible, or expensive and great.

Windows Azure gives you hosted environments for many different things, from websites to full virtual machines (Windows and Linux) to SQL data, off-site storage and APIs for mobile applications. And the pricing is very attractive. All of the services let you start off for free and the portal and services are structured in such a way that you will be warned before you are ever billed. From there, the pricing scales very nicely.

Most importantly, Windows Azure is absolutely a high priority for Microsoft. This is obvious from their recent developer conferences and product releases. For now it looks like Windows Azure is more of an Xbox than a Silverlight.



Training

The independent software consultant must constantly stay up to date on the available technologies in the field and how (and when) to exploit them. And Pluralsight is just a fantastic resource for training and education on the top technologies in development today. They go far beyond simple how-tos and include great detail on the whys of what you are watching.

And to stick with our established pattern, the pricing doesn’t suck. Starting at $29/month you get access to their entire catalog of courses. This one is a no-brainer, folks.


I’ll still be blogging here, plus I’ll be contributing to the IDMWORKS blog going forward. Feel free to share any resources you’ve found useful in the comments and good luck!

Painless File Backups to Azure Storage

Windows Azure

In my previous post I discussed steps and utilities for backing up Azure SQL Databases in order to guard against data loss due to user or program error. Since then I’ve started investigating options for backing up files – specifically those in Azure Virtual Machines – to the same Azure Storage service used previously.

Just like before, I was delighted to find an existing app that makes this super easy. The AzCopy utility makes it possible to copy files between local storage and Azure Storage, or between two Azure Storage locations, with a nice set of arguments.

For instance, the following command will copy all of the files in a local folder, recursively, over to Azure Storage. It will skip any files that already exist in the destination unless the source file is newer, in which case they are overwritten. Perfect.

AzCopy.exe C:\Here\Be\Important\Things https://yourstoragename.blob.core.windows.net/yourcontainer /destKey:YourSuperLongAzureStorageKey /S /V /Y /XO

The Azure storage account name and access key can be accessed in the Storage section of the portal, by clicking the Manage Access Keys button at the bottom of the Windows Azure Portal.

Azure Storage Information

This command took under a minute to back up 3,000 files to Azure Storage from an Azure Virtual Machine. From there I can keep running the command and it will only copy new or updated files over to Azure Storage.

As in my previous post, my little utility AzureStorageCleanup is a nice companion to this process. I have updated the source on GitHub to include a -recursive argument, which will remove files within the virtual hierarchy found in blob storage (created by the recursive option in AzCopy).

AzureStorageCleanup.exe -storagename yourstoragename -storagekey YourSuperLongAzureStorageKey -container yourstoragecontainer -mindaysold 60 -recursive

By scheduling AzureStorageCleanup to run with the -recursive option, you can remove old files to keep storage use in-check.
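Putting the two steps together, the nightly backup can be reduced to one small script. This is only a sketch: the folder, storage account name, container, and key below are placeholders to substitute with your own values.

```shell
REM nightly-backup.cmd - push new/updated files to blob storage, then prune old ones.
REM The source folder, account name, container, and key are all placeholders.

REM Copy recursively (/S), verbosely (/V), without prompts (/Y), skipping any
REM file that is not newer than the copy already in the destination (/XO).
AzCopy.exe C:\Here\Be\Important\Things https://yourstoragename.blob.core.windows.net/yourcontainer /destKey:YourSuperLongAzureStorageKey /S /V /Y /XO

REM Remove blobs in the container older than 60 days, including nested blobs.
AzureStorageCleanup.exe -storagename yourstoragename -storagekey YourSuperLongAzureStorageKey -container yourcontainer -mindaysold 60 -recursive
```

Point a daily Windows Task Scheduler job at the script and the whole backup becomes hands-off.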

Painless Azure SQL Database Backups

Windows Azure

While the SQL Database service from Windows Azure provides resiliency and redundancy, there is no built-in backup feature to guard against data loss due to user or program error. The advised way to handle this is to take a three-step approach:

  1. Make a copy of the SQL Database
  2. Back up the database copy to Azure Storage
  3. Remove any outdated backups from blob storage

The process in Windows Azure that backs up a SQL Database to blob storage is not transactionally consistent, which is why the initial database copy is required.
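Concretely, step 1 is Azure SQL’s database-copy feature, which you could issue yourself against the logical server’s master database. A sketch only – the server, credentials, database names, and the `MyDatabase_Copy` name are placeholders, and the tool described below automates this step for you:

```shell
REM Create a transactionally consistent copy of the database (Azure SQL syntax).
REM Server, credentials, and database names below are placeholders.
sqlcmd -S hghtd75jf9.database.windows.net -d master -U DbUser@hghtd75jf9 -P DbPassword -Q "CREATE DATABASE MyDatabase_Copy AS COPY OF MyDatabase"
```

The copy completes asynchronously (progress can be watched in the sys.dm_database_copies view), and it is this copy – not the live database – that then gets exported to blob storage in step 2.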

Richard Astbury has provided an excellent tool, SQLDatabaseBackup, that takes care of the first two steps with little fuss:

SQLDatabaseBackup.exe -datacenter eastus -server hghtd75jf9 -database MyDatabase -user DbUser -pwd DbPassword -storagename mybackups -storagekey YourSuperLongAzureStorageKey -cleanup

The data center and server name can be obtained from the SQL Databases section of the Windows Azure Portal.

SQL Database Information

The Azure storage account name and access key can be accessed in the Storage section of the portal, by clicking the Manage Access Keys button at the bottom of the portal.

Azure Storage Information

Finally, by specifying the -cleanup argument, the utility will delete the SQL Database copy it creates after the backup is successfully created.

And while the pricing for Azure blob storage is very affordable, you may want to automate the process of deleting old backups. I’ve created a very simple utility that does just that. AzureStorageCleanup uses command line arguments that mirror the SQLDatabaseBackup project (as it is meant to complement its use):

AzureStorageCleanup.exe -storagename mybackups -storagekey YourSuperLongAzureStoragekey -container sqlbackup -mindaysold 60

The above command will remove files sixty or more days old from the container “sqlbackup” – the default container used by SQLDatabaseBackup. The details of each file deleted are printed to the console.

By scheduling these two utilities on an available machine you’ll have painless, affordable backups for any of your Windows Azure SQL Databases.
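As a sketch of that scheduling step: the task name, start time, and C:\Tools path below are assumptions, and backup-sql.cmd is a hypothetical wrapper script containing the two utility invocations shown above.

```shell
REM Register a nightly run with Windows Task Scheduler.
REM backup-sql.cmd is a hypothetical wrapper holding the backup and cleanup commands.
schtasks /create /tn "NightlySqlBackup" /sc daily /st 02:00 /tr "C:\Tools\backup-sql.cmd"
```

Once registered, the pair runs every night at 2 AM with no further attention needed.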

Integration Testing ASP.NET MVC Projects with SpecsFor.Mvc


I’ve recently spent some time looking into different frameworks for doing integration testing for ASP.NET MVC projects. One that caught my eye almost immediately was SpecsFor.Mvc. Unlike other solutions I found for writing integration tests, SpecsFor.Mvc lets you write tests in a fashion that is very similar to writing unit tests, using strongly-typed access to your application’s data without having to script or dig into the DOM.

Some nice things that SpecsFor.Mvc provides out-of-the-box:

  • Hosts your ASP.NET MVC project, building a specified configuration of your project and then hosting it automatically under an instance of IIS Express
  • Provides strongly-typed methods for navigating to controllers and actions, checking route results, and filling out and submitting forms
  • Provides access to validation data, including access to the validation summary as well as the validity of each property in your view’s model

SpecsFor.Mvc uses Selenium WebDriver internally in order to drive the browser. You can still access the Selenium IWebDriver interface any time you need to dig further into your page.

Let’s take a look at these things in practice with a few hypothetical tests written against the stock ASP.NET MVC Internet Application.

Starting the Project

To get started, create a new ASP.NET MVC 4 project in Visual Studio.

New ASP.NET MVC 4 Web Application

Select the Internet Application project template, check the option to create a unit test project and click OK.

Internet Application Project Template

Once both the ASP.NET MVC project and the unit test project have been created, right-click the References folder under your Tests project and click Manage NuGet Packages.

Manage Nuget packages

Under Online, search for and install the official SpecsFor.Mvc NuGet package.

SpecsFor.Mvc Nuget Package

Initializing the Hosting Environment

The next thing that we need to add to the Tests project is some code that will initialize the IIS Express hosting environment using the classes provided by SpecsFor.Mvc. To do this, create a new class called MvcAppConfig with the following contents (adjust the namespace as needed):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using SpecsFor.Mvc;
using SpecsForMvcDemo2;

namespace SpecsForMvcDemo2.IntegrationTests
{
    [TestClass]
    public class MvcAppConfig
    {
        private static SpecsForIntegrationHost integrationHost;

        [AssemblyInitialize]
        public static void MyAssemblyInitialize(TestContext testContext)
        {
            var config = new SpecsForMvcConfig();
            config.UseIISExpress()
                .With(Project.Named("SpecsForMvcDemo2"))
                .ApplyWebConfigTransformForConfig("Debug");

            config.BuildRoutesUsing(r => RouteConfig.RegisterRoutes(r));

            config.UseBrowser(BrowserDriver.InternetExplorer);

            integrationHost = new SpecsForIntegrationHost(config);
            integrationHost.Start();
        }

        [AssemblyCleanup]
        public static void MyAssemblyCleanup()
        {
            integrationHost.Shutdown();
        }
    }
}

The class is marked as a TestClass even though there are no explicit methods to test; the AssemblyInitialize and AssemblyCleanup attributes only take effect when their containing class carries the TestClass attribute. With this code in place, MyAssemblyInitialize() will run once before all of the test methods in the project and MyAssemblyCleanup() will run after they all complete.

The code found in MyAssemblyInitialize() is fairly straightforward given the clarity of the SpecsFor.Mvc API. A new SpecsForMvcConfig instance is created and set to use IIS Express with a given project name and configuration name. Next, a call to BuildRoutesUsing is made in order to register the various controllers and actions with SpecsFor.Mvc. Finally, the browser is specified and the configuration is used to start a new instance of the SpecsForIntegrationHost.

The MyAssemblyCleanup() method, paired with the AssemblyCleanup attribute, is used to shut down the integration host after all the tests have completed.

Initializing the Browser

Now that we have code in place to host the ASP.NET MVC site before any tests are run, we need some code in place to create an instance of our MVC application in a browser. Right-click the Tests project and add a new Unit Test.

Add Unit Test

Add the following code to the top of your new UnitTest1 class, before the TestMethod1 declaration:

private static MvcWebApp app;

[ClassInitialize]
public static void MyClassInitialize(TestContext testContext)
{
    app = new MvcWebApp();
}

This will require adding a using statement for SpecsFor.Mvc.

Using SpecsFor.Mvc

This new method, MyClassInitialize(), will run before all of the tests in the new UnitTest1 class. It will create a new instance of the MvcWebApp class, which will launch the browser with your application loaded.

If you go ahead and run the tests for UnitTest1 now you’ll see that two console windows are opened, one for IIS Express hosting the ASP.NET application and one for the Selenium WebDriver that is driving your application. In addition, after the Selenium WebDriver console window is opened, the browser specified in the MvcAppConfig class will be launched.

Note that you may get a prompt from Windows Firewall that you’ll need to allow.

Firewall Alert

Because we haven’t actually written any tests yet, all of these windows will close shortly after they open, but this demonstrates that the few lines of code used to bootstrap the environment are working.

Authentication Tests

Now that all the setup work is done, let’s see what some actual integration tests look like using SpecsFor.Mvc. The first test will ensure that, if a user tries to navigate to the /account/manage route of the ASP.NET MVC application without logging in, they will be redirected to the login screen.

[TestMethod]
public void AccountManage_WithoutSession_RedirectsToLogin()
{
    AccountController.ManageMessageId? messageId = null;
    app.NavigateTo<AccountController>(c => c.Manage(messageId));

    const string returnUrl = "%2fAccount%2fManage";
    app.Route.ShouldMapTo<AccountController>(c => c.Login(returnUrl));
}

This test will require adding two new items to the using statements: YourProjectName.Controllers and MvcContrib.TestHelper (MvcContrib.TestHelper is needed for the call to ShouldMapTo).

And that’s it for the first integration test. I love it. It’s clear, concise, and (aside from the return URL path) it’s strongly typed. The call to NavigateTo will navigate to the URL corresponding to the AccountController and the Manage action, specified in the lambda expression. The call to ShouldMapTo will ensure that the resulting route corresponds to the AccountController and Login action (with the proper ReturnUrl parameter).

Let’s add two more tests to illustrate a few more examples using SpecsFor.Mvc:

[TestMethod]
public void Login_InvalidInput_TriggersValidation()
{
    app.NavigateTo<AccountController>(c => c.Login(string.Empty));
    app.FindFormFor<LoginModel>()
        .Field(f => f.UserName).SetValueTo(string.Empty)
        .Field(f => f.Password).SetValueTo(string.Empty)
        .Submit();

    app.FindFormFor<LoginModel>().Field(f => f.UserName).ShouldBeInvalid();
    app.FindFormFor<LoginModel>().Field(f => f.Password).ShouldBeInvalid();
}

[TestMethod]
public void Login_InvalidCredentials_TriggersValidation()
{
    app.NavigateTo<AccountController>(c => c.Login(string.Empty));
    app.FindFormFor<LoginModel>()
        .Field(f => f.UserName).SetValueTo(Guid.NewGuid().ToString())
        .Field(f => f.Password).SetValueTo(Guid.NewGuid().ToString())
        .Submit();

    Assert.IsTrue(app.ValidationSummary.Text.Contains("incorrect"));
}


These tests will require adding a using statement for YourProjectName.Models so that the LoginModel class can be accessed.

Again, looking at the code, I love the simplicity and clarity in the SpecsFor.Mvc tests. I can use NavigateTo to navigate to my controller and action, and then use FindFormFor to access my view’s model. Finally I can submit the form with easy access to the resulting validation data.

Unfortunately, if you try to run these new tests right now they will fail. The reason is that the SpecsFor.Mvc initialization code compiles and deploys a fresh copy of the ASP.NET MVC project to a TestSite folder within the Debug folder. The App_Data folder contents are not included in the ASP.NET MVC Visual Studio project. So, the database files are not deployed to the TestSite folder and the site itself will YSOD if you try to do anything requiring the database.


To fix this, right-click the App_Data folder in your main MVC project and click Add > Existing Item.

Add Existing Item

Then, add the two files found in your physical App_Data folder to the project (you’ll need to run the MVC site and hit the database once first so that the files exist).

After adding the MDF and LDF files to the project you should be able to run all of the authentication integration tests without error.

The Big But

Pee-Wee Big But

Now this all sounds great, but…

At the time I’m writing this, SpecsFor.Mvc tests run great under third-party test runners such as TestDriven.Net and CodeRush. However, the tests don’t run under Visual Studio’s MSTest runner. Trying to run them using Visual Studio’s built-in test runner will result in a “Build failed” error. The author of SpecsFor.Mvc has reproduced the issue and is hoping to have it fixed within a couple of days.

UPDATE: This issue has since been resolved by Matt and is no longer a problem in version 2.4.0. No more buts!


Using Bootstrap with the DevExpress ASP.NET Data Grid

DX + Bootstrap
I’ve been having a lot of fun lately (and been quite productive) using Bootstrap as a way to lay my sites out before giving them a final visual style. The past three websites I’ve done have used Bootstrap and I love the CSS classes it provides and the speed with which I can develop a nice, consistent, responsive site with it.

In my most recent project I’ve been working on integrating some of the MVC Extensions from DevExpress with good success. However, one quirk had me scratching my head. My customer was generally very happy with the ASP.NET Data Grid but wanted a few additional features, one being the ability for the user to specify the page size for the grid. Easy enough – I thought – it’s just a setting after all.

However, this is what I saw after enabling the setting:

Page Size Item Before

After some poking around using the Developer Tools in Chrome, I was able to identify the CSS in Bootstrap that was interfering with the rendering of the ASP.NET Data Grid. Here is the CSS I used to fix the issue:

/* for playing happy with DX */
td.dxpDropDownButton img {
    max-width: none;
}

td.dxpComboBox input {
    margin-bottom: 0px;
    padding: 0px 0px;
}

With that bit of CSS in place the control now renders properly:

Page Size Item After

Note that there’s also a post available from DevExpress here on fixes for common CSS issues with Bootstrap. However, using that method requires overwriting your bootstrap.css file.

Running KnockoutJS Unit Tests with Chutzpah

In my previous blog post I discussed some of the specifics involved in unit testing JavaScript code that uses KnockoutJS and Web API. This blog post builds on the example discussed there. If you missed that post you can read more about it here.

While unit testing our ViewModel was all working great, the real icing on the cake is getting JavaScript unit testing working without a browser, integrated into Visual Studio. And it would be even better if the test runs and results could be integrated into the Visual Studio Test Explorer. That’s where two separate Visual Studio extensions come into play: the Chutzpah JavaScript Test Runner and the Chutzpah Test Adapter for Visual Studio. Both can be installed directly from the Extensions and Updates window in Visual Studio.

Chutzpah Extensions

The first extension allows you to right-click on your tests.js file and click a new “Run JS Tests” menu item to run your QUnit tests without launching a browser.

Run JS Tests

In order for this to work, though, you must tell Chutzpah where to find the other JS files that your tests require (as it will not be launching your tests.html in a browser). To do this, add the following lines to the top of your tests.js file:

/// <reference path="../Scripts/jquery-1.9.1.js" />
/// <reference path="../Scripts/knockout-2.2.1.debug.js" />
/// <reference path="../Scripts/app/namespace.js" />
/// <reference path="webapiclient.stub.js" />
/// <reference path="../Scripts/app/model.js" />
/// <reference path="../Scripts/app/viewmodel.js" />

This is the same format used by the _references.js file that Visual Studio uses for JavaScript IntelliSense. With these lines in place, you can now right-click the tests.js file and click Run JS Tests, seeing the results right within Visual Studio:

Test Results in Visual Studio

Even cooler, with the Test Adapter installed, you can press Ctrl+R, A to run all of the unit tests in your solution; your QUnit tests will run too, with their results displayed within the Visual Studio testing UI:

Test Results in Test Explorer

There is one catch I’ve found when using Chutzpah as a JavaScript test runner for KnockoutJS projects: if you right-click your tests.js file and click “Run JS Tests in browser”, Chutzpah will automatically generate an HTML file for the JS file and display that in a browser.

Run JS Tests in browser

However, the default template for the HTML used by Chutzpah puts the JavaScript references in the HTML head instead of at the bottom of the body. Without any changes, using the “Run JS Tests in browser” feature from Chutzpah, along with KnockoutJS, will result in an error running tests:

Test Results Wrong Order

To fix this you need to find and edit the HTML template used by Chutzpah. Search your C: drive for the text “Chutzpah” – under Windows Vista and up this should be located in a subfolder of C:\Users\UserName\AppData\Local\Microsoft\VisualStudio. For instance, the path on my system is:


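If digging through Explorer for the folder gets tedious, a quick way to hunt it down from a command prompt (assuming the default per-user Visual Studio extension location mentioned above):

```shell
REM List anything Chutzpah-related under the per-user Visual Studio folder.
REM The exact subfolder name is randomized per install, so search for it.
dir "%LOCALAPPDATA%\Microsoft\VisualStudio" /s /b | findstr /i chutzpah
```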
Once you have found the folder, open the TestFiles\QUnit\qunit.html file:

<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" href="qunit.css" type="text/css" />
    <!-- QUnit and test file script references appear here by default -->
</head>
<body>
    <h1 id="qunit-header">Unit Tests</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
    <div id="qunit-fixture"></div>
</body>
</html>

Move the lines referencing the JS files to the bottom of the body, leaving the CSS reference in the head tag:

<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" href="qunit.css" type="text/css" />
</head>
<body>
    <h1 id="qunit-header">Unit Tests</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
    <div id="qunit-fixture"></div>
    <!-- QUnit and test file script references moved here, after the fixture -->
</body>
</html>

Save your changes and that’s it! You can now delete the tests.html file from the project if you’d like and use Chutzpah to run tests both within the Visual Studio IDE and within the browser.

Run JS Tests in browser - Fixed