RESTful Reporting with Visual Studio Online

My team uses Visual Studio Online for work item tracking and generally speaking it has pretty good baked-in reporting.  I can see an overview of the current sprint, I can see capacity and I can see the burndown.  One area that I’ve always felt it was missing, however, is a way to analyse the accuracy of our estimations.

We actually make pretty good estimations, in general terms: we rarely over-commit and it’s unusual for us to add anything significant to a sprint because we’ve flown through our original stories.  This is based on a completely subjective guess at each person’s capacity and productivity which – over time – has given us a good overall figure that we know works for us.

But is that because our estimates are good, or because our bad estimates are fortuitously averaging out?  Does our subjective capacity figure still work when we take some people out of the team and replace them with others?

This is an area where the reporting within VSO falls down and the limitation boils down to one issue: there is no way to (easily) get the original estimate for a task once you start changing the remaining work.  So how can we get at this information?

Enter the API

I had seen a few articles on the integration options available for VSO but hadn’t really had a chance to look into it in detail until recently.  The API is pretty extensive and you can run pretty much any query through the API that you can access through the UI, along with a bunch of useful team-related info.  Unfortunately the API suffers the same limitation as the VSO portal, but we can work around it using a combination of a little effort and the Work Item History API.

Getting the Data

There is nothing particularly complicated about pulling the relevant data from VSO:

  1. Get a list of sprints using the ClassificationNode API to access iterations
  2. Use Work Item Query Language to build a dynamic query and get the results through the Query API.  This gives us the IDs of each Task in the sprint
  3. For each Task, use the Work Item History API to get a list of all updates
  4. Use the update history to build up a picture of the initial state of each task
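
By way of illustration, fetching the update history for a single Task (step 3) from node might look something like the snippet below.  The endpoint format is taken from the VSO REST API documentation (so check it against the current docs), and AUTH here is a base64-encoded set of alternate credentials.

// Fetch the full update history for one work item (step 3 above).
var https = require('https');

var AUTH = new Buffer('username:password').toString('base64'); // alternate credentials

function getUpdates(account, workItemId, callback) {
  var options = {
    host: account + '.visualstudio.com',
    path: '/DefaultCollection/_apis/wit/workItems/' + workItemId + '/updates?api-version=1.0',
    headers: { 'Authorization': 'Basic ' + AUTH }
  };

  https.get(options, function (response) {
    var body = '';
    response.on('data', function (chunk) { body += chunk; });
    response.on('end', function () {
      // responses are wrapped in a { count, value } envelope; value holds one entry per revision
      callback(JSON.parse(body).value);
    });
  });
}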

Point 4 has a few caveats, however.  The history API only records the fields that have changed in each revision so we don’t always get a complete picture of the Task from a single update.  There are a few scenarios that need to be handled:

  1. Task is created in the target sprint and has a time estimate assigned at the same time.  This is then reduced during the sprint as the Task moves towards completion
  2. Task is created in the target sprint but a time estimate is assigned at a later date before having time reduced as the sprint progresses
  3. Task is created in another sprint or iteration with a time assigned, then moved to the target sprint at a later date
  4. Task is created and worked on in another sprint, then is moved to the target sprint having been partially completed

The simplest scenario (#1 above) would theoretically mean that we could take the earliest update record with the correct sprint.  However, scenario 2 means that the first record in the correct sprint would have a time estimate of zero.  Worse, because we only get changes from the API we wouldn’t have the correct sprint ID on the same revision as the new estimate: it wouldn’t have changed!

The issue with scenario 3 is similar to #2: when the Task is moved to the target sprint the time estimate isn’t changed so isn’t included in the revision.

A simplistic solution that I initially tried was to take the maximum historical time estimate for the task (with the assumption that time goes down as the sprint progresses, not up).  Scenario 4 puts an end to this plan as the maximum time estimate could potentially be outside of the current sprint.  If I move a task into a sprint with only half its work remaining, I don’t really want to see the other half as being completed in this sprint.

Calculating the Original Estimate: Solution

The solution that I eventually went with here was to iterate through every historical change to the work item and store the “current” sprint and remaining work as each change was made.  That allows us to get the amount of remaining work at each update alongside the sprint in which it occurred; from this point, taking a maximum of the remaining work values gives us a good number for the original amount of work that we estimated.
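
As a rough sketch, the walk-through looks something like the function below.  This assumes the standard System.IterationPath and Microsoft.VSTS.Scheduling.RemainingWork field names, and that each update exposes a fields map of oldValue/newValue pairs for whatever changed in that revision (which is how the history API reports changes).

// Walk every revision of a Task, tracking the "current" sprint and remaining work
// as each change is applied, then take the maximum remaining work recorded while
// the Task was in the sprint we care about.
function getOriginalEstimate(updates, targetSprintPath) {
  var currentSprint = null,
      remainingWork = 0,
      originalEstimate = 0;

  updates.forEach(function (update) {
    var fields = update.fields || {};

    if (fields['System.IterationPath']) {
      currentSprint = fields['System.IterationPath'].newValue;
    }
    if (fields['Microsoft.VSTS.Scheduling.RemainingWork']) {
      remainingWork = fields['Microsoft.VSTS.Scheduling.RemainingWork'].newValue || 0;
    }

    // only values recorded whilst the Task was in the target sprint count towards the estimate
    if (currentSprint === targetSprintPath && remainingWork > originalEstimate) {
      originalEstimate = remainingWork;
    }
  });

  return originalEstimate;
}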

It does rely on the assumption that Task estimates aren’t increased after work has started (e.g. start at 2 hours, complete 1 hour, then realise there’s more to do and push the estimate back up to 2), but in that scenario we tend to create new Tasks instead of adjusting existing ones (we did find more work, after all), so this works for us.

Tying it all Together

Once I was able to get at the data it was relatively simple to wrap a reporting service around the implementation.  I went with node & express for the server-side implementation with a sprinkling of angular on top for the client, but visualising the data wasn’t the challenge here!

With this data available I can see a clear breakdown of how different developers affect the overall productivity of the team and can make decisions off the back of this.  I have also seen that having a live dashboard displaying some of the key metrics acts as a bit of a motivator for the people who aren’t getting through the work they expect to, which can’t be a bad thing.

I currently have the following information displayed:

  • Total remaining, completed and in-progress work based on our initial estimates
  • Percentage completion of the work
  • Absolute leaderboard; i.e. who is getting through the most work based on the estimates
  • Adjusted leaderboard; i.e. who is getting through the most work compared to our existing subjective estimates
  • Current tasks

I hope that the VSO API eventually reaches a point that I can pull this information out without needing to write code, but it’s good to know I can get the data if I need it!

Sorting in KnockoutJS with ko.plus

I have just finished working on some new functionality in ko.plus to allow easy sorting of observable collections.  The key features are:

  • Ability to sort collections on properties and property paths
  • Live sorting that reflects changes to observable properties
  • Binding handlers to drop in sorting functionality in tables

The full documentation is available on GitHub (https://github.com/stevegreatrex/ko.plus) but let’s take a look at some of the features here.

Basic Sorting

The sortable functionality is implemented using an extender so it can be applied to any observableArray in one line:

var myCollection = ko.observableArray([3,1,2])
                     .extend({ sortable: true });

// myCollection -> [1, 2, 3]

Without specifying any options the extender will simply sort based on the value using the standard JavaScript sort mechanism.

Property Sorting

A more common use case is to sort based on a property of each object in the collection.

var myCollection = ko.observableArray([
  { id: 1, user: { name: 'Bob' } },
  { id: 2, user: { name: 'Adam' } },
  { id: 3, user: { name: 'Charlie' } }
]).extend({
  sortable: {
    key: 'user.name'
  }
});

The key specified can be any valid property name or property path to access the value on which to sort.  In the above example, the collection will be sorted by the name of the user property on each object.

Observable Properties

If any of the properties in the specified path are observable then they will be used for the sorting, and the sort order will react to any changes in their values.

function ItemModel(name) {
  this.name = ko.observable(name);
}

var myCollection = ko.observableArray([
  new ItemModel('Adam'),
  new ItemModel('Bob'),
  new ItemModel('Charlie')
]).extend({ sortable: { key: 'name' } });

myCollection()[0].name('Dave');

// myCollection -> ['Bob', 'Charlie', 'Dave']

Binding Handlers

ko.plus includes a new binding handler to assist in sorting a collection on different keys (as would be the case in a table).

<table>
	<thead>
		<tr>
			<th data-bind="sortBy: { source: myCollection, key: 'name' }">Name</th>
			<th data-bind="sortBy: { source: myCollection, key: 'age' }">Age</th>
		</tr>
	</thead>
	<tbody data-bind="foreach: myCollection">
		<!-- etc -->
	</tbody>
</table>

The binding handler has 2 effects:

  1. Attach a click handler to sort on the specified key when the element is clicked
  2. Inject a caret as a child of the element to indicate what sorting is being applied, if any
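
For completeness, a minimal view model to sit behind that markup might look like the snippet below – I’m assuming here that the bound collection has the sortable extender applied, as in the earlier examples.

// Illustrative view model for the table above
var viewModel = {
  myCollection: ko.observableArray([
    { name: 'Adam', age: 31 },
    { name: 'Bob', age: 25 }
  ]).extend({ sortable: { key: 'name' } })
};

ko.applyBindings(viewModel);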

Hiding ProxyApi Routes from Web API Help Pages

If you are using ProxyApi and you have tried out the Web API Help Pages feature then you will have noticed a bunch of duplicate routes showing up for all of your actions that look something like this:

GET /api/{proxy}/Controller/Action?foo=bar

ProxyApi needs to be certain of the Route-to-Controller/Action mapping in order to correctly generate the JavaScript proxies, and it achieves this by inserting a custom route at the start of the route table so that it will always take precedence (if matched).

Unfortunately the Web API ApiExplorer finds these routes, maps them to the action and generates a duplicate route for every action in your API!

Getting Rid of the Routes

Thankfully it is very simple to filter these out.  When you add the Web API help pages package to your project it will generate a LOT of code that builds and renders the help page content.  This gives you plenty of entry points in which you can intercept and hide the ProxyApi-specific routes.

For our purposes here we can subclass the ApiExplorer class and filter out any route that contains “{proxy}”.

public class CustomApiExplorer : ApiExplorer
{
  public CustomApiExplorer(HttpConfiguration config) : base(config)
  {}

  public override bool ShouldExploreAction(string actionVariableValue, HttpActionDescriptor actionDescriptor, IHttpRoute route)
  {
    if (route.RouteTemplate.ToLower().Contains("{proxy}"))
      return false;

    return base.ShouldExploreAction(actionVariableValue, actionDescriptor, route);
  }
}

Now we just need to plug this implementation in instead of the default…

//in your help page configuration
config.Services.Replace(typeof(IApiExplorer), new CustomApiExplorer(config));

…and we’re done!

Learn through Doing

Tell me and I forget. 

Teach me and I may remember.

Involve me and I learn.

Everyone learns in their own way but I have always believed that the best way to learn anything is to try it out. Try and fail, if necessary – failing is only learning that your current approach doesn’t work, as Edison might say – but the important thing is to try.

I find this to be particularly true with technology: new languages, new frameworks or new concepts. I can see the value in courses and tutorials but I always find that a technology only really feels familiar to me once I have used it to do something real.

The trick, then, is to find a way of using that exciting new technology you desperately want to learn…

Use it in your Day Job

Where do you do most of your development? Exactly. So if you want to learn AwesomeNewFramework then consider whether or not it would be of use to your company: perhaps it adds something that you can’t do today, or improves on the processes you currently use.

Obviously this is not always possible. Almost all software companies will have established technologies and established products, so changing to a new framework or language is not always practical. People have to be trained, products have to be updated…it might not be worth the effort for an uncertain benefit.

So if we assume that we can’t use it at work, how can we find another project to learn with?

Have an Idea

Working on personal projects or side projects is – in my opinion – always a good idea for any developer. Doing all of your work in a single project or in a single language is a recipe for stagnation.

If you have a great idea for a project and you want to learn a new technology then it is a double benefit. By having a real-world problem to solve you will immediately be forced to look deeper into the technology than you would during any tutorial or course. If someone is walking you through a prepared problem then the information is just handed to you: you may learn how to use function X(...), but you likely won’t learn why you should use it over Z(...), what happens when you leave out the optional parameters, and why the bloody thing won’t work when you need it to!

When you are trying to solve a specific problem you almost inevitably gain a deeper understanding of the code on which it relies.

For me, side projects are always my preferred way of learning. A personal project has no existing structure to confuse or to be misunderstood; it has no limits on what it does or how it works besides those that you decide. Once complete, you – the author – know the story behind every line of code and every design choice.

The only problem is that it does rely on having an idea. Coming up with ideas that are both useful and will not take too long to create is often a sticking point, so how can we come up with real-world scenarios from which to learn without that creative spark?

Solve Someone Else’s Problem

A great way to learn is to teach someone else, and one of the many great things about the internet is that it is full of people who want to be taught!

If you are looking to improve your knowledge of a technology but you don’t have the time to take on a whole project, take a look on Stack Overflow. You’ll find a constant stream of questions from other people – from beginner to advanced level – about the framework or the language you want to learn.

Some of those questions will be beyond your knowledge; some you will be able to answer immediately. In either case, try to write an answer.

It doesn’t matter if there are already answers, or if you think you might need to go and investigate for 15 minutes before you can respond: by finding a solution and then explaining that solution to someone else you will automatically be improving your own knowledge. As an added bonus, you might have helped another poor soul on their way to understanding as well!

Wrapping Up

In summary, you will always learn more by tackling real-world problems rather than hand-picked scenarios from a tutorial. Ideally you want to use your own problems, but if you don’t have access to the right kind of project just now then go help someone else with theirs!

Selenium: Early Thoughts on Test Automation

I have recently been running a trial of Selenium to automate some of our regression and integration testing. I have only been looking into this for a short amount of time so I am by no means an expert but this post contains a few of my observations so far.

For those of you that are not familiar with it, Selenium is a browser automation system that allows you to write integration tests to control a browser and check the response of your site. An example of a Selenium script might look like this:

  1. Open the browser
  2. Browse to the login page
  3. Enter “user 1” in the input with ID #username
  4. Enter “pa$$word” in the input with ID #password
  5. Click the Login button and wait for the page to load
  6. Check that the browser has navigated to the user’s home page

Selenium as a framework comes in 2 flavours: IDE & WebDriver.

Selenium IDE

IDE uses a record-and-playback system to define the script and to run the tests. It is implemented as a Firefox plugin and is therefore limited to Firefox only.

We had run a previous trial using this version where we attempted to have our QA team record and execute scripts as part of functional and regression testing. We found that this had a number of problems and eventually abandoned the trial:

  • Limited to Firefox
  • Has to be run manually (i.e. cannot be run automatically on a build server)
  • Often requires some basic understanding of JavaScript or CSS selectors to work through a problem in a script; this was sometimes beyond the technical knowledge of our QA team
  • Automatically-generated selectors are often extremely fragile. Instead of input#password, it might generate body > div.main-content > form > input:last-child. This meant that a lot of time was lost to maintenance and that the vast majority of “errors” reported by the script were actually incorrect selectors.

We decided that there were too many disadvantages with this option and so moved on to Selenium WebDriver.

Selenium WebDriver

WebDriver requires that all scripts are written in the programming language of your choice. This forced the script-writing task onto our development team instead of QA, but also meant that development best-practices could be employed to improve the quality and maintainability of the scripts.

This version of Selenium also (crucially) supports multiple browsers and can be run as part of an automated nightly build so seemed like a much better fit.

Whilst writing our first few Selenium tests we came up with a few thoughts on how best to structure them.

Use a Base Fixture for Multiple Browser Testing

This is a nice simple one – we did not want to write duplicate tests for all browsers so we made use of the Generic Test Fixture nUnit feature to automatically run our tests in the 4 browsers in which we were interested.

We created a generic base fixture class for all our tests and decorated it with a TestFixture attribute for each driver type. This instructs nUnit to instantiate and run the class for each of the specified type arguments, which in turn means any test we write in such a fixture will automatically be run against each browser:

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(FirefoxDriver))]
public abstract class SeleniumTestFixtureBase<TWebDriver>
	where TWebDriver : IWebDriver
{
	protected IWebDriver Driver { get; private set; }

	[SetUp]
	public void CreateDriver()
	{
		this.Driver = DriverFactory.Instance
			.CreateWebDriver<TWebDriver>();
			
		//...
	}
}
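
The DriverFactory referenced above is our own helper rather than part of Selenium, and isn’t shown in this post; a minimal stand-in (ignoring the timeout, window size and locale configuration you would normally put there) could be as simple as:

public class DriverFactory
{
	public static readonly DriverFactory Instance = new DriverFactory();

	public IWebDriver CreateWebDriver<TWebDriver>()
		where TWebDriver : IWebDriver
	{
		// a real implementation would also configure timeouts, window size, locale etc.
		return (IWebDriver)Activator.CreateInstance(typeof(TWebDriver));
	}
}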

This does have some disadvantages when it comes to debugging tests as there are always 4 tests with the same method name but this has only been a minor inconvenience so far – the browser can be determined from the fixture class name where needed.

Wrap Selectors in a “Page” Object

The biggest problem with our initial trial of “record and playback” automated tests was the fragility of our selectors. Tests would regularly fail when manual testing would demonstrate the feature clearly working, and this was almost always due to a subtle change in the DOM structure.

If your first reaction to a failing test is to say “the test is probably broken” then your tests are useless!

A part of the cause was that the “record” part of the feature does not always select the most sensible selector to identify the element. We assumed that by hand-picking selectors we would automatically improve the robustness (is that a word?) of our selectors, but in the case where they did change we still did not want to have to update a lot of places. Similarly, we did not want to have to work out what a selector was trying to identify when debugging tests.

Our solution to this was to create a “Page” object to wrap the selectors for each page on the site in meaningfully named methods. For example, our LoginPage class might look like this:

public class LoginPage
{
	private IWebDriver _driver;

	public LoginPage(IWebDriver driver)
	{
		_driver = driver;
	}

	public IWebElement UsernameInput()
	{
		return _driver.FindElement(By.CssSelector("#userName"));
	}

	public IWebElement PasswordInput()
	{
		return _driver.FindElement(By.CssSelector("#Password"));
	}
}

This has a number of advantages:

  • Single definition of the selector for a given DOM element
    We only ever define each element once
  • Page inheritance
    We can create base pages that expose page elements which appear on multiple pages (e.g. the main navigation or the user settings menu)
  • Creating helper methods
    When we repeat blocks of functionality (e.g. enter [username], enter [password] then click Submit) we are able to encapsulate them on the Page class instead of in private methods within the test fixture – see the sketch below.
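
For example, a log-in helper added to the LoginPage class above might look like this – the submit button selector is purely illustrative:

public void LogIn(string username, string password)
{
	UsernameInput().SendKeys(username);
	PasswordInput().SendKeys(password);

	// illustrative selector - use whatever identifies your submit button
	_driver.FindElement(By.CssSelector("button[type='submit']")).Click();
}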

We also created factory extension methods on the IWebDriver interface to improve readability:

public static class LoginPageFactory
{
	public static LoginPage LoginPage(this IWebDriver driver)
	{
		return new LoginPage(driver);
	}
}

//...
this.Driver.LoginPage().UsernameInput().Click();

Storing Environment Information

We decided to store our environmental variables in code to improve reuse and readability. This is only a minor point but we did not want to have any URLs, usernames or configuration options hard coded in the tests.

We structured our data so we could reference variables as below:

TestEnvironment.Users.AdminUsers[0].Username
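
The classes behind that call aren’t shown in this post, but a rough sketch of the kind of structure that supports it (all names illustrative) would be:

public static class TestEnvironment
{
	public static string BaseUrl = "https://test.example.com";

	public static class Users
	{
		public static readonly TestUser[] AdminUsers =
		{
			new TestUser { Username = "admin1", Password = "..." }
		};
	}
}

public class TestUser
{
	public string Username { get; set; }
	public string Password { get; set; }
}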

Switching between Debug & Release Mode

By storing environment variables in code we created another problem: how to switch between running against the test environment and against the local developer environment.

We solved this by loading certain changeable elements of our configuration from .config files based on a #DEBUG flag.
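
In practice that just means a conditional compilation block somewhere in the test setup. A sketch of the idea, reusing the illustrative TestEnvironment class from above and some made-up appSettings keys:

public static class TestEnvironmentLoader
{
	public static void Load()
	{
#if DEBUG
		// local developer environment
		TestEnvironment.BaseUrl = ConfigurationManager.AppSettings["debugBaseUrl"];
#else
		// shared test environment
		TestEnvironment.BaseUrl = ConfigurationManager.AppSettings["testBaseUrl"];
#endif
	}
}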

Other Gotchas

  • The 64-bit IE driver for Selenium is incredibly slow! Uninstall it and install the 32-bit one
  • Browser locale can – in most cases – be set using a flag when creating the driver. One exception to this is Safari for Windows, which does not seem to allow you to change the locale at all – even through Safari itself!

Summary

We are still in the early phases of this trial but it is looking like we will be able to make Selenium automation a significant part of our testing strategy going forward.

Hopefully these will help out other people. If you have any suggestions of your own then leave them in the comments or message me on Twitter (@stevegreatrex).

Chrome Dev Tools & Inline Dynamic JavaScript

If you are using Chrome dev tools to debug your application then you might have come across this situation.  If you dynamically load some content, and that content contains an inline <script> tag, then annoyingly you can’t see that script under Source in the developer console.

Thankfully there’s a nice simple solution to the problem: insert the following tag at the end of your inline script:

<script>
    //...
    //@ sourceURL=MyInlineScript.js
</script>

This will make the script appear in the Sources list under the “No Domain” section.

Remember that if the inline script is part of a Razor view then you will need to escape the @:

<script>
    //...
    //@@ sourceURL=MyInlineScript.js
</script>

Are Your Users “Sure”?

Are You Sure?

You’ve probably been asked the question a hundred times already today: are you sure you want to do that?

Are you sure you want to delete that file?  Are you sure you want to log out?  To change that extension?  To update this setting?  Are you sure about anything any more?

One of my pet hates is working with an application that is constantly doubting me.  “Yes I’m sure – that’s why I clicked the button!”

Based on personal experience and absolutely no real data, I estimate that I answer “no” to that question…once a week?  A month?  Not often, certainly, and yet over and over again I have to keep expressing my certainty about every little decision.

Just Stop Asking

I’ve been working on a new version of an existing web application recently and one of the key UX decisions has been to stop asking unnecessary questions.  If the user says “jump” then don’t ask if they’re sure; don’t even ask them “how high”…just do it!  We want to trust our users to know what they are doing – wherever possible, we want to do the most obvious thing first and ask questions later.

But What About…

Ah, yes, good point.  Quite often, users don’t know what they’re doing.  And quite often, they’re going to do something wrong.  This, presumably, is the reason that we are constantly asked if we really want to do something: if we complain later then at least the developer can say “well you did say you were sure…”

So how can we account for the (hopefully rare) mistakes and still not get in the way of the average user?

Our approach is to make everything undoable.  Make the consequences of even the serious-sounding actions (“delete this forever?”) recoverable if they happen by accident.  In fact…

Make it as difficult as possible to seriously cock things up

When the user deletes something, give them a little notification saying “Great, that’s gone… unless you click this: [Undo]”.  It doesn’t need to be a big notification – just something that they’ll notice if they’re sat in a cold sweat thinking they’ve just thrown away the last 4 hours’ worth of work.

Promote Exploration

There’s another up-side to this approach – it encourages users to play around and explore.  If they are confident that they cannot accidentally break something permanently then – hopefully – they will be happier to try something and see what happens.

In a lot of applications, “fear of breaking it” can be a pretty serious barrier to adoption.  It makes sense as well – if someone is not too computer-literate then they probably should be worried about doing something wrong.  But if, when they accidentally delete next month’s payroll, they have a big friendly button saying “don’t worry – just click here and everything will be back to normal” then you hope that they will worry less about clicking that button next time.