Microsoft Band 2: Micro Review

I, like most of the human race, started 2016 with an absolute conviction to improve my fitness and I, like a decent percentage of people, decided that the best way to fool myself into following through was to invest in a fitness tracker.

I had looked into various options in the past but never really felt that there was a product out there that ticked all the boxes.  When Microsoft released the second iteration of their fitness band – promising sleep monitoring, GPS run tracking and more sensors than I know what to do with – I thought it was time to take a punt.

I have now been using the Microsoft Band 2 for two weeks and felt it was about time to share my thoughts.

Comfort

My biggest concern with any fitness tracker was always that it would not be comfortable enough for me to actually wear it.  The first couple of days after switching from a traditional watch certainly felt a bit strange: the band is bulkier than anything I had worn before and would quite often get caught on cuffs, but it didn’t take long before it felt pretty comfortable.

The Microsoft Band has been designed so that you wear the “watch face” on the inside of your wrist, and once you adjust to this it feels very natural.  The text layout (wider than it is tall) is almost impossible to read comfortably with the face on the outside of the wrist, and the new position takes very little time to get used to.

You can have the watch display on constantly but I have gone with the “rotate on” mode where you flick your wrist to light up the display.  This works well but has a little more delay than I’d like when quickly checking the time.

Sleep

Sleep tracking was one of the key features for me as I have always been interested in the quality of my sleep.  The Microsoft Band promised to deliver in-depth monitoring as well as an “optimum wake up” alarm and so far I have been very impressed.  The app (running on Android) gives genuinely interesting feedback on how I have slept every morning, along with recommendations on how to improve the quality of my rest (e.g. “you are taking a long time to fall asleep; try avoiding mental stress late at night”).

The alarm appears to work very well; I have not been using it for long but so far it seems to wake me up feeling more awake than a normal alarm would.  It also has the significant benefit of being silent – you are woken up by the band vibrating on your wrist – which has proven very popular with my wife when I have an early start!

Running

Many years ago I treated myself to a Garmin GPS watch for running.  It was about the size of a small matchbox strapped to your arm and came with a chest strap to track your heart rate whilst running.  At the time it was very impressive and I probably shouldn’t be surprised that the Microsoft Band has improved upon 5-year-old technology, but the step up seems very marked.

The band tracks your heart rate, pace, distance (with or without GPS) and gives you up to 7 customisable data points on your wrist while you run.  It seems pretty accurate as these things go, and the feedback – both live and through the app after your run – is useful.  It integrates with various other apps like RunKeeper and MyFitnessPal as well, so your pace, distance and calorie burn records are still all replicated where they always were before.

A couple of tips for the first time you go out though: firstly, wait for the band to get a GPS lock before you hit the road.  It claims to be able to pick up GPS as you run, but it failed to do so over a quick 5k when I tried.  Secondly, I would recommend avoiding long sleeves when running.  The inside-of-the-wrist setup works very well if you’re in short sleeves, but trying to pull your sleeve up to view the numbers on the inside felt very uncomfortable when I was out running.

Smart Watch Features

Compared to things like the Fitbit or Jawbone offerings, the Microsoft Band has a number of smart-watch-esque features that seemed pretty tempting to me when I bought it.  You can have SMS, email, call, calendar and other notifications delivered to your wrist over Bluetooth and generally this works really well.  If you turn on “other notifications” it can get a little bit silly – on one occasion I received by-the-minute updates on the charging status of my phone – but you have the option to filter which apps are able to push notifications to the band, so you can make it useful.  It’s a nice feature to have when there is no native support for things like WhatsApp or Slack: you can still get the notifications on your wrist; you just lose the ability to reply.

For things like calls, SMS and email the ability to send canned responses is surprisingly useful when sat in meetings.  You can customise the available replies and – if you really want – you can even type out custom responses with an on-band keyboard (though I wouldn’t recommend it for anything more than a word or three).

The only issue I have with the smart watch functionality is that it seems to make a real difference to the battery life.  It’s nice to have, but I bought this as a fitness tracker and find myself turning off the extra features to get a few extra hours of power.  That leads me on to…

Battery Life

Microsoft advertise the Band 2 as having 48 hours of battery life and whilst I wouldn’t say this is completely off the mark, it does seem a little generous.  If I have the smart watch features turned on then I am lucky to get a day and a half of wear out of it.

With my phone I have fallen into the pattern of leaving it on charge overnight but the complication with the band is that I want to be wearing it overnight for the sleep tracking.  This removes the natural time that you would charge the device and makes the planning of charging a bit of a challenge.

What makes life a lot easier is that the band charges incredibly quickly.  It only takes around half an hour to get up to full charge from close to zero so I find myself falling into a pattern of plugging in the band whilst I get dressed in the morning.  Couple that with the odd ad-hoc charge at my desk and I’ve not had any real down time.  As a system it’s just about working, but it does feel like I may be missing out on some of the features in the interest of keeping the thing running.

Summary

Overall I’m very happy with the band and would gladly recommend it.  There are a couple of rough edges to be smoothed out but they don’t take away from the core functionality of a fitness band, and for that specific job it is doing everything I can ask of it.

The integration with other apps is nicely done and works very well.  The API for the cloud data store looks promising as well, though that is an investigation for another day…

Maintaining Context in TypeScript classes

TypeScript is generally pretty good at persisting this in functions but there are certain circumstances where you can (either accidentally or deliberately) get a class function to run in the wrong context.

class Example {
  private name = 'class context';

  public printName() {
    console.log(this.name);
  }
}

var example = new Example();
example.printName();
// => 'class context'
example.printName.call({ 
  name: 'wrong context' 
});
// => 'wrong context'

The most common scenario where I have accidentally caused this behaviour is where a function is bound to a click handler in Knockout and is executed in the context of the DOM element instead of the containing class.

In JavaScript you can always use myFunction.bind(this) to force the context but having to do that in the TypeScript constructor feels messy…

class Example {
  private name = 'class context';

  constructor() {
    this.printName = this._printName.bind(this);
  }

  private _printName() {
    console.log(this.name);
  }
}

var example = new Example();

example.printName();
// => 'class context'
example.printName.call({ 
  name: 'wrong context' 
});
// => 'class context'

Thankfully there’s an easy way to get TypeScript to play ball.  Instead of defining the function as a class method, assign an arrow function to a public class property:

class Example {
  private name = 'class context';

  public printName = () => {
    console.log(this.name);
  }
}

var example = new Example();

example.printName();
// => 'class context'
example.printName.call({ 
  name: 'wrong context' 
});
// => 'class context'

Much neater!
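
For the curious, this works because the compiler moves the property assignment into the constructor and captures the instance in a closure.  A rough sketch of the JavaScript emitted for an ES5 target (hand-written here for illustration, not actual compiler output):

```javascript
// The arrow-function property becomes a per-instance assignment in the
// constructor; `_this` captures the instance so the context can never
// be overridden by .call/.apply/.bind.
var Example = /** @class */ (function () {
  function Example() {
    var _this = this;
    this.name = 'class context';
    this.printName = function () {
      console.log(_this.name);
    };
  }
  return Example;
}());

var example = new Example();
example.printName.call({ name: 'wrong context' });
// => 'class context'
```

The trade-off is that each instance gets its own copy of the function rather than sharing one on the prototype, which is worth knowing about if you create a large number of instances.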

Individual isEditable support in ko.plus

ko.plus has supported both individual (ko.editable(...)) and object-level (ko.makeEditable(target)) editable implementations for some time but the 2 implementations differ slightly. The object-level version supports a per-object isEditable value to enable or disable the beginEdit call but this has previously been absent from the individual implementation.

From version 0.0.25 this is now supported.

var value = ko.editable();
value.isEditable = ko.observable(true); //or ko.computed, or raw value
value.beginEdit(); //has no effect
value.isEditing(); // --> false

As with the object-level version, any one of a raw value, observable or computed is supported, and the value will be re-evaluated whenever beginEdit is called.
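
Supporting all three forms means the guard has to unwrap the value on every call.  A dependency-free sketch of the kind of check involved (my illustration of the idea, not the actual ko.plus source):

```javascript
// isEditable may be absent, a raw value, or a function (observable or
// computed both unwrap by invocation); absent means "editable".
function canBeginEdit(isEditable) {
  if (isEditable === undefined || isEditable === null) return true;
  return typeof isEditable === 'function' ? !!isEditable() : !!isEditable;
}
```

Unwrapping at call time (rather than once at setup) is what lets an observable or computed isEditable change its mind between edits.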

Enjoy!

Faking Mouse Events in D3

D3 is a great library but one of the challenges I have found is with unit testing anything based on event handlers.

In my specific example I was trying to show a tooltip when the user hovered over an element.

hoverTargets
 .on('mouseover', showTooltip(true))
 .on('mousemove', positionTooltip)
 .on('mouseout', closeTooltip);

D3 doesn’t currently have the ability to trigger a mouse event so in order to test the behaviour I have had to roll my own very simple helper to invoke these events.

$.fn.triggerSVGEvent = function(eventName) {
  // Create and dispatch a synthetic event as if it had come from the browser
  var event = document.createEvent('SVGEvents');
  event.initEvent(eventName, true, true);
  this[0].dispatchEvent(event);
  return $(this);
};

This is implemented as a jQuery plugin that directly invokes the event as if it had come from the browser.

You can use it as below:

$point
  .triggerSVGEvent('mouseover')
  .triggerSVGEvent('mousemove');

It will probably change over time as I need to do more with it but for now this works as a way to test my tooltip behaviour.
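
As an aside, the createEvent/initEvent pair used above has since been deprecated in favour of event constructors, so a more modern sketch of the same helper (untied to jQuery, and usable with any EventTarget including SVG elements) might look like this:

```javascript
// Dispatch a synthetic event using the modern Event constructor instead
// of the deprecated document.createEvent/initEvent pair.
function triggerEvent(target, eventName) {
  var event = new Event(eventName, { bubbles: true, cancelable: true });
  target.dispatchEvent(event);
  return target;
}
```

In a test you would call it the same way: `triggerEvent(point, 'mouseover')` followed by assertions on the tooltip state.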

Custom Operation Names with Swashbuckle 5.0

This is a post about Swashbuckle –  a .NET library that seamlessly adds Swagger support to WebAPI projects.  If you aren’t familiar with Swashbuckle then stop reading right now and go look into it – it’s awesome.

Swashbuckle has recently released version 5.0 which includes (among other things) a ridiculous array of ways to customise your generated swagger spec.

One such customisation point allows you to change the operationId (and other properties) manually against each operation once the auto-generator has done its thing.

Why Bother?

Good question.  For me, I decided to bother for one very specific reason: swagger-js.  This library can auto-generate a nice accessor object based on any valid swagger specification with almost no effort, whilst doing lots of useful things like handling authorization and parsing responses.

swagger-js uses the operationId property for method names and the default ones coming out of Swashbuckle weren’t really clear or consistent enough.

Injecting an Operation Filter

The means for customising operations lies with the IOperationFilter interface exposed by Swashbuckle.

public interface IOperationFilter
{
  void Apply(Operation operation, 
    SchemaRegistry schemaRegistry, 
    ApiDescription apiDescription);
}

When implemented and plugged-in (see below), the Apply method will be called for each operation located by Swashbuckle and allows you to mess around with its properties.  We have a very specific task in mind so we can create a SwaggerOperationNameFilter class for our purpose:

public class SwaggerOperationNameFilter : IOperationFilter
{
  public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
  {
    operation.operationId = "???";
  }
}

When you installed the Swashbuckle NuGet package it will have created a SwaggerConfig file in your App_Start folder.  In this file you will likely have a long and well-commented explanation of all the available configuration points, but to keep things simple we can insert the reference to our filter at the end:

GlobalConfiguration.Configuration
  .EnableSwagger(c =>
  {
    //...
    c.OperationFilter<SwaggerOperationNameFilter>();
  });

Getting the Name

At this point you have a lot of flexibility in how you generate the name for the operation.  The parameters passed in to the Apply method give you access to a lot of contextual information but in my case I wanted to manually specify the name of each operation using a custom attribute.

The custom attribute itself contains a single OperationId property…

[AttributeUsage(AttributeTargets.Method)]
public sealed class SwaggerOperationAttribute : Attribute
{
  public SwaggerOperationAttribute(string operationId)
  {
    this.OperationId = operationId;
  }

  public string OperationId { get; private set; }
}

…and can be dropped onto any action method as required…

[SwaggerOperation("myCustomName")]
public async Task<HttpResponseMessage> MyAction()
{
  //…
}

Once the attributes are in place we can pull the name from our filter using the ActionDescriptor:

operation.operationId = apiDescription.ActionDescriptor
  .GetCustomAttributes<SwaggerOperationAttribute>()
  .Select(a => a.OperationId)
  .FirstOrDefault();

Voila!

RESTful Reporting with Visual Studio Online

My team uses Visual Studio Online for work item tracking and generally speaking it has pretty good baked-in reporting.  I can see an overview of the current sprint, I can see capacity and I can see the burndown.  One area that I’ve always felt it was missing, however, is a way to analyse the accuracy of our estimations.

We actually make pretty good estimations, in general terms: we rarely over-commit and it’s unusual for us to add anything significant to a sprint because we’ve flown through our original stories.  This is based on a completely subjective guess at each person’s capacity and productivity which – over time – has given us a good overall figure that we know works for us.

But is that because our estimates are good, or because our bad estimates are fortuitously averaging out?  Does our subjective capacity figure still work when we take some people out of the team and replace them with others?

This is an area where the reporting within VSO falls down and the limitation boils down to one issue: there is no way to (easily) get the original estimate for a task once you start changing the remaining work.  So how can we get at this information?

Enter the API

I had seen a few articles on the integration options available for VSO but hadn’t really had a chance to look into it in detail until recently.  The API is pretty extensive: you can run more or less any query through the API that you can access through the UI, along with a bunch of useful team-related info.  Unfortunately the API suffers the same limitation as the VSO portal, but we can work around it using a combination of a little effort and the Work Item History API.

Getting the Data

There is nothing particularly complicated about pulling the relevant data from VSO:

  1. Get a list of sprints using the ClassificationNode API to access iterations
  2. Use Work Item Query Language to build a dynamic query and get the results through the Query API.  This gives us the IDs of each Task in the sprint
  3. For each Task, use the Work Item History API to get a list of all updates
  4. Use the update history to build up a picture of the initial state of each task

Point 4 has a few caveats, however.  The history API only records the fields that have changed in each revision so we don’t always get a complete picture of the Task from a single update.  There are a few scenarios that need to be handled:

  1. Task is created in the target sprint and has a time estimate assigned at the same time.  This is then reduced during the sprint as the Task moves towards completion
  2. Task is created in the target sprint but a time estimate is assigned at a later date before having time reduced as the sprint progresses
  3. Task is created in another sprint or iteration with a time assigned, then moved to the target sprint at a later date
  4. Task is created and worked on in another sprint, then is moved to the target sprint having been partially completed

The simplest scenario (#1 above) would theoretically mean that we could take the earliest update record with the correct sprint.  However, scenario 2 means that the first record in the correct sprint would have a time estimate of zero.  Worse, because we only get changes from the API we wouldn’t have the correct sprint ID on the same revision as the new estimate: it wouldn’t have changed!

The issue with scenario 3 is similar to #2: when the Task is moved to the target sprint the time estimate isn’t changed so isn’t included in the revision.

A simplistic solution that I initially tried was to take the maximum historical time estimate for the task (with the assumption that time goes down as the sprint progresses, not up).  Scenario 4 puts an end to this plan as the maximum time estimate could potentially be outside of the current sprint.  If I move a task into a sprint with only half its work remaining, I don’t really want to see the other half as being completed in this sprint.

Calculating the Original Estimate: Solution

The solution that I eventually went with here was to iterate through every historical change to the work item and store the “current” sprint and remaining work as each change was made.  That allows us to get the amount of remaining work at each update alongside the sprint in which it occurred; from this point, taking a maximum of the remaining work values gives us a good number for the original amount of work that we estimated.

It does rely on the assumption that Task estimates aren’t increased after work has started (e.g. start at 2 hours, get 1 hour done, then realise there’s more work and increase back to 2) but in that scenario we tend to create new tasks instead of adjusting existing ones (we did find more work, after all), which works for us.
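
The walk through the revision history can be sketched in a few lines of JavaScript.  The revision shape used here (sprint and remainingWork fields) is illustrative rather than the real VSO payload:

```javascript
// Walk revisions in order, carrying forward the "current" sprint and
// remaining work (the history API only includes changed fields), and
// take the maximum remaining work seen while in the target sprint.
function originalEstimate(revisions, targetSprint) {
  var sprint = null;
  var remaining = 0;
  var max = 0;
  revisions.forEach(function (rev) {
    if (rev.sprint !== undefined) sprint = rev.sprint;
    if (rev.remainingWork !== undefined) remaining = rev.remainingWork;
    if (sprint === targetSprint && remaining > max) max = remaining;
  });
  return max;
}
```

This handles scenario 4 correctly: a task that arrives in the target sprint with half its work already done only ever contributes the remaining half to that sprint's estimate.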

Tying it all Together

Once I was able to get at the data it was relatively simple to wrap a reporting service around the implementation.  I went with Node and Express for the server-side implementation with a sprinkling of Angular on top for the client, but visualising the data wasn’t the challenge here!

With this data available I can see a clear breakdown of how different developers affect the overall productivity of the team and can make decisions off the back of this.  I have also seen that having a live dashboard displaying some of the key metrics acts as a bit of a motivator for the people who aren’t getting through the work they expect to, which can’t be a bad thing.

I currently have the following information displayed:

  • Total remaining, completed and in-progress work based on our initial estimates
  • Percentage completion of the work
  • Absolute leaderboard; i.e. who is getting through the most work based on the estimates
  • Adjusted leaderboard; i.e. who is getting through the most work compared to our existing subjective estimates
  • Current tasks

I hope that the VSO API eventually reaches a point that I can pull this information out without needing to write code, but it’s good to know I can get the data if I need it!

Sorting in KnockoutJS with ko.plus

I have just finished working on some new functionality in ko.plus to allow easy sorting of observable collections.  The key features are:

  • Ability to sort collections on properties and property paths
  • Live sorting that reflects changes to observable properties
  • Binding handlers to drop in sorting functionality in tables

The full documentation is available on GitHub (https://github.com/stevegreatrex/ko.plus) but let’s take a look at some of the features here.

Basic Sorting

The sortable functionality is implemented using an extender so it can be applied to any observableArray in one line:

var myCollection = ko.observableArray([3,1,2])
                     .extend({ sortable: true });

// myCollection -> [1, 2, 3]

Without specifying any options the extender will simply sort based on the value using the standard JavaScript sort mechanism.

Property Sorting

A more common use case is to sort based on a property of each object in the collection.

var myCollection = ko.observableArray([
  { id: 1, user: { name: 'Bob' } },
  { id: 2, user: { name: 'Adam' } },
  { id: 3, user: { name: 'Charlie' } }
]).extend({
  sortable: {
    key: 'user.name'
  }
});

The key specified can be any valid property name or property path to access the value on which to sort.  In the above example, the collection will be sorted by the name of the user property on each object.
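
A property path like this can be resolved with a simple reduce down the object graph.  The sketch below is my illustration of the idea rather than the actual ko.plus source; it also unwraps any observables (plain functions) it encounters along the way:

```javascript
// Resolve a path like 'user.name' against an item, calling through any
// observables (functions) on the way and bailing out on missing links.
function resolveSortKey(item, path) {
  return path.split('.').reduce(function (value, key) {
    if (value === null || value === undefined) return value;
    var next = value[key];
    return typeof next === 'function' ? next() : next;
  }, item);
}
```

With a helper like this, the comparator just resolves the key on both items and compares the results.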

Observable Properties

If any of the properties in the specified path are observable then they will be used for the sorting, and the sort order will react to any changes to those properties.

function ItemModel(name) {
  this.name = ko.observable(name);
}

var myCollection = ko.observableArray([
  new ItemModel('Adam'),
  new ItemModel('Bob'),
  new ItemModel('Charlie')
]).extend({
  sortable: { key: 'name' }
});

myCollection()[0].name('Dave');

// myCollection -> ['Bob', 'Charlie', 'Dave']

Binding Handlers

ko.plus includes a new binding handler to assist in sorting a collection on different keys (as would be the case in a table).

<table>
	<thead>
		<tr>
			<th data-bind="sortBy: { source: myCollection, key: 'name' }">Name</th>
			<th data-bind="sortBy: { source: myCollection, key: 'age' }">Age</th>
		</tr>
	</thead>
	<tbody data-bind="foreach: myCollection">
		<!-- etc -->
	</tbody>
</table>

The binding handler has 2 effects:

  1. Attach a click handler to sort on the specified key when the element is clicked
  2. Inject a caret as a child of the element to indicate what sorting is being applied, if any
