Hiding ProxyApi Routes from Web API Help Pages

If you are using ProxyApi and you have tried out the Web API Help Pages feature then you will have noticed a bunch of duplicate routes showing up for all of your actions that look something like this:

GET /api/{proxy}/Controller/Action?foo=bar

ProxyApi needs to be certain of the Route-to-Controller/Action mapping in order to correctly generate the JavaScript proxies, and it achieves this by inserting a custom route at the start of the route table so that it will always take precedence (if matched).

Unfortunately the Web API ApiExplorer finds these routes, maps them to the action and generates a duplicate route for every action in your API!

Getting Rid of the Routes

Thankfully it is very simple to filter these out.  When you add the Web API help pages package to your project it will generate a LOT of code that builds and renders the help page content.  This gives you plenty of entry points at which you can intercept and hide the ProxyApi-specific routes.

For our purposes here we can subclass the ApiExplorer class and filter out any route that contains “{proxy}”.

public class CustomApiExplorer : ApiExplorer
{
  public CustomApiExplorer(HttpConfiguration config) : base(config)
  {}

  public override bool ShouldExploreAction(string actionVariableValue, HttpActionDescriptor actionDescriptor, IHttpRoute route)
  {
    // ProxyApi registers its routes with a {proxy} segment, so hide anything that matches
    if (route.RouteTemplate.ToLower().Contains("{proxy}"))
      return false;

    return base.ShouldExploreAction(actionVariableValue, actionDescriptor, route);
  }
}

Now we just need to plug this implementation in instead of the default…

//in your help page configuration
config.Services.Replace(typeof(IApiExplorer), new CustomApiExplorer(config));
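For context: with the standard Web API help page package this call normally sits in the generated HelpPageConfig class. A minimal sketch, assuming the default Areas/HelpPage layout that the package creates:

using System.Web.Http;
using System.Web.Http.Description;

//Areas/HelpPage/App_Start/HelpPageConfig.cs (generated by the help page package)
public static class HelpPageConfig
{
    public static void Register(HttpConfiguration config)
    {
        //swap in our filtering explorer before anything queries IApiExplorer
        config.Services.Replace(typeof(IApiExplorer), new CustomApiExplorer(config));

        //...the rest of the generated help page configuration
    }
}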

…and we’re done!

Learn through Doing

Tell me and I forget. 

Teach me and I may remember.

Involve me and I learn.

Everyone learns in their own way but I have always believed that the best way to learn anything is to try it out. Try and fail, if necessary – failing is only learning that your current approach doesn’t work, as Edison might say – but the important thing is to try.

I find this to be particularly true with technology: new languages, new frameworks or new concepts. I can see the value in courses and tutorials but I always find that a technology only really feels familiar to me once I have used it to do something real.

The trick, then, is to find a way of using that exciting new technology you desperately want to learn…

Use it in your Day Job

Where do you do most of your development? Exactly. So if you want to learn AwesomeNewFramework then consider whether or not it would be of use to your company: perhaps it adds something that you can’t do today, or improves on the processes you currently use.

Obviously this is not always possible. Almost all software companies will have established technologies and established products, so changing to a new framework or language is not always practical. People have to be trained, products have to be updated…it might not be worth the effort for an uncertain benefit.

So if we assume that we can’t use it at work, how can we find another project to learn with?

Have an Idea

Working on personal projects or side projects is – in my opinion – always a good idea for any developer. Doing all of your work in a single project or in a single language is a recipe for stagnation.

If you have a great idea for a project and you want to learn a new technology then it is a double benefit. By having a real-world problem to solve you will immediately be forced to look deeper into the technology than you would during any tutorial or course. If someone is walking you through a prepared problem then the information is just handed to you: you may learn how to use function X(...), but you likely won’t learn why you should use it over Z(...), what happens when you leave out the optional parameters, and why the bloody thing won’t work when you need it to!

When you are trying to solve a specific problem you almost inevitably gain a deeper understanding of the code on which it relies.

For me, side projects are always my preferred way of learning. A personal project has no existing structure to confuse you or to be misunderstood; it has no limits on what it does or how it works besides those that you decide. Once complete, you – the author – know the story behind every line of code and every design choice.

The only problem is that it does rely on having an idea. Coming up with ideas that are both useful and will not take too long to create is often a sticking point, so how can we come up with real-world scenarios from which to learn without that creative spark?

Solve Someone Else’s Problem

A great way to learn is to teach someone else, and one of the many great things about the internet is that it is full of people who want to be taught!

If you are looking to improve your knowledge of a technology but you don’t have the time to take on a whole project, take a look on Stack Overflow. You’ll find a long list of other people who ask a constant stream of questions – from beginner to advanced levels – about the framework or the language you want to learn.

Some of those questions will be beyond your knowledge; some you will be able to answer immediately. In either case, try to write an answer.

It doesn’t matter if there are already answers, or if you think you might need to go and investigate for 15 minutes before you can respond: by finding a solution and then explaining that solution to someone else you will automatically be improving your own knowledge. As an added bonus, you might have helped another poor soul on their way to understanding as well!

Wrapping Up

In summary, you will always learn more by tackling real-world problems rather than hand-picked scenarios from a tutorial. Ideally you want to use your own problems, but if you don’t have access to the right kind of project just now then go help someone else with theirs!

Selenium: Early Thoughts on Test Automation

I have recently been running a trial of Selenium to automate some of our regression and integration testing. I have only been looking into this for a short amount of time so I am by no means an expert but this post contains a few of my observations so far.

For those of you that are not familiar with it, Selenium is a browser automation system that allows you to write integration tests to control a browser and check the response of your site. An example of a Selenium script might look like this:

  1. Open the browser
  2. Browse to the login page
  3. Enter “user 1” in the input with ID #username
  4. Enter “pa$$word” in the input with ID #password
  5. Click the Login button and wait for the page to load
  6. Check that the browser has navigated to the users home page

Selenium as a framework comes in 2 flavours: IDE & WebDriver.

Selenium IDE

IDE uses a record-and-playback system to define the script and to run the tests. It is implemented as a Firefox plugin and is therefore limited to Firefox only.

We had run a previous trial using this version where we attempted to have our QA team record and execute scripts as part of functional and regression testing. We found that this had a number of problems and eventually abandoned the trial:

  • Limited to Firefox
  • Has to be run manually (i.e. it cannot be run automatically on a build server)
  • Often requires some basic understanding of JavaScript or CSS selectors to work through a problem in a script; this was sometimes beyond the technical knowledge of our QA team
  • Automatically-generated selectors are often extremely fragile. Instead of input#password, it might generate body > div.main-content > form > input:last-child. This meant that a lot of time was lost to maintenance and that the vast majority of “errors” reported by the script were actually incorrect selectors.

We decided that there were too many disadvantages with this option and so moved on to Selenium WebDriver.

Selenium WebDriver

WebDriver requires that all scripts are written in the programming language of your choice. This forced the script-writing task onto our development team instead of QA, but also meant that development best-practices could be employed to improve the quality and maintainability of the scripts.

This version of Selenium also (crucially) supports multiple browsers and can be run as part of an automated nightly build, so it seemed like a much better fit.

Whilst writing our first few Selenium tests we came up with a few thoughts on how best to structure them.

Use a Base Fixture for Multiple Browser Testing

This is a nice simple one – we did not want to write duplicate tests for all browsers so we made use of the Generic Test Fixture feature in NUnit to automatically run our tests in the 4 browsers in which we were interested.

We created a generic base fixture class for all our tests and decorated it with a TestFixture(typeof(...)) attribute for each driver type. This instructs NUnit to instantiate and run the class once for each of the specified generic type arguments, which in turn means any test we write in such a fixture will automatically be run against each browser.

[TestFixture(typeof(ChromeDriver))]
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(FirefoxDriver))]
[TestFixture(typeof(SafariDriver))]
public abstract class SeleniumTestFixtureBase<TWebDriver>
	where TWebDriver : IWebDriver
{
	protected IWebDriver Driver { get; private set; }

	[SetUp]
	public void CreateDriver()
	{
		this.Driver = DriverFactory.Instance
			.CreateWebDriver<TWebDriver>();
			
		//...
	}
}

This does have some disadvantages when it comes to debugging tests as there are always 4 tests with the same method name but this has only been a minor inconvenience so far – the browser can be determined from the fixture class name where needed.
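One loose end from the snippet above: DriverFactory is our own helper rather than anything provided by Selenium. A minimal sketch of what it might look like (the class and its CreateWebDriver method are illustrative):

using System;
using OpenQA.Selenium;

public class DriverFactory
{
	public static readonly DriverFactory Instance = new DriverFactory();

	public IWebDriver CreateWebDriver<TWebDriver>()
		where TWebDriver : IWebDriver
	{
		//all of the standard drivers have a parameterless constructor;
		//a real factory might also configure timeouts, window size or locale here
		return (IWebDriver)Activator.CreateInstance(typeof(TWebDriver));
	}
}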

Wrap Selectors in a “Page” Object

The biggest problem with our initial trial of “record and playback” automated tests was the fragility of our selectors. Tests would regularly fail when manual testing would demonstrate the feature clearly working, and this was almost always due to a subtle change in the DOM structure.

If your first reaction to a failing test is to say “the test is probably broken” then your tests are useless!

Part of the cause was that the “record” feature does not always choose the most sensible selector to identify an element. We assumed that by hand-picking selectors we would automatically improve the robustness (is that a word?) of our selectors, but when one did change we still did not want to have to update it in a lot of places. Similarly, we did not want to have to work out what a selector was trying to identify when debugging tests.

Our solution to this was to create a “Page” object to wrap the selectors for each page on the site in meaningfully named methods. For example, our LoginPage class might look like this:

public class LoginPage
{
	private IWebDriver _driver;

	public LoginPage(IWebDriver driver)
	{
		_driver = driver;
	}

	public IWebElement UsernameInput()
	{
		return _driver.FindElement(By.CssSelector("#username"));
	}

	public IWebElement PasswordInput()
	{
		return _driver.FindElement(By.CssSelector("#password"));
	}
}

This has a number of advantages:

  • Single definition of the selector for a given DOM element
    We only ever define each element once
  • Page inheritance
    We can create base pages that expose page elements which appear on multiple pages (e.g. the main navigation or the user settings menu)
  • Creating helper methods
    When we repeat blocks of functionality (e.g. enter [username], enter [password], then click Submit) we are able to encapsulate them on the Page class instead of in private methods within the test fixture – see the sketch below.
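For example, a helper on the LoginPage class above might look like this (the LoginAs name and the Submit button selector are illustrative):

	public void LoginAs(string username, string password)
	{
		//encapsulates the repeated enter-credentials-then-submit steps
		UsernameInput().SendKeys(username);
		PasswordInput().SendKeys(password);
		_driver.FindElement(By.CssSelector("#submit")).Click();
	}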

We also created factory extension methods on IWebDriver to improve readability:

public static class LoginPageFactory
{
	public static LoginPage LoginPage(this IWebDriver driver)
	{
		return new LoginPage(driver);
	}
}

//...
this.Driver.LoginPage().UsernameInput().Click();

Storing Environment Information

We decided to store our environment variables in code to improve reuse and readability. This is only a minor point but we did not want to have any URLs, usernames or configuration options hard-coded in the tests.

We structured our data so we could reference variables as below:

TestEnvironment.Users.AdminUsers[0].Username
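A minimal sketch of that sort of structure (all of the names and values here are illustrative):

public static class TestEnvironment
{
	public static class Users
	{
		//test users known to exist in the target environment
		public static readonly TestUser[] AdminUsers = new[]
		{
			new TestUser { Username = "admin1", Password = "pa$$word" }
		};
	}
}

public class TestUser
{
	public string Username { get; set; }
	public string Password { get; set; }
}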

Switching between Debug & Release Mode

By storing environment variables in code we created another problem: how to switch between running against the test environment and against the local developer environment.

We solved this by loading certain changeable elements of our configuration from .config files, switching on the DEBUG conditional compilation symbol.
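For example, a property on the TestEnvironment class above might switch environments like this (the setting names are illustrative, and ConfigurationManager needs a reference to System.Configuration):

public static string BaseUrl
{
	get
	{
#if DEBUG
		//running locally: point at the developer's own site
		return ConfigurationManager.AppSettings["Debug.BaseUrl"];
#else
		//build server: point at the shared test environment
		return ConfigurationManager.AppSettings["Release.BaseUrl"];
#endif
	}
}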

Other Gotchas

  • The 64-bit IE driver for Selenium WebDriver is incredibly slow! Uninstall it and install the 32-bit one
  • Browser locale can – in most cases – be set using a flag when creating the driver. One exception to this is Safari for Windows, which does not seem to allow you to change the locale at all – even through Safari itself!

Summary

We are still in the early phases of this trial but it is looking like we will be able to make Selenium automation a significant part of our testing strategy going forward.

Hopefully these notes will help out other people. If you have any suggestions of your own then leave them in the comments or message me on Twitter (@stevegreatrex).

Chrome Dev Tools & Inline Dynamic JavaScript

If you are using Chrome dev tools to debug your application then you might have come across this situation.  If you dynamically load some content, and that content contains an inline <script> tag, then annoyingly you can’t see that script under Sources in the developer console.

Thankfully there’s a nice simple solution to the problem: insert the following comment at the end of your inline script:

<script>
    //...
    //@ sourceURL=MyInlineScript.js
</script>

This will make the script appear in the Sources list under the “No Domain” section:

[Screenshot: the inline script listed under the “No Domain” section of the Sources panel]

Remember that if the inline script is part of a Razor view then you will need to escape the @:

<script>
    //...
    //@@ sourceURL=MyInlineScript.js
</script>

Are Your Users “Sure”?

Are You Sure?

You’ve probably been asked the question a hundred times already today: are you sure you want to do that?

[Screenshot: an “are you sure?” confirmation dialog]

Are you sure you want to delete that file?  Are you sure you want to log out?  To change that extension?  To update this setting?  Are you sure about anything any more?

One of my pet hates is working with an application that is constantly doubting me.  “Yes I’m sure – that’s why I clicked the button!”

Based on personal experience and absolutely no real data, I estimate that I answer “no” to that question…once a week?  A month?  Not often, certainly, and yet over and over again I have to keep expressing my certainty about every little decision.

Just Stop Asking

I’ve been working on a new version of an existing web application recently and one of the key UX decisions has been to stop asking unnecessary questions.  If the user says “jump” then don’t ask if they’re sure; don’t even ask them “how high”…just do it!  We want to trust our users to know what they are doing – wherever possible, we want to do the most obvious thing first and ask questions later.

But What About…

Ah, yes, good point.  Quite often, users don’t know what they’re doing.  And quite often, they’re going to do something wrong.  This, presumably, is the reason that we are constantly asked if we really want to do something: if we complain later then at least the developer can say “well you did say you were sure…”

So how can we account for the (hopefully rare) mistakes and still not get in the way of the average user?

Our approach is to make everything undoable.  Make the consequences of even the serious-sounding actions (“delete this forever?”) recoverable if they happen by accident.  In fact…

Make it as difficult as possible to seriously cock things up

When the user deletes something, give them a little notification saying “Great, that’s gone… unless you click this: [Undo]”.  It doesn’t need to be a big notification – just something that they’ll notice if they’re sat in a cold sweat thinking they’ve just thrown away the last 4 hours’ worth of work.

[Screenshot: a small undo notification popup]

Promote Exploration

There’s another upside to this approach – it encourages users to play around and explore.  If they are confident that they cannot accidentally break something permanently then – hopefully – they will be happier to try something and see what happens.

In a lot of applications, “fear of breaking it” can be a pretty serious barrier to adoption.  It makes sense as well – if someone is not too computer-literate then they probably should be worried about doing something wrong.  But if, when they accidentally delete next month’s payroll, they have a big friendly button saying “don’t worry – just click here and everything will be back to normal” then you hope that they will worry less about clicking that button next time.

Keep IIS Express Running in Visual Studio 2013

Since upgrading to Visual Studio 2013 I’ve noticed a change in the behaviour of IIS Express.  It still starts up when you start debugging a web project – same as it always has – but since the upgrade it automatically shuts down when you stop debugging.

As with most IDE behaviour, I was pretty familiar with the old way and so I found it incredibly frustrating whenever this happened.  The good news is that there’s a very simple solution: disable Edit and Continue for the project in the Properties dialog.

[Screenshot: the Edit and Continue setting in the project Properties dialog]

Hopefully this will save someone else some pain!

3 Ways to Deal with SFOUC in KnockoutJS

What is SFOUC?

A Sudden Flash Of Unstyled Content, or SFOUC, refers to that irritating few milliseconds between when your web page loads and when all of your dynamic content pops into place.

The reason for this annoying phenomenon is that your HTML is being rendered a split second before your CSS & JavaScript files have finished downloading and running.  The un-augmented HTML sits there just long enough to catch the eye of your user, and then is whisked away as soon as all the lovely dynamic content is ready to replace it.

I’ve found that this is particularly evident when working with KnockoutJS because generally the last line of your startup code is ko.applyBindings(viewModel), and until you bind the view model you always see the unstyled content.

So how can we stop this menace?

Option 1: The “Classic” Approach

KnockoutJS is just a library on top of JavaScript so there is no reason why we can’t use the same approach as is suggested for non-Knockout apps.

There are a few flavours to this, but as a general solution:

  1. Add a class to the body element (e.g. no-js)
  2. Add a class to the dynamic content (e.g. dynamic-content)
  3. Add a style that hides dynamic-content within no-js

    .no-js .dynamic-content { display: none; }
    
  4. Add some JavaScript to remove the no-js class from the body once everything is loaded

    document.body.className = 
        document.body.className.replace("no-js", "");
    

This will do the job quite nicely, but still relies on the CSS loading quickly enough to hide the dynamic content before anyone notices.

Option 2: The “Visible” Approach

If the idea is to reduce the amount of time the basic HTML is visible then the fastest place to put the “don’t show this” instruction is in the HTML itself – using a style="display:none" attribute.

The problem with this approach is that you then have to go through and remove all of those styles once your JavaScript is ready to run; this is where the visible binding can help us out.

Under the covers, the visible binding simply sets the display style to none or to "" (i.e. not set) so we can use this to automatically remove the style tags created above:

<div style="display:none" data-bind="visible: true"> </div>
<!-- or -->
<div style="display:none" data-bind="visible: $data"> </div>

The first of these examples will display the div as soon as Knockout is bound; the second will display as soon as the current context has a truthy value.

Option 3: The “Make-Everything-A-Template” Approach

Instead of trying to hide rendered HTML content until we’re ready, what if we just stopped rendering it altogether?  If we put all of our dynamic content inside script tags (which obviously won’t be rendered by the browser) then we can use the template binding to render the content from inside the script tags in the context of a view model.

<div data-bind="template: 'dynamic-content'">
    <!-- will appear empty on page load -->
</div>

<script id="dynamic-content" type="text/html">
    <h1>This will be rendered by Knockout</h1>
</script>

This makes for slightly more verbose markup than the other approaches but we can improve on this by switching to the containerless syntax:

<!-- ko template: "dynamic-content" --><!-- /ko -->
<script id="dynamic-content" type="text/html">
    <h1>This will be rendered by Knockout</h1>
</script>

A Note about Progressive Enhancement

Obviously if you want to employ progressive enhancement then these approaches will not work for you – they work by hiding the content until it is ready to be rendered.  If you are using progressive enhancement then I’m going to assume that the original HTML renders nicely anyway and that you don’t care about the SFOUC problem!