JavaScript: executable specs and code samples

28 Aug 2013

I was talking with a few people recently about documenting public libraries & APIs on GitHub. I think we all agreed on the following:

Having up-to-date API specs seems like a good fit for BDD, just maybe not in the traditional sense. In our case, the docs are aimed at developers (not SMEs), and we actually want to see some code.

What would be great is to generate readable docs from our unit test specifications. This would give us the best of both worlds, and still ensure the code samples are always correct! Of course we'll need a set of tests written specifically for that purpose. Ideally they'll be independent, and each focus on a single responsibility. As opposed to standard unit tests, they won't necessarily have to be at the unit level (if we're demonstrating higher-level features), and won't always require assertions. The most important thing is that the library code under test stands out, so we'll have to keep extracting any boilerplate / setup code.
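As a rough sketch of what that could look like (the data and setup here are purely illustrative), a doc-oriented Mocha spec might pull the noise into a hook so only the call being documented remains in the sample:

    var should = require('should');
    var _ = require('underscore');

    describe('Code samples', function() {

        // setup noise lives in a hook, not in the sample itself
        var people;
        beforeEach(function() {
            people = [{name: 'ann', age: 30}, {name: 'bob', age: 25}];
        });

        // one focused responsibility per sample
        it('sorts a list using a custom criterion', function() {
            var byAge = function(person) { return person.age; };
            _.sortBy(people, byAge).should.eql(
                [{name: 'bob', age: 25}, {name: 'ann', age: 30}]);
        });

    });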

Interweaving docs and unit tests is actually very close to the approach Go takes for its official documentation. After some investigation, here are two options that seem to work well for Node / JavaScript projects. They each have pros and cons, so I guess it's mostly a matter of preference!

Literate CoffeeScript: markdown with real tests inside

If you don't mind writing CoffeeScript, check out Literate CoffeeScript. It lets you mix Markdown syntax with executable CoffeeScript: start a normal Markdown document, and indent executable code with 4 spaces. The .litcoffee files can be loaded by Mocha without any problems, so you can write code samples like:

        should = require 'should'
        _ = require 'underscore'

        describe 'Code samples', ->

    ## _.max(list, iterator)

    Returns the maximum value in the list

    - either comparing item values themselves

            it 'returns the maximum value', ->
                _.max([2, 3, 1]).should.eql 3

    - or using an iterator to generate the criterion

            it 'uses a custom iterator to compare objects', ->
                iterator = (item) -> item.a
                _.max([{a:2}, {a:3}, {a:1}], iterator).should.eql {a:3}
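To get Mocha to load the .litcoffee file, point it at the CoffeeScript compiler, something along these lines (assuming the sample above lives in code-samples.litcoffee; the exact module to register depends on your CoffeeScript version, newer releases expose coffee-script/register):

    mocha --compilers litcoffee:coffee-script code-samples.litcoffee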

Running Mocha gives the following output:

    ․․
    2 tests complete (1 ms)

GitHub doesn't pick up the .litcoffee extension yet, but a quick rename to .md at build-time and it will happily render the following:

[Screenshot: Literate CoffeeScript sample rendered by GitHub]
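The rename itself can be a tiny Node script in the build (a minimal sketch, assuming the spec lives in code-samples.litcoffee):

    // copy the literate spec to a .md file so GitHub renders it
    var fs = require('fs');

    fs.writeFileSync('code-samples.md', fs.readFileSync('code-samples.litcoffee'));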


Mocha reporter: unit tests with real markdown inside

Another option is to write standard unit tests, and use a custom reporter to generate Markdown from the spec descriptions. Turns out Mocha comes with one of these out of the box, courtesy of TJ.

    mocha --reporter markdown --grep "Code samples" > output.md

This is what our code sample now looks like:

    var should = require('should');
    var _ = require('underscore');

    describe('Code samples', function() {

        describe('_.max(list, iterator)', function() {

            it('\nReturns the maximum value in the list    ' +
               '\n- either comparing item values themselves',
               function() {
                   _.max([2, 3, 1]).should.eql(3);
               }
            );

            it('\n- or using an iterator to generate the criterion',
               function() {
                   var iterator = function (item) { return item.a; };
                   _.max([{a:2}, {a:3}, {a:1}], iterator).should.eql({a:3});
               }
            );

        });

    });

You'll notice I switched back to JavaScript. Unfortunately, CoffeeScript would have been a lot nicer to use, especially with its multi-line string syntax ("""). However, the reporter only has access to the compiled JavaScript, which means any stylistic formatting or language idioms would be lost. This doesn't matter much for the contrived case above, but it can make the code impossible to read for anything more complex. Maybe that's not a bad thing, since not everyone knows, or likes (gasp!), CoffeeScript.

Of course, being valid JavaScript, this runs as expected:

    ․․
    2 tests complete (1 ms)

And the generated Markdown file looks like:

[Screenshot: Markdown generated by the Mocha reporter, rendered by GitHub]


To the README!

To include these executable code samples as part of the README, we'll need a custom build step. It could be a token-delimited area that gets replaced automatically... Or we can upvote this GitHub issue to allow embedding of separate Markdown files :)
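For the token-delimited approach, a small Node script could splice the generated samples into the README at build time. A sketch, where the marker comments are entirely made up:

    // replace everything between two marker comments in README.md
    // with the freshly generated code samples
    var fs = require('fs');

    var samples = fs.readFileSync('code-samples.md', 'utf8');
    var readme = fs.readFileSync('README.md', 'utf8');

    var updated = readme.replace(
        /(<!-- code samples -->)[\s\S]*(<!-- \/code samples -->)/,
        '$1\n' + samples + '\n$2'
    );

    fs.writeFileSync('README.md', updated);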

In the meantime, GitHub supports relative links:

[check out the code samples here](code-samples.md)

That's it! Nice-looking code samples or API docs that never go out of date... unless you like your build red. Which version do you prefer? Or do you use another alternative?
