From Spec to Test Suite in Common Lisp: Mustache

Just a quick write-up for the holidays.

In this article, I will walk you through writing a test suite in Common Lisp based upon a specification of the software in question. The software we’ll be looking at is the excellent mustache.

This is a fairly basic Lisp article, but I will be glossing over a lot of the details of the code I’ve written – partially because it should be fairly intuitive, partially because I have no idea how idiomatic it is. Take everything with a grain of salt, and enjoy the results.

The Basics

Mustache was written as an example of ‘logicless’ templating: it provides the bare minimum of functionality needed to create template documents that can be interpolated with data. In this case, ‘bare minimum’ is a compliment: an explicit design decision to prevent spaghetti code in templates. Mustache templates essentially consist of three primitive constructs:

  • The data construct: if data exists with the name given in the token, replace the token with the data.
  • The loop construct: if the name points to data that is ‘list-like’, render this part of the template for each element in the list.
  • The inverted construct: if the name points to data that is non-existent or ‘false-like’, render this part of the template.
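To make these concrete, here is a toy template exercising all three constructs (the names `user` and `items` are my own, not taken from the spec):

```mustache
Hello, {{user}}!
{{#items}}
  * {{.}}
{{/items}}
{{^items}}
  (nothing to show)
{{/items}}
```

Rendered against { user: "World", items: ["a", "b"] }, the {{user}} token is replaced with "World", the {{#items}} section is rendered once per list element ({{.}} stands for the current element), and the {{^items}} section renders nothing, since items is non-empty.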

Nearly every feature Mustache provides can be reduced to one of these categories. It’s elegantly simple, though how easily it can be expressed varies from host language to host language.

I think it’s interesting that, if you ignore the fact that Mustache was designed for ‘hash table’-like contexts, Common Lisp already provides most of a Mustache implementation in its FORMAT directives, specifically the aesthetic (~A), iteration (~{...~}), and conditional (~[...~]) directives. Right out of the gate you can do most of what you’d want to do in Mustache, as long as you use lists instead of hash tables and get used to the esoteric syntax. But this isn’t good enough for us: we want a rigorous implementation of the Mustache language, and for that, we need to test our implementation against the spec. And here’s where it gets interesting.
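Before moving on, a concrete taste of those FORMAT directives (the control strings here are my own hand-rolled illustrations, not part of any Mustache library):

```lisp
;; ~A interpolates a value, much like {{name}}:
(format nil "Hello, ~A!" "World")
;; => "Hello, World!"

;; ~{ ... ~} iterates over a list, much like a {{#items}} section:
(format nil "~{<li>~A</li>~}" '("a" "b"))
;; => "<li>a</li><li>b</li>"

;; ~:[ ... ~; ... ~] branches on NIL, much like an inverted {{^x}} section:
(format nil "~:[nothing here~;something here~]" nil)
;; => "nothing here"
```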

Mustache’s spec was written in YAML, and is also provided as JSON, making it machine-parseable. It is divided into files, each of which is a discrete section of the spec. Each file contains an overview describing that section, along with its tests; each test contains a name, description, context data, template, and expected result.
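Schematically, each JSON spec file looks something like this (abridged, and with the overview text paraphrased):

```json
{
  "overview": "Prose describing this section of the specification...",
  "tests": [
    {
      "name": "Falsey",
      "desc": "Falsey sections should have their contents rendered.",
      "data": { "boolean": false },
      "template": "\"{{^boolean}}This should be rendered.{{/boolean}}\"",
      "expected": "\"This should be rendered.\""
    }
  ]
}
```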

Using this information we can construct an automated test suite: one that feeds the context and template from each test case into our implementation, and compares its output against the expected result. This is absolutely by design, and a wonderful thought on the part of the Mustache ‘working group’ (for want of a term to describe the various contributors to the language).

Parsing the Spec

First things first. Let’s grab a copy of the spec.

$ git clone git://github.com/mustache/spec.git ~/Projects/mustache.spec/

Pop open a REPL. I’m going to load the libraries I know we’ll be using in advance, as well as define some helper variables and functions:

(ql:quickload '(fiveam cl-json cl-who))

;; courtesy
(defun walk-directory (directory pattern)
  (directory (merge-pathnames pattern directory)))

(defparameter *spec-directory* #P"~/Projects/mustache.spec/")

(defun utf8-json-decode (pathname)
  (with-open-file (stream pathname
                          :direction :input
                          :external-format :utf-8)
    (json:decode-json-from-source stream)))

These forms should be fairly self-explanatory. We define a helper function to glob over the spec files we want, point a variable at the top-level spec directory, and wrap JSON decoding in a helper that ensures the input stream is read as UTF-8.

So now:

CL-USER> (walk-directory #P"~/Projects/mustache.spec/" "specs/*.json")


(defparameter *all-specs*
  (mapcar #'utf8-json-decode
          (walk-directory *spec-directory* "specs/*.json")))

Let’s confirm each loaded file in the spec has the same basic structure.

CL-USER> (mapcar (lambda (x) (mapcar #'car x)) *all-specs*)
((:OVERVIEW :TESTS) (:OVERVIEW :TESTS) (:OVERVIEW :TESTS) ...)


Our Implementation

… sucks. No really. All it does is return the template, un-interpolated.

(defun mustache-render (template data)
  (declare (ignore data))
  template)

But this will work for our purposes. All we need is something we can pass the arguments into, and get a result. It doesn’t have to be the right result, just yet.

The Test Suite

FiveAM is my go-to unit test library for Common Lisp. It’s simple, elegant, and designed to provide test results in a format that can easily be transformed for any purpose.
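If you haven’t used it before, the core of the API is tiny. A minimal, self-contained example (the suite and test names here are invented for illustration):

```lisp
(fiveam:def-suite :demo)
(fiveam:in-suite :demo)

(fiveam:test addition
  "Addition should still work."
  (fiveam:is (= 4 (+ 2 2))))

;; RUN! runs the suite and prints a human-readable report;
;; RUN (no bang) returns the raw result objects instead.
(fiveam:run! :demo)
```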

What we’d like to do is generate this test suite by iterating over each test in the spec, and creating a unit test for it.

To give you an idea of the basic structure of a test, here’s one of the imported tests from the spec.

;; courtesy
(defun random-element (list)
  "Return some element of the list, chosen at random."
  (nth (random (length list)) list))

CL-USER> (random-element (cdr (assoc :tests (random-element *all-specs*))))

((:NAME . "Falsey") (:DATA (:BOOLEAN))
 (:EXPECTED . "\"This should be rendered.\"")
 (:TEMPLATE . "\"{{^boolean}}This should be rendered.{{/boolean}}\"")
 (:DESC . "Falsey sections should have their contents rendered."))

Versus its counterpart in the YAML spec:

- name: Falsey
  desc: Falsey sections should have their contents rendered.
  data: { boolean: false }
  template: '"{{^boolean}}This should be rendered.{{/boolean}}"'
  expected: '"This should be rendered."'

It’s hard to see from this example, but our JSON importer turned the data context into an association list, which is the type our implementation should accept for its context argument. In this case, (cdr (assoc :boolean (cdr (assoc :data test)))) would return nil, a ‘falsey’ value.

So for each spec, we have a bunch of tests. For each test, we want to make a unit test in our test suite. Simple enough.

(fiveam:def-suite :mustache-specs)
(fiveam:in-suite :mustache-specs)

(loop for spec in *all-specs*
   do (loop for test in (cdr (assoc :tests spec))
         do (let ((name (cdr (assoc :name test)))
                  (desc (cdr (assoc :desc test)))
                  (data (cdr (assoc :data test)))
                  (template (cdr (assoc :template test)))
                  (expected (cdr (assoc :expected test))))
              (fiveam:test name
                (fiveam:is (string= expected (mustache-render template data)))))))

Try it out.

CL-USER> (fiveam:run :mustache-specs)

; in: LAMBDA ()
; caught WARNING:
;   undefined variable: DATA

;     (LAMBDA ()
;       DESC
; caught WARNING:
;   undefined variable: DESC

; ==>
;     (IF (PROGN (STRING= #:E-0 #:A-1))
;                                     '(STRING= EXPECTED
;                                               (MUSTACHE-RENDER TEMPLATE DATA)))
;                                          (FORMAT NIL
;                                                  "~S evaluated to ~S, which is not ~S to ~S."
;                                                  '(MUSTACHE-RENDER TEMPLATE
;                                                                    DATA)
;                                                  #:A-1 'STRING= #:E-0)
;                                          :TEST-EXPR
;                                          '(STRING= EXPECTED
;                                                    (MUSTACHE-RENDER TEMPLATE
;                                                                     DATA)))))
; caught WARNING:
;   undefined variable: EXPECTED

; caught WARNING:
;   undefined variable: TEMPLATE
; compilation unit finished
;   Undefined variables:
;   caught 4 WARNING conditions

What the hell happened? Why is there only one test? Why were all those variables in the loop considered undefined?

Well. FiveAM’s test form is a macro. It is expanded before the surrounding code ever runs, at a point where none of the variables we use in it are bound to values. This means that every iteration of the loop redefined a single test called ‘name’, instead of defining a test called whatever the ‘name’ variable pointed to. So we are out of luck, in terms of this approach.

But it doesn’t mean we’re out of luck, period. Knowing that test is a macro, we can reformulate our problem. We don’t want to iterate over the specs and tests, and create test cases for each one. We want to write a macro which expands into code which does that. And we can.

(fiveam:def-suite :mustache-specs)      ; redefining a test suite empties it
(fiveam:in-suite :mustache-specs)

(defmacro mustache-spawn-test-suite (specs)
  `(progn
     ,@(loop
          for spec in (eval specs)
          append (loop
                   for test in (cdr (assoc :tests spec))
                   for name = (cdr (assoc :name test))
                   for template = (cdr (assoc :template test))
                   for data = (cdr (assoc :data test))
                   for expected = (cdr (assoc :expected test))
                   for desc = (cdr (assoc :desc test))
                   ;; quote DATA: the context is a literal alist,
                   ;; not a form to be evaluated
                   collect `(fiveam:test ,(intern name) ,desc
                              (fiveam:is (string= ,expected
                                                  (mustache-render ,template ',data))))))))

Our macro doesn’t look too different from our first attempt, but what it does is something quite wonderful.

CL-USER> (macroexpand '(mustache-spawn-test-suite *all-specs*))

(PROGN
 (FIVEAM:TEST |Inline| "Comment blocks should be removed from the template."
   (FIVEAM:IS
    (STRING= "1234567890"
             (MUSTACHE-RENDER "12345{{! Comment Block! }}67890" 'NIL))))
 (FIVEAM:TEST |Multiline| "Multiline comments should be permitted."
   (FIVEAM:IS
    (STRING= "1234567890
"
             (MUSTACHE-RENDER "12345{{!
  This is a
  multi-line comment...
}}67890
" 'NIL))))
 ;; ... one FIVEAM:TEST form per test in the spec ...
 )

By writing a macro, we’ve written a tiny amount of code which generates a lot of code. This code does precisely what we want: iterates over the list of tests in the Mustache spec, and creates a test suite for each and every one of them.

(Lispers frown on the use of eval as above. Can you rewrite the macro to avoid its use?)

Now we can execute the macro, passing in our list of specifications, and generate the test suite that we really want.

(mustache-spawn-test-suite *all-specs*)

Reporting Results

When we use fiveam:run! to run the test suite, we get back results in a very nice printed format. This works well if you’re at the REPL, but what if you wanted something to display to the world? Something half-continuous integration, half-implementation progress bar? Using fiveam:run and the excellent CL-WHO, we can do just that.

First, run the tests.

(defparameter *results* (fiveam:run :mustache-specs))

Then, let’s provide a simple transformation of the test results.

(defun pretty-result (test-result)
  (flet ((result-type (result) (format nil "~(~A~)" (symbol-name (type-of result)))))
    (let ((test-case (fiveam::test-case test-result)))
      (list (symbol-name (fiveam::name test-case))
            (fiveam::description test-case)
            (result-type test-result)))))

;; then, try it out:
CL-USER> (pretty-result (nth 0 *results*))
("Inline" "Comment blocks should be removed from the template." "test-failure")

Perfect. Running pretty-result on one of the test results produces a simple list consisting of the name of the test, the description of the test, and a token representing a passed/failed/skipped test.

Now just wrap it in some CL-WHO, fire up Emacs’ httpd-server, and navigate over to the generated HTML file.

(with-open-file (stream #P"~/public_html/results.html"
                        :direction :output :if-exists :supersede)
  (cl-who:with-html-output (stream)
    (:style :type "text/css"
            ".test-passed { background-color: #0f0; }"
            ".test-failure { background-color: #f00; }"
            ".unexpected-test-failure { background-color: #ff0; }")
    (:table
     (loop for (name description result) in (mapcar #'pretty-result *results*)
           do (cl-who:htm
               (:tr (:td (cl-who:fmt name))
                    (:td (cl-who:fmt description))
                    (:td :class result (cl-who:fmt result))))))))


Pretty impressive, yeah? Fifty-odd source lines of code from start to finish, including our non-existent implementation of mustache-render. Whenever the definition of mustache-render changes, all you need to do to regenerate the test suite results is re-run (setq *results* (fiveam:run :mustache-specs)) and then the above snippet of CL-WHO-infused Lisp.

And there you have it! The next step, of course, is to write a Lisp package that meets the spec, and causes all those red table cells in the generated output to turn green. But, as a wise computer science book once asserted, “this is left as an exercise for the reader.”