Instrumenting Java and Groovy Code with JMX (Wonderful WebFlow Part IV)

This is the fourth post in a series. It may be A Good Idea to take a look at the earlier posts as well…You Have Been Warned!

All substantial applications require management and monitoring; in the Java world, JMX is the standard technology.

I have instrumented the Calc WebFlow to maintain two JMX-capable counters (MBeans): total number of flows created since application startup and instantaneous count of flows actually active. Both of these counters (actually, two instances of the same class) are injected into the controller, as this excerpt shows:

class CalcController {

  def totalFlowsCreatedSequenceMBean
  def instantaneousFlowCountMBean

  def calcFlow = {
    startup {
      action() {
        flow.flowCreatedSequence = totalFlowsCreatedSequenceMBean.increment()
        log.debug "calcFlow startup; this is flow #${flow.flowCreatedSequence}; instantaneous flow count: ${instantaneousFlowCountMBean.increment()}"
      }
      on('success').to 'init'
    }

    // …intermediate states elided…

    shutdown {
      action() {
        log.debug "calcFlow shutdown; this is flow #${flow.flowCreatedSequence}; instantaneous flow count: ${instantaneousFlowCountMBean.decrement()}"
      }
      on('success').to 'results'
    }
  }
}

Although there is a Grails JMX plugin, a Groovy JMX DSL and a GroovyMBean class, configuring JMX is a fairly trivial task, so I'm going to do it "by hand."

In Grails, standard Spring-oriented configuration is done using the Spring Beans DSL in the file grails-app/conf/spring/resources.groovy:

import org.springframework.jmx.export.MBeanExporter
import org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource
import org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler
import org.springframework.jmx.support.MBeanServerFactoryBean

import calc.CounterMBean

// Place your Spring DSL code here
beans = {
   // application-level counters
   totalFlowsCreatedSequenceMBean(CounterMBean)
   instantaneousFlowCountMBean(CounterMBean)

   // JMX infrastructure configuration
   mbeanServer(MBeanServerFactoryBean) {
       locateExistingServerIfPossible = true
   }
   attributeSrc(AnnotationJmxAttributeSource)
   assemblr(MetadataMBeanInfoAssembler) {
       attributeSource = attributeSrc
   }
   exporter(MBeanExporter) {
       server = mbeanServer
       assembler = assemblr
       autodetect = true
       beans = ["calc.jmx:counter=totalFlowsCreatedSequenceMBean": totalFlowsCreatedSequenceMBean,
                "calc.jmx:counter=instantaneousFlowCountMBean": instantaneousFlowCountMBean]
   }
}

You should be able to see how the 'totalFlowsCreatedSequenceMBean' and 'instantaneousFlowCountMBean' bean instances are created by the underlying Spring infrastructure and then injected into the Calc controller (this very powerful behaviour is "autowiring by name", in Spring parlance).
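To make the "autowiring by name" idea concrete, here is a deliberately simplified Java sketch (a toy, not Spring's actual implementation): a tiny container that injects a registered bean into any field whose name matches the bean's name, which is the essence of what Spring does for the controller above.

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// A toy "autowire by name" container: for each field of the target whose
// name matches a registered bean name, inject that bean into the field.
class ByNameAutowirer {
    private final Map<String, Object> beans = new HashMap<>();

    void register(String name, Object bean) {
        beans.put(name, bean);
    }

    void autowire(Object target) {
        for (Field field : target.getClass().getDeclaredFields()) {
            Object bean = beans.get(field.getName());
            if (bean != null) {
                try {
                    field.setAccessible(true);
                    field.set(target, bean);   // name match => injection
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }
}
```

The real thing is far more sophisticated (type conversion, proxies, lifecycle callbacks), but the name-to-field matching shown here is the core contract you are relying on when you declare `def totalFlowsCreatedSequenceMBean` in the controller.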

On to the actual JMX MBean. This is written in Groovy (in the directory src/groovy/calc) but I have remained fairly true to the spirit of Java (methods, not closures, for example) to be sure that JMX doesn't get too 'confused':

package calc

import org.apache.log4j.Logger
import org.springframework.jmx.export.annotation.ManagedAttribute
import org.springframework.jmx.export.annotation.ManagedOperation
import org.springframework.jmx.export.annotation.ManagedResource

@ManagedResource (description = "A simple Counter MBean")
class CounterMBean {
    private static final Logger log = Logger.getLogger(CounterMBean)
    private int value

    CounterMBean() {
        value = 0
        log.debug "CounterMBean constructed; initial value: $value"
    }

    @ManagedAttribute (description = "Retrieve the current value of the Counter")
    public synchronized int getValue() {
        return value
    }

    @ManagedOperation (description = "Bump up the Counter by 1; return new value for Counter")
    public synchronized int increment() {
        return ++value
    }

    @ManagedOperation (description = "Reduce the Counter by 1; return new value for Counter")
    public synchronized int decrement() {
        return --value
    }

    @ManagedOperation (description = "Reset the Counter to 0")
    public synchronized int reset() {
        value = 0
        return value
    }
}
This MBean specifies a single attribute, 'value', and a number of operations: 'increment', 'decrement' and 'reset'. These are available both to the application itself and to the management infrastructure.

One key point, often overlooked, is that all the methods must be synchronized to prevent strange and wonderful race conditions.
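If the synchronized methods feel heavyweight, java.util.concurrent offers a lock-free alternative; here is a hedged Java sketch (not the class used in the post) of the same counter contract built on AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Same contract as CounterMBean, but lock-free: AtomicInteger makes each
// read-modify-write operation atomic without any synchronized blocks.
class AtomicCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    int getValue()  { return value.get(); }
    int increment() { return value.incrementAndGet(); }
    int decrement() { return value.decrementAndGet(); }
    int reset()     { value.set(0); return 0; }
}
```

Either approach is thread-safe; the synchronized version is arguably easier to read, the atomic version avoids lock contention under heavy load.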

Notice how operations and attributes are configured and exported via Java annotations. If you look back to the Spring configuration shown earlier, you will see the use of 'AnnotationJmxAttributeSource' to pick up and export appropriately annotated classes.

It is easy to see how this all comes together by starting up jconsole and looking for the calc.jmx ObjectName:
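For a quick programmatic check of the same idea, here is a small Java sketch (plain JDK, no Grails; assumptions: we query the built-in java.lang domain rather than calc.jmx so the snippet runs anywhere) that asks the platform MBeanServer for all ObjectNames in a domain, which is exactly what jconsole does when you expand a node in its tree:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import java.util.TreeSet;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

class JmxQueryDemo {
    // List every MBean registered in a domain, by canonical ObjectName.
    // "<domain>:*" is the JMX pattern for "everything in this domain".
    static Set<String> namesIn(String domain) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            Set<String> result = new TreeSet<>();
            for (ObjectName name : server.queryNames(new ObjectName(domain + ":*"), null)) {
                result.add(name.getCanonicalName());
            }
            return result;
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

With the Calc application running, `namesIn("calc.jmx")` would list the two counter MBeans exported above.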

Adding JMX into the mix is so simple for any Spring-based application (and Grails is Spring-based, of course) that there is almost no excuse for not adding this level of monitor-ability and manage-ability to an application.

What are you waiting for? Go to it!

Tags: Grails, Programming

Load Testing (Wonderful WebFlow Part III)

This posting follows on from Testing WebFlow (Wonderful Webflow, Part II). It's probably best to take a look at those first…

Ahhh Load Testing! The very name strikes fear into the heart.

I have seen projects crash and burn and been frantically resurrected as a result of poor performance that only came to light at the last minute. I have seen projects scrabble to find the appropriate incantations (read: JVM options) that will encourage the performance faeries to sprinkle their magic dust on the system so that the project can "go live." I have seen angst, blood, sweat and tears as year-long projects fail to cross that "one final hurdle" on the path to deployment.

I have often wondered: why is Load Testing always left so late? I have talked to people that profess to practise XP development who still leave it to the latter stages of the project, even though creating a Potentially Shippable Product is supposed to be a primary goal for an iteration.

There seem to be two underlying reasons for this state of affairs. The first reason is cost. Tools like HP LoadRunner are not cheap, plus it costs a lot to engage a load testing specialist to do the work (presumably faery food is rare and expensive…) so it has to be done only when things are "finally ready." The second reason goes like this: "since the system isn't finished yet, the figures won't mean anything, so there's no point in doing it." Here is one such statement: "Running performance tests in a development environment that differs from your production environment can often be a misleading and misdirected effort."

Both reasons are fallacious. Load Testing need not be an expensive activity, especially when FOSS tools such as JMeter are available. The idea that performance figures are the only product of load testing is also wrong: load testing can show weaknesses in the overall application architecture, or in how the application is to be managed; it can highlight a wrong choice of JVM settings or even of actual JVM (until recently my mantra was: "thou shalt use JRockit for server-side applications." With their recent takeover of BEA, Oracle has unfortunately put the kibosh on that [BOO!] so I'm not even going to link to JRockit's home page). Load testing can show weaknesses in choice of tool, algorithm, technique or team. The actual figures that emerge, although often considered highly important, may actually be the least important outcome.

IMHO one should aim for Load Testing to be done for each iteration's Potentially Shippable Product so that no nasty surprises pop up at the last moment. Given a sufficiently sophisticated scripted build, load testing can even form part of CI (a JMeter plugin for Hudson exists); CI is supposed to drive software quality, after all.

For really quick and dirty performance tests, there's Apache ab or curl or even wget, so there is really no excuse for not knowing something about performance. "She'll be right, mate" just won't cut it!

Enough pontification! Let's take a look at how to use Apache JMeter with our Calc WebFlow application.

Why JMeter? It's FOSS, it's powerful and you gotta love a tool whose manual has a chapter entitled "17. Help! My boss wants me to load test our web app!" :-)
Why not JMeter? "…not every company is willing to stop putting out huge sums of money for Mercury." The main gap is support for some edge cases involving JavaScript, and this can be overcome by using Badboy (good Aussie software!) in conjunction with JMeter.

JMeter works in two modes: as a recording proxy for your browser and then as a playback device for the recording made previously.

Laziness is a virtue, so I am not going to show each step of the process! There are plenty of resources already.

I will add a few generally-useful and WebFlow-specific tidbits, however.

If you have Vista and use IE you will find it difficult/impossible to proxy for anything on localhost. Solutions include: binding your server to a loopback adapter, using another browser or using a separate server machine. This is a general issue, not something specific to JMeter.

Before playback, for clarity rename each request in the recorded interactions, otherwise the various reports and graphs become very confusing:

For the actual Load Test playback to work correctly, JMeter needs a couple of helper elements: the Cookie Manager and a Regular Expression Extractor. While the need for the former is probably quite clear, the use of the latter needs explanation. JMeter cannot simply replay the sequence as recorded. Recall that WebFlow allocates a unique, one-time, per-flow value for each flow that is stored in the '_flowExecutionKey' hidden field/request parameter. Although JMeter stores this, it cannot simply reuse the recorded value on replay; instead it must extract and use the value allocated afresh for each flow execution. This is the task of the Regular Expression Extractor: look through each response for a specified pattern whose match can then be saved into a variable.

Once recorded, each script must be 'massaged' to make use of the variable that the Regular Expression Extractor will maintain, rather than use the recorded value directly (it is in this area that a tool like LoadRunner might make life easier: it has an 'auto-correlate' ability that simplifies this task). Care is needed here and many references on the 'net are wrong or a bit out of date:

To be clear, here are the appropriate values:

Reference Name: flowExecutionKey
Regular Expression: name="_flowExecutionKey" value="(.*?)"
Template: $1$
Match No.: 0
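To see that the expression behaves, here is the same pattern applied in plain Java to a snippet of the generated form (the key value below is invented for illustration; real keys look like the one shown later in this series):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class FlowKeyExtractor {
    // The same pattern the JMeter Regular Expression Extractor uses:
    // capture the value of the _flowExecutionKey hidden field.
    static String extract(String html) {
        Matcher m = Pattern
            .compile("name=\"_flowExecutionKey\" value=\"(.*?)\"")
            .matcher(html);
        return m.find() ? m.group(1) : null;   // group 1 == the key itself
    }
}
```

Note the non-greedy `(.*?)`: a greedy `(.*)` would happily swallow everything up to the last double quote in the response.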

And here is a 'massaged' request entry, showing how the 'flowExecutionKey' variable is used:

As is desirable, JMeter produces nice graphs and reports, as this pair of screenshots shows:

JMeter has a wide range of elements that can be added to a test plan: the "View Results Tree" Listener is useful for helping out with development/debugging of a test plan. As the manual says: "The View Results Tree shows a tree of all sample responses, allowing you to view the response for any sample. In addition to showing the response, you can see the time it took to get this response, and some response codes." Very useful:

(As an aside, note how this results tree shows how WebFlow automatically implements the "Redirect after POST" technique…nice!)

The "Gaussian Random Timer" is another useful test plan element. Rather than issuing requests as quickly as possible, with no "think time" in between, the Gaussian Random Timer makes JMeter simulate the timing between hits more realistically. This gives a better indication of how a system might be expected to perform in the "normal situation" but is probably not so hot at predicting performance at extreme load.
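The idea behind such a timer is easy to sketch in Java (this is a plausible formulation of the concept, not JMeter's actual source; the clamp-to-zero is my assumption): each delay is a constant offset plus a normally-distributed deviation.

```java
import java.util.Random;

class GaussianThinkTime {
    // "Think time" sketch: a fixed offset plus a Gaussian deviation,
    // clamped so we never produce a negative delay.
    static long delayMillis(Random rng, long offsetMillis, double deviationMillis) {
        long d = offsetMillis + Math.round(rng.nextGaussian() * deviationMillis);
        return Math.max(0, d);
    }
}
```

With, say, a 300 ms offset and a 100 ms deviation, most simulated users pause for 100-500 ms between requests, clustering around 300 ms, which is far more realistic than hammering the server flat out.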

There are many other test plan elements, you should take a look at the user manual to see just what other elements JMeter makes available.

Of course, JMeter is controllable from gant, as this small example shows:

dirJMeterHome = "${DEVTOOLS}/jakarta-jmeter-2.3.2"

dirGantHome = "environment.GANT_HOME"

includeTargets << gant.targets.Clean
cleanDirectory << [jmeterResultsDir]

ant.path(id: 'pathJMeter') {
  fileset(dir: dirJMeterHome + '/lib', includes: '*.jar')
}

ant.path(id: 'pathJMeterAntTask') {
  fileset(dir: dirJMeterHome + '/extras', includes: 'ant-jmeter-*.jar')
}

ant.taskdef(name: 'jmeter', classname: 'org.programmerplanet.ant.taskdefs.jmeter.JMeterTask',
            classpathref: 'pathJMeterAntTask')

ant.path(id: 'pathGroovy') {
  fileset(dir: dirGantHome + '/lib', includes: '*.jar')
}

target(init: 'Initialise the build, given a clean start') {
  ant.mkdir(dir: jmeterResultsDir)
}

target(reformatReport: 'Do XSLT Magic') {
  ant.xslt(in: "${jmeterResultsJtl}", out: "${jmeterResultsHtml}", style: "${jmeterResultsXsl}")
}

target(runJMeterTests: 'Get JMeter up & running') {
  jmeter(jmeterhome: "${dirJMeterHome}", resultlog: "${jmeterResultsJtl}", testplan: "${jmeterTestJmx}") {
    property(name: "", value: "xml")
    property(name: "", value: "true")
  }
}

target(defaultTarget: 'Do Everything') {
  depends(clean, init)

  println 'Starting...'

  runJMeterTests()
  reformatReport()
}




Equally of course, once one has a gant script, one can easily integrate with Hudson (which knows how to do the XSLT step itself, so the gant script could be even shorter than that shown above):

(the observant among you may notice that the charts are not that exciting; we'll see how bug 2752 works out…)

Just so you know, JMeter is not just for testing HTTP-based systems. It is equally possible to build comprehensive test plans to load test JDBC-accessible databases, SOAP-base WebServices, FTP servers, etc.

Think about this: the combination of JMeter and Hudson can recast Load Testing from a rarely-performed, expensive "magic shield" to just another test aimed at improving the quality of your code. This is surely A Good Thing.

For the curious, the actual JMeter Test Plan associated with this post is available here.

Tags: Grails, Programming

Testing WebFlow (Wonderful Webflow, Part II)

This posting follows on from Wonderful WebFlow. Probably best to take a look at that one first…

The post cited above developed a simple app based around WebFlow, so now it's time to test that app.

To loosely paraphrase Ford Prefect: Testing is important. Testing Dynamic Languages doubly so.

Grails understands the importance of testing and provides integrated facilities for unit and integration testing. Since WebFlow is integrated into Grails' controllers, there exists a specialised integration testing facility for WebFlow. This is a little bit 'grungy' (IMHO) but is straightforward and easy to use. The Grails integration test (which, according to the Grails convention, must be stored in the directory test/integration) is:

public class CalcFlowTests extends grails.test.WebFlowTestCase {
   def getFlow() { new CalcController().calcFlow }

   void testCalcFlow() {
      def viewSelection = startFlow()

      assertEquals "operand1", viewSelection.viewName

      flow.params.value = '10'
      viewSelection = signalEvent('next')
      assertEquals "operand2", viewSelection.viewName

      flow.params.value = '100'
      viewSelection = signalEvent('next')
      assertEquals "operator", viewSelection.viewName

      flow.params.operator = '+'
      viewSelection = signalEvent('next')
      assertEquals "results", viewSelection.viewName

      assertEquals 110, viewSelection.model.res
   }
}

This test is simple: it drives the flow through its various states, supplying the requisite parameters (it is this mechanism that seems 'grungy' to me) and performing a number of tests to confirm that the flow is operating as specified.

To execute the test, simply:

grails test-app

The test produces a number of XML reports that (via the magic that is XSLT) are transformed into HTML and plain text for the benefit of us poor humans.

Simple testing for a quite complex application. True Grails-y goodness, JUnit-style!

But wait! There's more!

Not content with this simple, essentially low-level isolated testing, Grails also has a Canoo WebTest plugin that brings a lot more to the table. Let's take a look.

WebTest is essentially a UI-less, script-driven browser that is capable of evaluating its operation and the content it retrieves from the application against a series of assertions and requirements. Unlike the flow testing we have just seen, this is higher-level and operates under the same conditions as a normal browser: it is subject to the vagaries of grottily-generated HTML, weird JavaScript tricks, CSS 'goodness', the lot.

Even so, WebTest is powerful, easy to use and makes good reports. What more can one want?

As always with Grails, installing and configuring the plugin is simple:

grails install-plugin webtest
grails create-webtest Calc

This creates the files webtest/tests/{TestSuite,CalcTest}.groovy (along with a few others that are not immediately relevant to the task at hand).

CalcTest.groovy is generated according to the normal conventions for a Grails controller and normally would need very little hacking. Here, however, we have a WebFlow-based controller, so a more radical makeover is called for, leaving CalcTest.groovy looking like this:

class CalcTest extends grails.util.WebTest {

  // Unlike unit tests, functional tests are often sequence dependent.
  // Specify that sequence here.
  void suite() {
    testCalcFlow()
    // add tests for more operations here
  }

  def testCalcFlow() {
    webtest('Basic Calc flow; 1 + 1 = 2') {

      invoke '/calc', description: 'Move to Operand 1 Page'

      group(description: 'Operand 1 Page') {
        verifyTitle 'Get Operand 1'
        setInputField name: 'value', value: '1'
      }
      clickButton 'Next', description: 'Move to Operand 2 Page'

      group(description: 'Operand 2 Page') {
        verifyTitle 'Get Operand 2'
        setInputField name: 'value', value: '1'
      }
      clickButton 'Next', description: 'Move to Operator Page'

      group(description: 'Operator Page') {
        verifyTitle text: 'Get.*Operator', regex: true
        setSelectField name: 'operator', optionIndex: 0
      }
      clickButton 'Next'

      group(description: 'Results Page') {
        verifyTitle 'Results'
        verifyText '1 + 1 = 2'
      }
    }
  }
}
There's really Nothing To See Here that we haven't already seen (barring the 'group' concept, but that will become clear in the fullness of time), so let's move right along.

The WebTest plugin makes testing easy:

grails run-webtest

This starts up the application "in the background" and then throws up a simple "please wait" dialog as the tests progress:

Once the tests are complete, WebTest creates some very comprehensive HTML reports in the directory webtest/reports:

It is nice to see the Green Bar reappear on these reports :-) It's also nice to see the human-readable descriptive text extracted from the various steps into the report.

You may be thinking that writing a lot of tests, even using the simpler Groovy syntax, would get tedious after a while. You'd be correct, but this is where the WebTest Recorder comes in. This is a Mozilla Firefox plugin that does just what its name suggests it should do:

It doesn't make a beautifully formatted, grouped script for you, but it does give a nice starting point (and may cut down on errors that could be introduced if the test were to be made 'from scratch').

I feel obliged to echo the warning that hits you as soon as you attempt to download the plugin:

WebTest is a good tool to demo: one can usually get a "wow!" or two from interested observers.

These reports are of sufficient quality that, in times of dire need, they can be placed under the nose of an interested PHB.

The use of 'group' in the test should now be clear. In the words of the WebTest manual: "…allows grouping and giving a description to a sequence of nested steps." Groups make the reports easier to understand and give structure to the test itself.

One can drill through the various reports and even view the actual pages retrieved by WebTest during a test run; this cache of pages proves very useful when the world didn't go as expected!

So: this post has looked at two of the facilities Grails provides for testing a WebFlow for correct behaviour. It's important to note that these facilities are available to any app., not just those making use of the WebFlow facility.

Up next: the thorny issue of Load testing.

Tags: Grails, Programming

Wonderful WebFlow

A few projects ago, I had the pleasure of using the early versions of Spring WebFlow.
For the great unwashed out there, WebFlow is:

…a component of the Spring Framework's web stack focused on the definition and execution of UI flow within a web application.

The system allows you to capture a logical flow of your web application as a self-contained module that can be reused in different situations. Such a flow guides a single user through the implementation of a business task, and represents a single user conversation. Flows often execute across HTTP requests, have state, exhibit transactional characteristics, and may be dynamic and/or long-running in nature.

In other words, WebFlow makes it easy to create and control long-running, multi-page 'Wizards' that hold a stepwise dialog with the user.

Using WebFlow was straightforward and resulted in a system that was clean, well-structured and efficient…I wrote about this on the WebFlow forum, noting:

Here is my experience: "it all just works!" And well :-)

Webflow gave me no heartache at all.

WebFlow is normally configured via an XML file that tends to get pretty large pretty quickly. Tools like Spring IDE and the Spring Web Flow Visual Editor built into IntelliJ do help control the complexity, but it seems as though the tide is turning against XML (something I expect we'll all live to regret one day, but this industry seems to have no true "organisational memory": we just don't learn lessons…); maybe it's the GenY-ers amongst us looking for instant gratification?

In this spirit then, this posting is all about how Grails makes living with WebFlow easier.

The project is a simple calculator: enter an operand on one page (with one form), enter a second operand on a second page/form, enter an operator on a third page/form and display the result on a final page.

It's not beautiful, I know! I'm only interested in the 'core' aspects of the application here…

The application flow allows for going back to revisit pages as well as simply moving forward. Properly allowing for backward motion alone is a non-trivial task, typically requiring server-side and/or client-side state maintenance and continuation support.

In Grails, WebFlow support is a standard feature available to any controller and is configured conventionally by defining a closure with a name ending in 'Flow'…take a look:

class CalcController {

  def totalFlowsCreatedSequenceMBean
  def instantaneousFlowCountMBean

  def index = {
    redirect(action: 'calc')
  }

  def calcFlow = {
    startup {
      action() {
        flow.flowCreatedSequence = totalFlowsCreatedSequenceMBean.increment()
        log.debug "calcFlow startup; this is flow #${flow.flowCreatedSequence}; instantaneous flow count: ${instantaneousFlowCountMBean.increment()}"
      }
      on('success').to 'init'
    }

    init {
      action() {
        flow.op1 = new OperandCommand()
        flow.op2 = new OperandCommand()
        flow.oper = new OperatorCommand()
        flow.res = null
      }
      on('success').to 'operand1'
    }

    operand1 {
      on('next') { OperandCommand cmd ->
        flow.op1 = cmd
        !flow.op1.validate() ? error() : success()
        log.debug "OP1: $flow.op1.value"
      }.to 'operand2'
    }

    operand2 {
      on('back').to 'operand1'
      on('next') { OperandCommand cmd ->
        flow.op2 = cmd
        !flow.op2.validate() ? error() : success()
        log.debug "OP2: $flow.op2.value"
      }.to 'operator'
    }

    operator {
      on('back').to 'operand2'
      on('next') { OperatorCommand cmd ->
        flow.oper = cmd
        !flow.oper.validate() ? error() : success()
        log.debug "Operator: $flow.oper.operator"
      }.to 'calculate'
    }

    calculate {
      action {
        def left = flow.op1.value
        def right = flow.op2.value
        def operator = flow.oper.operator
        switch (operator) {
          case '+': flow.res = left + right; break
          case '-': flow.res = left - right; break
        }
        log.debug "${left} ${operator} ${right} = ${flow.res}"
      }
      on('success').to 'shutdown'
    }

    shutdown {
      action() {
        log.debug "calcFlow shutdown; this is flow #${flow.flowCreatedSequence}; instantaneous flow count: ${instantaneousFlowCountMBean.decrement()}"
      }
      on('success').to 'results'
    }

    results()
  }
}

class OperatorCommand implements Serializable {
  static constraints = {
    operator inList: ['+', '-']
  }

  Character operator
}

class OperandCommand implements Serializable {
  static constraints = {
    value nullable: false
  }

  Integer value
}

This Controller initiates a flow in response to being invoked via the URL http://…/Calc/calc/calc.
(Application context = 'Calc', controller name = 'calc', flow name = 'calc'…it does make sense, really!)

It is much easier to read this WebFlow DSL than the equivalent XML configuration. Trust me! Since Grails is based on Groovy and can use closures, the WebFlow DSL also thankfully frees one from the need to create many, many small action classes (in an earlier project I had to create about 100 classes, most of which had one line of 'real' code in them; traditional WebFlow is unfortunately a bit smelly in this regard. Creating a class hierarchy helped rein in some of the nastiness, and creating a parent-child relationship in the XML config file also helped, but the Grails DSL is just so much better!).

Rather than walk laboriously through each line, consider only the 'operand2' closure in the flow. It is easy to see how the application behaves at this point (and indeed, this is the point of most DSLs: easy comprehension): when the 'back' event occurs, move to the 'operand1' state and show the corresponding view, which (by Grails' WebFlow convention) is found in the file 'operand1.gsp'. When the 'next' event occurs, execute the corresponding closure before transitioning to the 'operator' state.

Note how the transition associated with the 'next' event has an associated closure that obtains and validates the posted form data as the custom-defined Command object is stored into 'flow' scope. Flow scope is a WebFlow 'special' analogous to 'session' scope but which (unsurprisingly) only exists for the duration of the flow.
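Stripped of validation and views, the back/next mechanics the DSL describes are just a small state machine; here is a toy Java sketch (names and structure invented here purely for illustration) of the transition table underlying the calc flow:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the calc flow: (current state, event) -> next state.
// In the real thing, "flow scope" is a per-conversation map that lives
// alongside currentState for the duration of the flow.
class WizardFlow {
    private final Map<String, String> transitions = new HashMap<>();
    private String currentState = "operand1";

    WizardFlow() {
        transitions.put("operand1/next", "operand2");
        transitions.put("operand2/back", "operand1");
        transitions.put("operand2/next", "operator");
        transitions.put("operator/back", "operand2");
        transitions.put("operator/next", "results");
    }

    // Apply an event; unknown events leave the state unchanged.
    String signal(String event) {
        String next = transitions.get(currentState + "/" + event);
        if (next != null) currentState = next;
        return currentState;
    }
}
```

Seen this way, each `on('…').to '…'` line in the DSL is simply one row of this table, which is why the DSL reads so naturally.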

How are the next/back events raised to the controller? They are generated in response to the user interacting with the application and are specified in the corresponding 'operand2.gsp' view GSP. In this case, the events are generated by/correspond to the two buttons associated with the form:

<g:form action="calc">
  <label for="value">Operand 2:</label>
  <g:textField name="value" value="${op2?.value}"/>
  <g:submitButton name="back" value="Back"></g:submitButton>
  <g:submitButton name="next" value="Next"></g:submitButton>
</g:form>

Under the covers, the generated form becomes:

<form action="/Calc/calc/calc?_flowExecutionKey=_c06FB562D-5D87-0B93-A938-465ABAFAB026_kA42D2D04-3B12-C13D-B207-E68814B81AFC"
          method="post" >
  <input type="hidden" name="_flowExecutionKey"
             value="_c06FB562D-5D87-0B93-A938-465ABAFAB026_kA42D2D04-3B12-C13D-B207-E68814B81AFC" id="_flowExecutionKey" />
  <label for="operator">Operator:</label>
  <select name="operator" id="operator">
    <option value="+" >+</option>
    <option value="-" >-</option>
  </select>
  <input type="submit" name="_eventId_back" value="Back" id="_eventId_back" />
  <input type="submit" name="_eventId_next" value="Next" id="_eventId_next" />
</form>

It should now be clear how the events are raised and sent to the controller.

OK, so how does the server know what particular flow the generated events correspond to? The server generates and maintains a flow-unique identifier (similar to the well-known JSESSIONID) called '_flowExecutionKey.' By looking at the "under the covers" code above, it is easy to see that all flow-related requests will contain this identifier.

If you can see the correspondences between flow/transition/view/form element/Command object, you should be able to see how the whole thing hangs together.

It should now be clear that there is a fair bit of work going on under the covers: the server has to maintain the current state of a flow on behalf of the user, it has to be able to parse an action out of an incoming request and determine an appropriate transition and action closures, the correct HTML has to be generated, etc. Of course, there is also always the possibility that some exceptional circumstance may occur and this needs to be dealt with in 'sensible' fashion as well. I'm ignoring exception handling for this posting (what a cop-out, eh!); it's not difficult but we've got enough to be going on with here.
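One of those under-the-covers steps, parsing the event out of the request, is easy to illustrate (a sketch of the idea, not Grails' actual implementation): the pressed submit button arrives as a parameter named `_eventId_<event>`, so the server just has to find it among the parameters.

```java
import java.util.Map;

class EventIdParser {
    private static final String PREFIX = "_eventId_";

    // Scan the request parameters for the one contributed by the pressed
    // submit button and strip the prefix to recover the event name.
    static String eventFrom(Map<String, String> params) {
        for (String name : params.keySet()) {
            if (name.startsWith(PREFIX)) {
                return name.substring(PREFIX.length());
            }
        }
        return null;   // no event in this request
    }
}
```

Given the generated form shown earlier, pressing "Next" posts `_eventId_next` (plus `_flowExecutionKey`), from which the server recovers the event 'next' and looks up the matching transition.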

If you had to build equivalent functionality "by hand" you'd find yourself with a fair-sized project on your hands, with all that implies for quality, etc. It is also very doubtful that the hand-crafted attempt would be as clean and expressive as Grails' DSL version.

WebFlow is one of those features (along with Groovy's inherent and unbeatable interoperability with Java, of course) that moves Grails away from being only suitable for small-scale "tiny apps" to being truly enterprise-grade technology.

Give it a go!

Tags: Grails, Programming

Simple Subversion

Took me about 30s to get a subversion repository up and running the other day.

I used the free VisualSVN Server running under an XPSP3 virtual machine on Microsoft Virtual PC 2007 SP1.

Great stuff.

Tags: Tools

Continuous Integration

In the far mists of time, I assisted a client with getting a project kicked off and running. Thanks to the developers who were actually running the project (not to me…), the project was eventually quite successful.

One aspect still rankles with me, however…

When the project was started, I brought up the idea of Continuous Integration (CI). "You should be using CruiseControl" I declared, flexing my intellectual muscles. And thus did it happen: the dev. team set up CruiseControl and CVS and also constructed beautiful Ant build.xml scripts to automate all aspects of their project. CruiseControl happily ensured that builds occurred on every check-in. The whole thing ran nicely.

To my way of thinking, however, it turned out that there was no value in the process; the team may as well have not bothered with CI and I eventually came to regret introducing it.

So what was missing?

Simple: testing.

CI is not about automating the build process, it is about automating the testing suite and reporting on the outcome such that code quality is driven upward.

This is what I neglected to make clear: CI is about driving code/system quality, not simply automation.

I should have said "You should be using CruiseControl to automate the suite of tests that will be developed as an integral activity of the project." To be fair to the team, they did make a few unit tests available in the early days, but workload and pressure of time curtailed that (and therein lies a lesson in its own right: budget more [in all your myriad metrics] for testing than you think is needed at first glance. …but I digress…).

Without the driver of an actively-maintained suite of tests and associated reports, CI became a distraction at best (100% test success rate all the time…whoopee). At worst, it became a cause for angst and fear: in the angst-ridden minds of management, a 'medium'-sized project instantly grew to be a large, scary one due to its "infrastructure needs" and everybody knows that large projects always fail!

Enough history.

In A Festive Testing Article, I showed easyb used in conjunction with Cobertura and Gant. Here I will briefly show how all that can be placed under the control of a CI system.

My current CI tool of choice is Hudson.

Hudson is a pure Java application, so installation is trivial: download a single war file and run. Actually, through the magic of Java Web Start, installing Hudson just got even easier.

All configuration is done via the web GUI (as we shall see), so no messing around with scary and obscure XML files.

Hudson has plugins for Cobertura, Gant and Subversion, so it's all a bit of a doddle…as this screenshot shows:

Configuration for a project is trivial, as shown in this screenshot montage:

Hudson provides a number of useful "information radiators" to show the status of a project. The overview page gives an "at a glance" feel for the project. I like the 'weather' analogy here:

More detailed reports are possible, as this montage shows:

A close look will show that RSS feeds are available as well.

The Cobertura integration is as painless and easy as it is vital:

(It's interesting to see the effect of the compiler's code-inlining optimisation here.)

All in all, a good tool: simple, extensible and solid. What more could one want? Just remember: "It's the Tests, Stupid!"

Tags: Tools

I Feel So Scrummy…

…in a good way, that is :-)

See me at: http://www.scrumalli … iles/45966-bob-brown.

And, if you need further proof:

Tags: Agile

A Festive Testing Article

I have previously talked about using easyb for unit and/or acceptance testing.

When one talks about testing, one should always also talk about "code coverage." This is true regardless of whether one is talking about Pascal or Prolog, C or Groovy.

One of the most effective tools for coverage testing for Groovy is Cobertura.

Since the festive season is nearly upon us, I have created a small Groovy class that generates the old catechism song "The Twelve Days of Christmas":

public class Christmas {

  static ordinal(d) {
    def s
    switch (d) {
      case 1: s = "1st"; break
      case 2: s = "2nd"; break
      case 3: s = "3rd"; break
      default: s = (d + "th"); break
    }
    s
  }

  static line(l) {
    l + '\n'
  }

  static verse(day) {
    def s = new StringBuilder("On the ${ordinal(day)} day of Christmas my true love gave to me:")
    s << '\n'
    if (day >= 12)
      s << line("twelve drummers drumming,")
    if (day >= 11)
      s << line("eleven pipers piping,")
    if (day >= 10)
      s << line("ten lords a-leaping,")
    if (day >= 9)
      s << line("nine ladies dancing,")
    if (day >= 8)
      s << line("eight maids a-milking,")
    if (day >= 7)
      s << line("seven swans a-swimming,")
    if (day >= 6)
      s << line("six geese a-laying,")
    if (day >= 5)
      s << line("five gold rings,")
    if (day >= 4)
      s << line("four calling birds,")
    if (day >= 3)
      s << line("three french hens,")
    if (day >= 2) {
      s << line("two turtle doves")
      s << line('and')
    }
    if (day >= 1)
      s << line("a partridge in a pear tree.")
    s
  }

  static void main(args) {
    (0..11).each { day ->
      println verse(day)
    }
  }
}
The algorithm may not be the best, but it works well for my purpose here, so let's see how Cobertura helps uncover the glaring off-by-one bug in this little work of art…

On the 0th day of Christmas my true love gave to me:

On the 1st day of Christmas my true love gave to me:
a partridge in a pear tree.

On the 2nd day of Christmas my true love gave to me:
two turtle doves
and
a partridge in a pear tree.


On the 11th day of Christmas my true love gave to me:
eleven pipers piping,
ten lords a-leaping,
nine ladies dancing,
eight maids a-milking,
seven swans a-swimming,
six geese a-laying,
five gold rings,
four calling birds,
three french hens,
two turtle doves
and
a partridge in a pear tree.

Using Cobertura is a three-step process. In the first step, Cobertura instruments Java class files so that usage counts are maintained. In the second step, the system under test's runtime classpath is changed to ensure that these instrumented classes are used in place of the originals. The third step takes place after execution, when Cobertura analyses the collected data and generates a report.
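The instrumentation idea can be sketched in miniature. To be clear, this hedged Java snippet is not how Cobertura actually works (Cobertura rewrites compiled bytecode; nothing here is from its codebase, and all names are my own invention); it merely illustrates the principle of maintaining usage counts and flagging never-executed branches:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MiniCoverage {
    // usage counts, one per hand-"instrumented" branch
    static final Map<String, Integer> hits = new LinkedHashMap<>();

    static void hit(String branch) {
        hits.merge(branch, 1, Integer::sum);
    }

    // a toy system under test, echoing the ordinal() switch from the article
    static String ordinal(int d) {
        switch (d) {
            case 1:  hit("case 1");  return "1st";
            case 2:  hit("case 2");  return "2nd";
            case 3:  hit("case 3");  return "3rd";
            default: hit("default"); return d + "th";
        }
    }

    public static void main(String[] args) {
        // register all branches up front so untouched ones show a zero count
        hits.put("case 1", 0); hits.put("case 2", 0);
        hits.put("case 3", 0); hits.put("default", 0);

        // a test run that (wrongly) never asks for days beyond 3
        for (int day = 1; day <= 3; day++) ordinal(day);

        // the "report": any zero-count branch was never exercised
        hits.forEach((branch, count) ->
            System.out.println(branch + ": " + count
                + (count == 0 ? "  <-- never executed" : "")));
    }
}
```

Running it reports a count of 1 for each of the three explicit cases and flags the `default` branch as never executed, which is exactly the kind of gap a real coverage report surfaces.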

Take a look at the following Gant script to see how all that is done:

dirSource = 'src'
dirBuild = 'out'
dirCoberturaHome = "${DEVTOOLS}/cobertura-1.8"
dirCoberturaClasses = dirBuild + '/coberturaClasses'
dirCoberturaReports = 'coberturaReports'
fileCoberturaData = "cobertura.ser"

includeTargets << gant.targets.Clean
cleanPattern << '**/*~'
cleanDirectory << [ dirCoberturaReports ]

ant.path(id: 'pathCobertura') {
  fileset(dir: dirCoberturaHome, includes: 'lib/**/*.jar, cobertura.jar')
}

ant.taskdef(resource: 'tasks.properties', classpathref: 'pathCobertura')

target(coberturaInstrumentation: 'Run Cobertura instrumentation') {
  ant.'cobertura-instrument'(todir: dirCoberturaClasses) {
    fileset(dir: dirBuild, includes: '*.class')
  }
}

target(coberturaReports: 'Run Cobertura reporting') {
  ant.'cobertura-report'(format: 'html', srcdir: dirSource,
      destdir: dirCoberturaReports)
}

ant.taskdef(name: 'groovyc', classname: 'org.codehaus.groovy.ant.Groovyc')

target(compile: 'Compile source to build directory') {
  groovyc(srcdir: dirSource, destdir: dirBuild) {
    javac(debug: 'on', debuglevel: 'lines,vars,source')
  }
}

target(christmas: 'Run Christmas application') {
  java(classname: 'Christmas', fork: true) {
    classpath() {
      pathelement(path: "C:/DEVTOOLS/groovy-1.6-beta-2/embeddable/groovy-all-1.6-beta-2.jar")
      pathelement(path: dirCoberturaClasses)
      path(refid: 'pathCobertura')
    }
  }
}

target(init: 'Initialise the build, given a clean start') {
  ant.mkdir(dir: dirBuild)
  ant.mkdir(dir: dirCoberturaReports)

  if (new File(fileCoberturaData).exists()) ant.delete(file: fileCoberturaData)
}

target(defaultTarget: 'Do Everything') {
  depends(init, compile, coberturaInstrumentation, christmas, coberturaReports)
}

setDefaultTarget(defaultTarget)

For this simple class, the coverage report points us straight to a problem: the twelfth verse is never requested.

The cause is simple:

  static void main(args) {
    (0..11).each { day ->
      println verse(day)
    }
  }

instead of:

  static void main(args) {
    (1..12).each { day ->
      println verse(day)
    }
  }

Well, OK. Not so impressive, it seems! After all, in this situation we have a straightforward class and a straightforward test. A simple code inspection would probably have surfaced the bug, but you never know: this class of error is depressingly prevalent in code.
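For what it's worth, the underlying trap is the classic inclusive-range confusion: the wrong range has the right *size*, so a naive "did we get twelve verses?" check passes. A hedged Java illustration (the names here are mine, not from the original Groovy):

```java
import java.util.stream.IntStream;

public class OffByOne {
    public static void main(String[] args) {
        int[] wrong = IntStream.rangeClosed(0, 11).toArray(); // 12 numbers, but 0..11
        int[] right = IntStream.rangeClosed(1, 12).toArray(); // 12 numbers, 1..12

        // both ranges have the same size, so a simple verse count looks fine...
        System.out.println(wrong.length == right.length); // prints true

        // ...yet only one of them ever reaches verse 12
        System.out.println(wrong[wrong.length - 1]); // prints 11
        System.out.println(right[right.length - 1]); // prints 12
    }
}
```

A coverage report catches this where a count-based assertion would not, because the `day >= 12` branch simply never executes.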

Code coverage is valuable when testing is more complex and the System Under Test is much larger; it comes into its own when testing is the responsibility of all the members of a team (something that is a central tenet of XP, after all).

Getting Cobertura up and running is easy. Getting it integrated into a Continuous Integration system like Hudson is also easy.
(By the way, if you were wondering: the price tag of the shopping list laid out in the classic holiday song "The Twelve Days of Christmas" is sharply higher this year.)

Tags: Groovy, Programming

Another Couple of Interesting Tools

These have both been useful to me on projects when I have been trying to get my head around reams of grungy C code (you know the type, where the programmer continually felt the need to use *(p + i) instead of p[i] "because it is more efficient").


Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.


Cscope is a developer's tool for browsing source code. It has an impeccable Unix pedigree, having been originally developed at Bell Labs back in the days of the PDP-11. Cscope was part of the official AT&T Unix distribution for many years, and has been used to manage projects involving 20 million lines of code!

To a degree, modern IDEs can help in the problem spaces addressed by both these tools; sadly, one can't always rely on having a modern IDE available.

Tags: Tools

Unit or Functional(*), That is the Question

Another lunchtime discussion…"So, is easyb for unit testing or functional testing? Which is it?"

Does it have to be either? It is a testing tool that is driven by stories. These stories can be helping us to understand either how our units of code operate or what constitutes an acceptably-behaving system.

Since easyb is a Groovy DSL and since Groovy interoperates fully with Java, it is easy to incorporate a tool such as Canoo WebTest. For example:

description "Testing a Web Application"

narrative 'Can we test a web application in a scenario? Yes. Yes we can!', {
  as_a "Starving Developer"
  i_want "To test my web application"
  so_that "I can get a better job elsewhere"

ant = new AntBuilder()

webtest_home = 'C:/DEVTOOLS/Canoo WebTest 2.6'

ant.taskdef(resource:'webtest.taskdef') {
  classpath() {
    fileset(dir:"$webtest_home/lib", includes:"**/*.jar")

scenario "The Transentia Web Site Is Up and Running", {
  given "The URL for the Transentia Web Site"
  when "We look for the page subtitle"
  then "We must see the appropriate byline", {

  ant.testSpec(name:'groovy: Test Groovy Scripting at creation time'){
    config([host:"", basepath:'flatpress'])
    steps() {
      verifyXPath(xpath: "//p[@class='subtitle']", regex: true, text: '.*training.*')

It's good to look at this example and see the synergies Groovy brings to the task: would a 'pure' Java developer normally reuse Ant as shown above?

A similar use of easyb is given in Functional web stories, but that example uses Selenium, not WebTest. There is also a useful followup article covering the use of easyb fixtures to improve the script at http://thediscoblog. … ixtures-easyb-style/.

I like the idea of a single tool being able to drive various testing activities; I particularly like the way that this gives a consistent 'feel' to the reporting:

 1 scenario executed successfully

  Story: transentia story
   Description: Testing a Web Application
   Narrative: Can we test a web application in a scenario? Yes. Yes we can!
      As a Starving Developer
      I want To test my web application
      So that I can get a better job elsewhere

    scenario The Transentia Web Site Is Up and Running
      given The URL for the Transentia Web Site
      when We look for the page subtitle
      then We must see the appropriate byline

(as an aside, this consistency of reporting style can also be achieved by getting easyb to output an XML report that can then be transformed in any which way…)
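To sketch that aside: the element names below (`stories`, `scenario`, `result`) are hypothetical stand-ins, not easyb's actual report schema, but any XML report can be queried (or transformed via XSLT) along these lines in plain Java:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class ReportQuery {
    public static void main(String[] args) throws Exception {
        // hypothetical report content standing in for easyb's XML output
        String xml = "<stories>"
                   + "<scenario name='site is up' result='success'/>"
                   + "<scenario name='login works' result='failure'/>"
                   + "</stories>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // count failing scenarios; the same query could feed a dashboard or an XSLT transform
        NodeList failures = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//scenario[@result='failure']", doc, XPathConstants.NODESET);

        System.out.println("failing scenarios: " + failures.getLength()); // prints 1
    }
}
```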

So my answer to the question is "easyb can be used in any way that makes sense, but consider the value of a consistent documentation stream."

Now, just to be complete and to show that I do "eat my own dogfood", here is the associated Gant script:

dirEasybHome = "${DEVTOOLS}/easyb-0.9"
dirWebtestHome = "${DEVTOOLS}/Canoo WebTest 2.6"
dirReport = 'reports'

includeTargets << gant.targets.Clean
cleanPattern << '**/*~'
cleanDirectory << [ dirReport ]

ant.property(environment: 'environment')
dirGantHome = ant.project.properties.'environment.GANT_HOME'

ant.path(id: 'pathGant') {
   fileset(dir: dirGantHome, includes: 'lib/*.jar')
}

ant.path(id: 'pathWebtest') {
   fileset(dir: dirWebtestHome, includes: 'lib/*.jar')
}

ant.path(id: 'pathEasyb') {
   fileset(dir: dirEasybHome, includes: '*.jar')
}

ant.taskdef(name: "easyb", classname: "org.disco.easyb.ant.BehaviorRunnerTask", classpathref: 'pathEasyb')

target(easyb: 'Run easyb tests') {
  ant.easyb(failureProperty: "property.easyb.failed") {
    classpath() {
      path(refid: 'pathEasyb')
      path(refid: 'pathWebtest')
      path(refid: 'pathGant')
    }

    report(location: "${dirReport}/xml-report.xml", format: "xml")
    report(location: "${dirReport}/story-report.txt", format: "txtstory")
    report(location: "${dirReport}/behavior-report.xml", format: "txtspecification")

    behaviors(dir: '.') {
      include(name: "**/*.story")
    }
  }

  ant.fail(if: "property.easyb.failed", message: "***easyb run failed")
}

target(init: 'Initialise the build, given a clean start') {
  ant.mkdir(dir: dirReport)
}


This should reinforce the message from Goodbye Ant, Hello Gant.

(*)Just to say, I dislike the term "functional testing"; I much prefer "acceptance testing".

Tags: Tools