The Power of Metrics to Guide Software Development

One of the big challenges in software product development is prioritizing which features to add. Many books and academic papers are written on this topic each year. From my vantage point, the human-centered design process is the best general process for creating successful products and guiding product evolution.

Yet, as the feature requests pile up once a product is released, most product development teams succumb to whoever yells the loudest – either leading-edge customers or the sales force. Within Attenex, we stumbled on a more powerful method for guiding product development – the North Star metric.

We started Attenex intending to build and market two products to the legal market – one for authoring structured documents like contracts (Attenex Structure) and one for discovering the relevant documents in eDiscovery litigation (Attenex Patterns – now a part of Ringtail 8). As we delivered the products to customers, it became clear that with Attenex Patterns we had a very clear metric that provided both business guidance and feature guidance. The metric was “document decisions per hour” – that is, how many documents per hour the average lawyer reviewer could read and place into categories like “responsive”, “non-responsive”, and “privileged”.
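The metric itself is simple arithmetic: total document decisions divided by review hours. A minimal sketch (the function name and example numbers are illustrative, not from Attenex's actual tooling):

```python
def decisions_per_hour(total_decisions: int, review_hours: float) -> float:
    """Core review-productivity metric: document decisions per reviewer hour."""
    if review_hours <= 0:
        raise ValueError("review_hours must be positive")
    return total_decisions / review_hours

# e.g. a reviewer who categorizes 400 documents over an 8-hour day:
rate = decisions_per_hour(400, 8.0)  # 50.0 decisions/hour
```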

With Attenex Structure, we were never able to identify a metric that mattered both at the user's business level and for guiding feature prioritization. As a result, we stopped development on Attenex Structure.

Attenex Patterns with the “document decisions per hour” metric proved to be both a marketing and sales tool for comparing our product with other offerings in the market and a way to guide our feature development.

On the feature development side, the process was to continuously observe our users and understand their hassles (see hassle maps by Adrian Slywotzky) or bottlenecks in their workflow (see Theory of Constraints by Eli Goldratt). As we spotted hassles or bottlenecks, we would design a prototype and then test it with the user community. If the new features improved “document decisions per hour”, we would leave them in. If the new features did not improve productivity, or decreased it, we pulled them out.

On the marketing and sales side, the “document decisions per hour” metric allowed the company to compare the benefits of Attenex Patterns against all of the competitors. Because the Attenex result was 5–20 times better than other software vendors', Attenex was able to use value-based pricing.

Over a 10-year iteration cycle of 450+ prototypes, we improved the “document decisions per hour” metric from a 2X gain over our baseline studies to more than a 50X productivity improvement.
