The Other 90% of Software Product Development

So you’ve just finished your alpha software product and you are ready to release it to the world to get some feedback.  Congratulations.  Now you are ready for the next 90% of the software development effort – RAAMPUSS.

In a previous post, I talked about the competing product design centers.  One of those design centers is what some pundits call the “-ilities” for the suffix that goes on so many of the categories – reliability, availability, and so on.

One of the fundamental challenges in new product development is balancing the drive for ever more useful functionality for customers with the need for a very high quality product.  This paper gives an overview of the framework that I created and used at Primus Knowledge Solutions (acquired by Art Technology Group, which was in turn acquired by Oracle) to improve the less-than-stellar quality of our products and to make sure that we met the critical requirements of some of the hardest customers to satisfy – those providing world-class customer support for their own products.

RAAMPUSS is an acronym that is shorthand for eight aspects of a quality product:

  • Reliability
  • Availability
  • Administratability
  • Maintainability
  • Performance
  • Usability
  • Scalability
  • Security

As the volume of product sales and/or the size of customer deals increases, RAAMPUSS becomes more important to the organization than incremental functionality because the product becomes mission critical for the customers.  The diagram below looks at the natural progression of a product from initial idea to something that is used across several enterprise-scale corporations.  During the first three stages, the importance of functionality, and of whether the idea will work in a real-world setting, overwhelms the need for high quality.  But once an idea has proven itself in a pilot project, most customers want to leap to enterprise-scale deployment.

However, the engineering team is so busy trying to generate functionality and find the sweet spot for their product in a market niche that they are ill prepared for the sudden volume and scale demands on their product.  Life for the customer and the engineering team becomes quite difficult at this point because it is very hard to reengineer the product for RAAMPUSS while the customer is screaming for immediate action to fix their crashing software.  One customer like this would be bad enough, but often there are 5-100 customers clamoring for attention with severe problems in multiple parts of the product.

RAAMPUSS can be both a diagnostic tool to assess where problems are in existing products and a development strategy for those just embarking on product development or trying to figure out how to prioritize activities for the next release of a product.

The point at which RAAMPUSS becomes critical in the lifecycle of a product business corresponds with the chasm that Geoff Moore describes in his books.  The development process that works on the left side of the chasm no longer works on the right side.  Similarly, other functions of a company, most particularly the sales function, encounter this same difficulty.  Oftentimes the managers who are great at working with early adopters are incapable of developing the right stuff for the early majority, and vice versa.

Crossing the Chasm

Reliability

Does the product work according to its specification?  For the user, reliability means that they can use the product to produce work consistently.  The most obvious reliability failure in software is an application or system crash.  Less obvious is whether the product works the same way each time it is used, both day in and day out as well as through upgrades.  Another form of reliability is not corrupting data, losing data, or computing results incorrectly.  Yet another is when a product does the same thing inconsistently, like trying to indent bullets within Microsoft Word.  The inconsistency occurs both within a document and from release to release.

Examples of reliability issues Primus had with version 3.1 include the intermittent errors experienced by EMC, Compaq, and Novell, where the Versant OODB crashed every time the online backup program was run.  The regularity of the crashes (1-5 times daily) along with the time it would take to bring the systems back up (EMC client PCs took 30 minutes) led to irate customers.  More subtle are the problems related to search, where the returned results of an indexed solution vary depending on what phrases are used in what order.

The more stable the system, the higher the expectations for increased reliability.

Availability

Closely related to reliability is availability.  If something happens to the application, how fast and easy is it to recover?  Does it take 20 seconds to reboot (the Microsoft three-finger salute), or do I require eight hours to reload a database?  When service packs are installed, do I have to bring the system down for several days?

In the end, availability is about uptime – 24x7x365 times the number of users.  Problems with availability can be due to either unscheduled outages as the result of bugs or scheduled outages required to install new software, bring on new users, reindex, or convert a database.  The goal is for the application to be up all the time.
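Uptime goals are often expressed as "nines" of availability.  A quick sketch of how an availability percentage translates into allowed downtime per year (the percentages are standard industry shorthand, not Primus-specific targets):

```python
# Convert an availability target ("nines") into allowed downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at the given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} min/year down")
```

At 99% a system can be down about 3.65 days a year; at 99.99%, less than an hour.  The arithmetic makes vivid why an eight-hour database reload is incompatible with enterprise availability expectations.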

Availability is also related to investments in hardware to achieve fault tolerance with no single point of failure.  While the goal should be for our software not to fail, it is also important for the software to degrade gracefully.  Better to lose a single user than to lose the whole system and bring all the users down.

Administratability

With complex application software, one of the most expensive activities for a customer is the time they must spend administering the system.  It starts with what is required to install a system.  It extends to how difficult it is to reconfigure the different tiers of a system.  At the other end of the spectrum is how easy or difficult it is to add or subtract a user.  Additional items are keeping track of and maintaining dictionaries, reporting on usage, setting security levels, and ensuring the integrity of the data or knowledge.  Where is administration performed – does it occur at the server computer, or can it be done remotely?  Are log files kept of all changes, and/or can the system be easily rolled back to some previous environmental state?

One of the major complaints about Primus eServer products was our inability to administer application functions remotely.  A great deal of effort went into eServer 4.0 to provide a Web interface into all ADMIN functions.  Another major benefit of eServer 4.0 was the automatic installation of client software, or the elimination of it with a more robust Internet Explorer-like product.

Maintainability

The essence of maintainability is this: if something does go wrong with the software, how easily can the problem be recognized as a software (versus a hardware) problem, identified as ours or another vendor's, located in the code, actually fixed, and the fix distributed to all those affected by it?  Ideally fixes come in three flavors:

  • An immediate fix to get the system back up;
  • A workaround to guard against the problem recurring or data being lost or corrupted;
  • A permanent fix to ensure that the problem doesn’t happen again.

In parallel with the immediate fixing of the problem is a Root Cause Analysis (RCA) to determine if this is an isolated fault or a symptom of a bigger design or architecture error.  The RCA then kicks off the fixing of the problem as well as the fixing of the software development process that allowed the error to creep in.  That is, how can our development process be improved so that this class of errors never happens again?

Maintainability is primarily about processes and tools.  Processes at the back end cover how a problem manifests itself, how a customer reports it, how support deals with it, how the problem is escalated into engineering, how it is fixed and the fix transmitted back to the customer, and how the customer is kept informed throughout this process.  At the front end they cover how the software is designed and architected, how it is implemented, how it is tested, and how it is delivered to the customer.

During the eServer 3.1 unstable phase, one of the major observations was that our error logging code was of very little use for high priority problems.  Thus, one of the key tasks of the Tiger Team created to stabilize 3.1 was to define a set of tools that we could put into eServer 4.0 to increase maintainability.
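One way the "tools for maintainability" idea can be made concrete is structured error logging, so a high-priority problem arrives with enough context to be diagnosed remotely rather than by guesswork.  A minimal sketch; the field names and component name are illustrative, not the actual eServer 4.0 design:

```python
import json
import logging
import sys

# Emit log records as JSON so support tooling can filter, search, and
# escalate them, instead of grepping free-form text.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "component": record.name,                    # which subsystem failed
            "message": record.getMessage(),
            "context": getattr(record, "context", {}),   # state at failure time
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("search.indexer")   # hypothetical component name
log.addHandler(handler)
log.setLevel(logging.INFO)

# A failure now carries machine-readable context for the support pipeline.
log.error("index rebuild failed", extra={"context": {"doc_count": 41235}})
```

The payoff is on the back end of the process: support can triage by component and severity, and engineering gets the failing state without a round trip to the customer.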

Performance

For our purposes we will define performance as what the user experiences.  While there are many ways to improve the perception of performance, it comes down to how long it takes to perform the routine tasks that the individual user works with.  While all of us would like performance to always be instantaneous, we’ve come to expect different levels of performance depending on the specific function.  For example, typing response time should be instantaneous (the echoing of characters or clicks).  Screen pops should be barely noticeable.  Searches of several seconds can be tolerated.  Printing can take longer, although if very long, the user would like to see the printing become a batch job.

The second aspect of performance is how much the user’s perceived performance varies as environmental conditions change.  The ideal is for perceived performance not to change, but some variance is tolerated.  Environmental changes that can affect performance are:  speed of the hardware (client, server, network), number of users on the system (stand alone, average, peak), size of the database or knowledge base, and complexity of the task.  A particularly irritating aspect of performance is if it degrades from release to release.  Nothing is worse than a user having to fight the double whammy of new functionality and degraded performance that often comes with new releases.  This last aspect was one of the major reasons we focused on performance for the eServer 4.0 release: in previous releases the perceived performance had gotten worse.  Early reports indicated that users were quite pleased because performance was dramatically improved at the integration point with call-tracking software, and search performance was similar or better.
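The tolerances described above (instantaneous typing echo, barely noticeable screen pops, multi-second searches) can be written down as per-operation response-time budgets that an automated suite checks on every release, catching release-to-release degradation before customers do.  A sketch with hypothetical thresholds:

```python
import time

# Hypothetical response-time budgets, in seconds, per class of operation.
BUDGETS = {
    "keystroke_echo": 0.05,  # should feel instantaneous
    "screen_pop": 0.3,       # barely noticeable
    "search": 3.0,           # a few seconds is tolerated
}

def within_budget(operation: str, elapsed_seconds: float) -> bool:
    """True if the measured time meets the budget for that operation class."""
    return elapsed_seconds <= BUDGETS[operation]

def timed(operation, func, *args):
    """Run func, returning (result, whether it met its budget)."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    return result, within_budget(operation, elapsed)

# Example: a trivial stand-in "search" easily fits its 3-second budget.
_, ok = timed("search", sorted, [3, 1, 2])
print(ok)
```

Running the same budget checks against the previous release's numbers turns "it feels slower" into a concrete, per-operation regression report.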

Usability

This component focuses on the usability of the product.  Today, usability is often referred to in the larger context of User Experience (UX).  Typically, this function studies the user interface for good design principles and for excesses such as too many changes of context, too many key clicks, or uncertainty on the user’s part as to what to do next.  While human centered design is more of a front-end process looking at the context of the user in the real world, usability looks at the actions of the user in relation to the actual software.  As a result, usability work is often done at the end of a project, when there is a functioning product to work with and to study.

Scalability

We define scalability in terms of the purchaser rather than the user.  This function defines how many users can expect reasonable performance given a particular hardware/software environment.  Ultimately this translates into the system cost per user.  Scalability tests involve running loads against standard configurations for 100, 200, 400, 600 users and so on.  The ideal is that for a given performance level there is a specified hardware configuration that can handle the tested number of users.

Security

Almost every day we read in the news of another security breach at a well-known company.  One of the more recent large security breaches that went on for a long time was Sony’s PlayStation Network.  Computer and application security involves multiple aspects of protecting information and property from threats like theft, corruption, or natural disaster.  For any organization that holds personal information or critical data, the recommended process is to have both an in-house security team and a contract with external experts who are well-versed in how to “hack” into application systems and databases.

Test Driven Development

As a development manager begins to understand the depth of critical processes necessary to continue to improve one’s RAAMPUSS quality, it is a good time to look at Test Driven Development (TDD) methods.  While TDD does not solve all of the world’s ills, it goes a long way towards achieving RAAMPUSS goals.
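The TDD rhythm is: write a failing test for the desired behavior first, then write the minimal code to make it pass, then refactor with the test as a safety net.  A toy sketch in Python's unittest, using a hypothetical retry-backoff function of the kind a reliability effort might add:

```python
import unittest

# Step 1 (red): the test is written BEFORE the implementation exists,
# pinning down the behavior we want.
class TestRetryPolicy(unittest.TestCase):
    def test_backoff_doubles_each_attempt(self):
        self.assertEqual([backoff(n) for n in range(4)], [1, 2, 4, 8])

# Step 2 (green): the minimal implementation that makes the test pass.
def backoff(attempt: int) -> int:
    """Exponential backoff delay, in seconds, for the given retry attempt."""
    return 2 ** attempt

# Step 3 (refactor): the suite now passes and guards future changes.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestRetryPolicy))
print(result.wasSuccessful())
```

The accumulated suite is what makes RAAMPUSS work sustainable: reliability and performance fixes can be made without fear of silently reintroducing old failures.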


Competing Product Design Centers

Traditionally, product planning is the realm of the software engineering team, represented by a program manager or engineering manager, and the marketing team, represented by a product manager or product marketing manager.  Often, these activities become exercises in “list management” as long lists of features accumulate for inclusion in future releases and product planning consists of a prioritization exercise.  In addition, software engineers rarely have a good view of the context of the feature decisions, which makes deciding which way to go in developing the code a challenge when there are tradeoffs.

An excellent resource for the strategic and tactical aspects of product management and marketing is Pragmatic Marketing.  Their framework illustrates the range of tasks that a product management team needs to consider:

Pragmatic Marketing Framework

Good product development teams are fortunate to have several different viewpoints represented, from the very direct voices of customers to the informed voices of human centered design, technology insight, and market research.  Sorting through all this relevant research, we identify six categories of design input, or design centers, that provide strong, weak, or implicit voices in the product planning process.  These design centers are:

  • Technology Centered Design
  • Human Centered Design
  • Customer Centered Design
  • Machine Learning Centered Design
  • Productivity Centered Design
  • RAAMPUSS Centered Design

The goal in moving forward with product planning for software products is to make explicit each of these design centers so that we can consciously and collectively arrive at defined goals for the product as a whole, a roadmap for the product, and a prioritized set of goals for each product release.

Technology Centered Design

As a software development company, the technology of the products is a key component of design.  This design center encompasses a range of design issues that are critical to a market:

  • Software Product Platform – this aspect looks at what platforms we will build our product on.  Examples of platforms include:  Microsoft Windows, Sun Unix, Linux, MS SQL, Oracle Database, IIS Web Server, Apache Web Server, Netscape Browser, Internet Explorer, Amazon Web Services (or other variants of the cloud), HTML5, and mobile platforms (iOS, Android).  Making a decision around each of these platforms determines what skills we will need and, to some extent, the size of the market we will be going after.
  • Hardware Product Platform – what kind of hardware (server, desktop, laptop, tablet, mobile) we will require our customers to purchase.  Options include the speed of the CPU, the amount of memory, the type of disk storage, and network capabilities.
  • Programming Language and Tools – defining the standard(s) for what language we use helps determine our ability to balance hiring profiles, what kind of performance we can expect from our products, and what kind of environment our customers will need to run.
  • Application Framework – should we have a proprietary application framework that crosses all of our products to increase the reuse of our components, or do we go with commercial or open source frameworks?

If we look at research as helping to determine what is possible, the technology centered design process focuses on the art of the practical, that is, what will work reliably for thousands to millions of customers working 24×7 around the world.

Human Centered Design

Human Centered Design (HCD) is concerned with the needs of all of the people who will be using our products in one form or another.  These may range from knowledge professionals to data preparation clerks to database administrators to managers to clients.  The focus of this design center is understanding the needs of the user, their points of pain and then designing interactions that will make their lives better.  A key part of this design center is the observation of users as they go about their daily lives in relationship to the opportunities that we are trying to design solutions for.  Users are quite inventive in how they solve many problems and often our best path forward is understanding their inventive solutions.  To paraphrase Ed Lazowska, the goal here is to “understand the misunderstandings” that keep users from creating the results that they want.  The core elements of the HCD process are illustrated below:

Human Centered Design Process

Throughout the HCD process, the designer is constantly iterating through the criteria of:

  • What is desirable to users?
  • What is possible with technology?
  • What is viable in the marketplace?

Customer Centered Design

Geoffrey Moore

While Human Centered Design focuses on the user, Customer Centered Design focuses on the purchaser and influencer (see the post on Words Mean Something).  The purchaser is someone who actually buys the product.  They are usually a combination of the business manager, the IT manager, and the procurement manager.  They have needs very different from the user, as they are looking at the business implications of the purchase and the business relationship with Attenex.   An influencer is a person either inside or outside the organization who helps set the context for the purchaser as to why a particular class of solution is important and who the key suppliers are for a given solution.  Geoff Moore has brought the interactions with purchasers and influencers alive in Crossing the Chasm and his many follow-on books.   The Tipping Point by Malcolm Gladwell does a particularly good job of describing influencers and how to influence them.

One of the best processes for getting at the voice of the customer (influencer and purchaser) was developed by Katherine James Schuitemaker with her Value Exchange Relationship framework.  This process focuses on the power of the brand to create value with influencers and purchasers by establishing the context for funneling customers into the sales efforts.  Launch customers can provide a strong customer voice and a good collaborating partner.  Channel partners should become a good set of collaborating business builders.

Machine Learning Centered Design

In order to keep from becoming a one-product wonder of a company, it is important at the early stage of development to invest precious resources in research.  This design center looks at what kinds of algorithms, computational linguistics, modeling and prototyping can help us stay ahead of our competitors.  The core of this design center is to capture data on every aspect of whom the software deals with and profile them in great depth (with permission, of course).  To further productivity, mathematicians need to bring powerful algorithms to aid us with unsupervised and supervised learning for very high dimensional data spaces.  Then this mathematics needs to be combined with equally powerful visualizations and interaction designs to ensure that productivity gains are realized for the users.  This group is also responsible for looking at the next big areas of potential automation.  A good research team will constantly look for patterns in the data to implement Slywotzky’s knowledge imperatives:

  • Move from guessing what customers want to knowing their needs;
  • Move from getting information in lag time to getting it in real time;
  • Move from burdening talent with low-value work to gaining high talent leverage.

Productivity Centered Design

Oftentimes productivity is equated with “do it faster.”  At the heart of how a software product team should prioritize its research and development efforts is finding and solving those problems where we can achieve at least ten times productivity increases.  Productivity is a complex interaction of “better, faster, cheaper” with ever increasing quality (six sigma) and improved business relationships (customer, supplier, partner).

To improve productivity it is important to have key metrics that are measurable and can be made visible to all parties.  We want to ensure that each feature that we add to our products improves the overall measures of productivity for our users, purchasers and influencers.  Productivity increases will include complex balancing of machine improvements and user-level improvements that oftentimes are non-obvious.  As an example, at Attenex we thought deeply about whether we should spend more machine time on identifying near-duplicate emails (reducing our throughput) in order to reduce the number of documents that an attorney has to look at (decreased human labor).  Identifying key metrics and then making it painless to track the metrics and identify patterns is the focus of this design center.
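The near-duplicate tradeoff can be made concrete: spend machine time computing a content fingerprint for each message so near-duplicates can be grouped and a reviewer reads only one representative.  A minimal word-shingle sketch of the general technique, not the actual Attenex algorithm:

```python
def shingles(text: str, k: int = 3) -> set:
    """Set of k-word shingles; a cheap content fingerprint for a document."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Two hypothetical emails differing only in the last word score highly,
# so they would be grouped and reviewed once instead of twice.
msg1 = "please review the attached contract draft before friday"
msg2 = "please review the attached contract draft before monday"
print(similarity(msg1, msg2))
```

The fingerprinting cost scales with machine time, which is cheap and measurable; the documents it removes from the review queue scale with attorney hours, which are not.  That asymmetry is exactly the kind of metric this design center should track.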

RAAMPUSS Centered Design

While it is functionality and our selling/marketing process that gets our products into early adopter customers, it is our ability to continuously improve at RAAMPUSS that both keeps us installed and improves our reputation with our most important customers.  The components are:

  • Reliability
  • Availability
  • Administratability
  • Maintainability
  • Performance
  • Usability
  • Scalability
  • Security

The goal of this design center is to prioritize for each release which elements we will be focusing on and then to establish clear goals for that release to meet.  From a productivity standpoint, the effect of these elements shows up in how much labor and cost a company bears in support of its product, or in the additional sales costs to overcome objections or a poor reputation in any of the categories.  In addition to the development costs, several of these components also affect the Total Cost of Ownership metrics of our customers.  As a company matures with a product and a customer base, these functions become even more important when balanced against new functionality.  Part of the development risk equation is that new functionality increases the risk of destabilizing one or more of these components.

Criteria for Prioritizing Clusters of Features

As part of moving from what is possible or desirable to build to what we will actually build, we need to establish criteria for the selection of a feature.  Examples of criteria for prioritizing are:

  • How does this feature reduce mean time to revenue?
  • How does this feature increase the productivity of a user?
  • How does this feature increase revenue for our customer, our channel partner or our company?
  • How does this feature reduce costs for our customer or for our operations?
  • How does this feature contribute to our core mission, vision and strategic intent?
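Criteria like these can feed a simple weighted-scoring exercise for rank-ordering candidate features.  The weights, feature names, and ratings below are purely illustrative:

```python
# Illustrative weights for the prioritization criteria above (sum to 1.0).
WEIGHTS = {
    "time_to_revenue": 0.25,
    "user_productivity": 0.25,
    "customer_revenue": 0.20,
    "cost_reduction": 0.15,
    "mission_fit": 0.15,
}

def score(feature_ratings: dict) -> float:
    """Weighted score for one feature; ratings are 0-10 per criterion."""
    return sum(WEIGHTS[c] * feature_ratings[c] for c in WEIGHTS)

# Hypothetical candidate features with 0-10 ratings per criterion.
features = {
    "remote web admin": {"time_to_revenue": 6, "user_productivity": 4,
                         "customer_revenue": 5, "cost_reduction": 9,
                         "mission_fit": 7},
    "new report type":  {"time_to_revenue": 4, "user_productivity": 6,
                         "customer_revenue": 4, "cost_reduction": 3,
                         "mission_fit": 5},
}
ranked = sorted(features, key=lambda f: score(features[f]), reverse=True)
print(ranked)
```

The value of the exercise is less the arithmetic than the conversation: making the weights explicit forces the team to argue about which design center's voice counts most for this release.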

While no single framework or process can guarantee success, the combination of the above product design centers ensures that the needs of the customer (influencers, purchasers and users) will be heard.


Digital Humanities – Really?

Russ Ackoff shared that the best knowledge system he knew was to have an intelligent set of graduate students that knew him.  In 1985 when we were meeting regularly, he described the joy every morning of coming in and having 2-3 journal articles taped to his office door that his students thought were relevant for him in the moment.  He pointed out that the students knew his interests and his current projects and would look out for material they knew Russ would be interested in.  Russ chuckled and shared “graduate students are much better than any search engine could ever be.”

To Russ’s observations I would add that colleagues and professors who know me are also a great source of knowledge pointers, if I just remember to include them in what I am up to.

Cathy Davidson

Kate Hayles

I mentioned to my colleagues at UW Bothell who are working on the future designs for innovative universities that I was headed back to Durham, NC, where I hoped to meet with Cathy Davidson.  Gray Kochhar-Lindgren suggested that I also try to meet with Kate Hayles while I was at Duke.  Both professors were available, and I looked forward to the meetings.

As I prepared for the meetings, I remembered another conversation with Russ Ackoff where he talked about his favorite design for a graduate seminar with his second and third year PhD students.  The class had only one assignment – each student had to teach Russ something that he didn’t already know.  With his impish grin, Russ described how much fun the first couple of weeks of the seminar were as the students went from thinking this class was a breeze to it dawning on them how hard it was going to be to figure out what Russ already knew.  He enjoyed the different strategies the students employed to “discover” what he already knew.

Russ delighted in the new things that he learned each semester.  However, he particularly loved how much he was able to impart to the students without ever having to lecture.  The students had to learn a large portion of what he already knew (which in my limited life experience was huge as Russ was the best systems thinker and synthesist I’ve ever encountered).

If you had asked me two months ago whether I was interested in learning anything new about the digital humanities, the answer would have been an emphatic “No.”  Yet after spending time with Alan Wood, a Chinese History professor, Susan Jeffords, an English professor (now UW Bothell Vice Chancellor), Gray Kochhar-Lindgren, a philosophy professor, and Jan Spyridakis, a technical communications professor (now Human Centered Design and Engineering Department Chair), my exposure to the humanities had increased by light years compared to the previous forty years of professional life.  The “Ah Hah!” moment that I needed to spend some serious time understanding the digital humanities came at the recent Modern Language Association meeting in Seattle, where two English professors talked about Big Data and two computer scientists talked about the need for digital storytelling to go with their worlds of Big Data.  The world it is a shiftin’.

I was familiar with Cathy Davidson’s work through my research over the last two months, but I was unfamiliar with Kate Hayles’s work.  So I went to Amazon to see if Kate had written any books, and out popped a list of several interesting titles.  I didn’t recognize any of them, but before I ordered them, I checked my Kindle library (nothing) and went to Librarything to see if I had any of her books.  Sure enough, I’d ordered and read Writing Machines.  One of these days I’m going to have to do a better job of remembering authors’ names.  So I ordered several of Kate’s books (How We Became Posthuman, My Mother Was a Computer, and Electronic Literature).  Two of the books were on the Kindle, so I could scan through them pretty quickly for the key themes.

As I made my way to the Smith Warehouse where Cathy has her office, I marveled at how much the Duke campus had changed over time.  When I went to Duke (1967-1971), the Smith Warehouse was literally a tobacco warehouse.  Any time you came near the building you were assaulted with the cloying smell of tobacco leaves being aged and dried.  Now it was a beautifully remodeled space of bricks and 100-year-old wooden beams and floors.  I was reminded of Stewart Brand’s How Buildings Learn:  What Happens After They’re Built.

The primary topic I wanted to explore with both professors was what qualities they thought were important for an idealized design of an innovative university.  Each professor was quite articulate about their ideas for the key qualities of the new university or the new humanities department.  The short version of these qualities is:

  • Collaboration and Collaboration by Difference
  • Provide flexible spaces for collaboration that can be easily re-configured
  • Rethink the curriculum to be multi-disciplinary and jettison many of the ossified department structures
  • Shorten the formal school year to end in March with the rest of the second semester spent in community based projects where professors, graduate students and undergraduate students from multiple disciplines team up with community members to work on important local problems.

Cathy emphasized many of the issues she raises in her books and her consulting with corporations.  She shared “we have to move from an educational model which is based on testing and mastering content to a learning model that is focused on process, collaboration and learning to learn.”  She quoted from sources that describe that the average college graduate will change careers 4-6 times during their lifetime.  Not just change jobs, but change careers.  She described how every time she talks to corporate groups, the business executives demand that we change the way we teach.  Most of these business people say some variant of “it takes us two years with recent college graduates to break them of their pursuit of individual mastery and being scared of being wrong to getting them comfortable with not knowing so that they can collaborate with a diverse group of professionals with different skills.”  Their plea is to stop turning out students with skills that business doesn’t need.

The more I talked with Cathy, the more I wondered how I had missed this transition in the humanities departments from being book based to being digitally based.  I finally asked Cathy how long this transition had been going on.  She reflected that it was about five years ago when humanities professors started paying serious attention to how computing could help their research and pedagogy.

I shared with Cathy that I was going to a Duke basketball game that evening with my nephew.  Cathy immediately used that topic to springboard to what she had learned from the social environment of Duke basketball games and how she changed her class structure. “Did you know that each year there is a student governance committee for Krzyzewskiville that takes the rules that the university mandates and then turns them into that year’s constitution for K-ville?  Can you believe that this system has worked since 1986?  Think about all of the issues of students camping out and the nature of 17-21 year olds potentially getting into fights and having drugs and alcohol.  There is no way it should have worked even for one year, let alone since 1986.  If you look at the ‘constitutions’ that are generated each year, they are far more comprehensive and restrictive than what Duke University requires.  So I decided to do that with my class.”

I loved the turn this conversation was taking.  I asked “I’ve tried to read everything you’ve written including your more recent voluminous Tweets and blog entries and I don’t remember seeing a discussion about starting your class with a constitution development. How long does it take?”

Cathy realized that she had not written more than a paragraph about this process and made a note to herself to write a blog entry about it.  Upon reflection, she shared “it usually takes between one and two class periods with a lot of homework crafting the Google Doc that has the constitution.  These are class periods where we’d be discussing the syllabus anyhow, but now it becomes the students’ syllabus.  The students always require more work than I would require.  And in the process about 20% of the registered students drop out, but that is OK as there is always a long waiting list.”

I can’t wait to see the blog entry and see some examples of both the K-ville constitutions and the course constitutions.  I will be interested to see how these constitutions relate to what Jim and Michelle McCarthy are trying to do with their Core Protocols for producing great teams who produce great products.

As I walked out of the Smith Warehouse and started my walk to Duke’s East Campus to meet with Kate Hayles my head was hurting with the implications of Cathy’s research and observations for the future of business as well as the future of the university.  I was shaking my head wondering how I’d missed this transition in the humanities to a digitally based paradigm.

Then I remembered that I’d glimpsed this world when I came across Franco Moretti’s Graphs, Maps, Trees: Abstract Models for Literary History (and the recent response to it – Reading Graphs, Maps, Trees – critical responses to Franco Moretti).  I bought the book more for its collection of visualizations than for its subject, a topic area I wasn’t familiar with.  I was fascinated with the notion of “distant reading” that the author espoused.  Yet, like a lot of other concepts I’ve encountered over the years, I did not do much with it.

With my trusty iPhone 4S smartphone I was able to navigate my way to Kate’s office. What did we ever do to make our way in the world without these amazing devices?

I was less prepared for my meeting with Kate Hayles than I like to be.  However, she was very gracious and engaging and asked for some of the background on why I wanted to meet with her.  I described a little bit of my background and that Gray Kochhar-Lindgren had suggested that I meet with her so that we could gain her insights on how to design the idealized university.

As we talked she made notes of some of her books that might be of interest to my intellectual pursuits.  We started with a discussion of what her proposal for the restructuring of digital humanities looked like:

  1. Restructure humanities as a comparative study rather than organizing it by nationality (American, British, French), by genre, or by century. She suggests that epochs now be defined by their medium (oral, print, digital) as the lines between previous ways of characterizing the humanities are quite blurred.
  2. Shift how we think.  Digital humanities is shifting not just the answers but also the questions.  Digital humanities is a “technogenesis” as we are co-evolving along with the media.  Technology has changed how we read and we are changing neurologically as we read and use technology differently.
  3. Understand that electronic literature is different from print literature.  Computation is now a theoretical issue for the humanities, not just for the sciences.

The core part of the transformation to digital humanities is understanding that the overwhelming focus on print as the medium for the last 300 years has evolved to a new digital medium.

Since I am mostly a bottom up kind of thinker, I asked Kate to give me an example of what she meant.  She pointed to an example in the print world of a shift in media.  When William Blake first published his poems he wanted complete control of the publishing process as he wanted his poetry “read” surrounded by appropriate artwork (see William Blake Archive).

The reader of the original poetry would have a very different experience from the reader of a more modern print edition of The Poems of William Blake, which presents just the plain text:

Similarly, Kate pointed out that when print “texts” are translated into the digital medium they become different.  They are “read” differently.

A recent Wall Street Journal article describes this shift in media as “Blowing Up the Book”. One of the adaptations to eBook format mentioned in the article is T.S. Eliot’s “The Waste Land.”  This iPad app “includes a facsimile of the manuscript with edits by Ezra Pound, readings by Eliot recorded in 1933 and 1947 and a video performance of the poem by actress Fiona Shaw.”

If you compare the print version of the poem with the enhanced version there is a very different understanding of the poem in the electronic version than in just the “plain text.” The following three screen shots give you a sense of the richness of the electronic version:

Table of Contents of the iPad app "The Waste Land"

Eliot scholars commenting on "The Waste Land"

My favorite “digital media” variant within the app is Fiona Shaw performing the poem while the poem’s text is also presented on screen with the current line of the poem she is speaking highlighted in blue.

Fiona Shaw performing "The Waste Land"

Slowly but surely I was beginning to get a sense of what Kate was describing. This discussion started reminding me of the philosophical question “Can you step into the same river twice?”

As I looked at these new forms of digital text where the text is embedded in art, I reflected on a recent conversation with Jim and Michelle McCarthy where they showed me examples of reports they gave to clients.  These reports were fragments of text placed on top of the team art that was generated during one of their Bootcamp weeks.  Both of these discussions reminded me of Nick Bantock’s series of books that started with Griffin & Sabine: An Extraordinary Correspondence. Bantock created a book as a series of illustrated postcards and letters for the reader to “experience” the correspondence.

Illustrated Book as Postcard Correspondence

Given the power of the inclusion of team art on the Bootcamp weekend, I wondered if we should be doing that with our emails.  Instead of sending plain text emails, we should surround our text with appropriate art to reinforce our message. Stan Davis in The Art of Business suggests that the absence of art in the workplace was one of the explanations for the lack of creativity and innovation.

Then, Kate hit me with the real paradigm shift here.  Along with comparing “texts” across different media, she is using literary critique skills to critique code.  She described this emerging field of critical code studies.  I wasn’t sure I had really heard what she just said so I asked for a specific example.

Kate explained “We are now as interested in critiquing the software as we are in critiquing the text.  There are several efforts under way to have side by side displays of the ‘digital text’ and the software that implements the digital text.”  Now I knew that I had just fallen down Lewis Carroll’s Alice’s Adventures in Wonderland rabbit hole.

“Let me see if I understand this right,” I asked.  “You mean to tell me that humanities students are both interested in software and have the ability to critique and write software in a humanities course?”

Kate looked at me a bit like I was a freshman, and patiently explained, “Of course this current generation is interested in software.  This is the digital native generation and they are eager to do the software explorations.  They are frustrated with those of us from the old school who only want to focus on print.”

“Let me try one more time.  There are not any humanities majors I know (including one of my children) who have the least bit of interest in computing.  They chose the humanities so they could stay away from science, math, and computation,” I asserted.

Kate just smiled and suggested that I ought to sit in on one of her classes where they do exactly what she is describing – study comparative literature by creating and critiquing software.  Kate said that given this turn in the conversation she would send along a couple more chapters from her latest book.

I knew that I needed more grounding in what Kate was describing so I asked for some specific examples.  She pointed me to Mark Marino’s Critical Code Studies to give me an overview of this simultaneous critique of the text and the code.  Another researcher in this area is Alan Liu with his Research-oriented Social Environment (RoSE) project.  She suggested I look at John Cayley.  I particularly liked Cayley’s Zero-Count Stitching or generative poetry (I wonder if somebody will combine Zero-Count Stitching with the generative poetry on Sifteo Cubes).

However, the example that really grounded me in what Kate was trying to articulate was Romeo and Juliet: A Facebook Tragedy.  This research project involved a group of three students translating Romeo and Juliet into Facebook.  The students described their results:

“Reading the story as we have created it requires users to navigate through various Facebook features such as the “Wall,” “Groups,” “Photos,” and “Events.” Following the story in this way is similar to a work of hypertext fiction. However, the advantage offered by Facebook is that the interactions are ordered and timestamped, allowing for users to more easily discern which interactions come first in which progressions. We feel that this means of presenting a story offers a benefit of hypertext, forcing users to interact with the text, but at the same time it cuts down on much of the confusion by clearly communicating the progression of the overall plot.

Character interaction map from Facebook version of Romeo and Juliet

“Manufacturing character profiles based on the limited information in the text was difficult. We relied on individual interpretation and key themes surrounding each character. We supplied interests, books, movies, music, etc. that individuals with those character traits and personality types would be likely to enjoy. Character development was further facilitated by use of various applications and groups which we had characters add or join in order to reflect what we interpreted as their key traits. We feel we have provided somewhat more complete profiles for each character, hopefully to the aim of making them more relatable and providing more depth.

“The project also unexpectedly became an exploration of how virtual role-playing could potentially produce a simulation or model of events in plays and/or novels. Despite the limited nature of the character profiles offered in the original text, enough detail was present to conclude certain character types and the ways that certain characters would act. With close reading, we were presented with certain constraints (ie: character traits, personalities, and relationships), and we had to make sure that these constraints were incorporated into our simulation aside from use in the creation of the profiles. For example, Tybalt in the play is an angry character. Therefore, he was permitted to only perform angry actions and have angry interactions with others on the site. With these constraints, group members attempted to play out the rest of the story while keeping in mind that certain actions or interactions needed to occur for the plot to move forward.”
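Reading their description with my software hat on, I couldn't resist sketching how such an ordered, constraint-driven retelling might be modeled.  This is purely my own toy illustration: the class names, sample interactions, and constraint table are invented, not taken from the students' project.

```python
from dataclasses import dataclass

# Invented sample of the students' close-reading constraints:
# each character may only act within certain "moods"
# (e.g., Tybalt is permitted only angry actions).
CHARACTER_CONSTRAINTS = {
    "Tybalt": {"angry"},
    "Romeo": {"loving", "sad"},
}

@dataclass
class Interaction:
    timestamp: int   # Facebook-style ordering key (wall post time, etc.)
    actor: str
    action: str      # e.g., "wall post", "event invite"
    mood: str

def plot_order(interactions):
    """Sort interactions by timestamp so a reader can discern
    the progression of the overall plot."""
    return sorted(interactions, key=lambda i: i.timestamp)

def violates_constraints(interaction):
    """True if a character acts outside the moods the close
    reading allows them."""
    allowed = CHARACTER_CONSTRAINTS.get(interaction.actor)
    return allowed is not None and interaction.mood not in allowed

events = [
    Interaction(2, "Tybalt", "wall post", "angry"),
    Interaction(1, "Romeo", "event invite", "loving"),
]
ordered = plot_order(events)  # Romeo's invite now precedes Tybalt's post
```

The timestamps do the work the students highlight: unlike free-form hypertext, the ordering is unambiguous, while the constraint check captures their rule that, say, Tybalt may only perform angry actions.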

I was really hooked now and could not wait until I got back to a computer to go online and explore these links and examples. What wonderfully creative ways to learn both narrative structures and programming.  I know I have just found an important source of inspiration for the next generation of “content with context” software I want to build.

“One of the most important vehicles for the digital humanities is to create projects.  An example of a project at Duke is The Haiti Lab (Cathy Davidson also used this example). The project focuses on a wide range of topics associated with Haiti including art, demographics, and epidemiology.  The project members provide a vertical integration with undergraduates, graduate students, post doctoral researchers, and professors,” she elaborated.

Our time was almost up and I knew I could find out more about these topics from her books, the chapters that she would send, and poking around her online references.  So I moved on to ask her: if she were doing an idealized design of a university, what are some of the qualities she would want embraced in the design?  Kate shared her top three qualities:

  1. Collaboration.  The focus of everything in the new university has to be collaboration.  There is just too much for any one person to master.  We have to prepare students for the way of the world now – collaboration.
  2. Flexible spaces.  Space is more bitterly fought over within the university than any other resource.  Yet, our facilities are designed almost exclusively for lecture based classes.  We need spaces that are open and can be reconfigured quickly with no fixed seating.  We need spaces where work can be left on the walls or partitions so that it can be seen and commented on by others.
  3. Rethinking the curriculum. We need to jettison the categories and departments that don’t make sense anymore.  So many of the departments within the university are ossified and self-perpetuating.  The sciences are much better about regularly revisiting the curriculum than the humanities.  The curriculum has to be multi-disciplinary.

I thanked Kate as we finished up and asked her if she would invite me back in the fall to sit in on one of her courses that explored humanities students creating the software for “digital texts.”  Kindly, she thought that would be a great idea.

The next morning the three chapters she had promised from her forthcoming book How We Think: Digital Media and Contemporary Technogenesis showed up in my inbox.  On my flight back to Seattle, I read these chapters along with finishing up David Weinberger’s Too Big to Know. The unintentional pairing of these two documents shed even more light on the challenge of “networked knowledge structures,” which require collaboration and storytelling to make meaning.  Kate shared that the timeless questions from her perspective are:

  • How to do?
  • Why we do?
  • What it means to do?

She points out that the latter two questions are what the humanities are really good at understanding. My focus is on the first two questions.  I guess we will meet in the middle.

While I was very appreciative of the work that Kate and her fellow travelers were creating, it had never dawned on me that, from completely different directions, we might be developing the same types of tools.  In Chapter 2 of How We Think, Kate pointed to the Digital Humanities Manifesto 2.0 to describe the first two waves of the new field:

“Like all media revolutions, the first wave of the digital revolution looked backward as it moved forward. Just as early codices mirrored oratorical practices, print initially mirrored the practices of high medieval manuscript culture, and film mirrored the techniques of theater, the digital first wave replicated the world of scholarly communications that print gradually codified over the course of five centuries: a world where textuality was primary and visuality and sound were secondary (and subordinated to text), even as it vastly accelerated the search and retrieval of documents, enhanced access, and altered mental habits. Now it must shape a future in which the medium‐specific features of digital technologies become its core and in which print is absorbed into new hybrid modes of communication.

“The first wave of digital humanities work was quantitative, mobilizing the search and retrieval powers of the database, automating corpus linguistics, stacking hypercards into critical arrays. The second wave is qualitative, interpretive, experiential, emotive, generative in character. It harnesses digital toolkits in the service of the Humanities’ core methodological strengths: attention to complexity, medium specificity, historical context, analytical depth, critique and interpretation. Such a crudely drawn dichotomy does not exclude the emotional, even sublime potentiality of the quantitative any more than it excludes embeddings of quantitative analysis within qualitative frameworks. Rather it imagines new couplings and scalings that are facilitated both by new models of research practice and by the availability of new tools and technologies.”

As I eagerly read more, I realized that the tools of the first wave of digital humanities were trying to recreate what we built with Attenex Patterns.  The second wave of digital humanities was partially implemented in Attenex Patterns and is extended through what I’ve been calling “content with context” as I research and design this tool set for applications like patent analytics and loyalty marketing. This new tool set provides the visual analytics needed for semantic networks, social networks, event networks, and geographical networks. Who would have believed that I would find such great resources in a distant context – digital humanities – to extend my design to include features like curated storytelling.
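To make those four network types concrete for myself, here is a minimal sketch (my own invention, not Attenex code; all names and sample data are hypothetical) of a single edge store where every edge is tagged with the network layer it belongs to, so semantic, social, event, and geographical views can be queried side by side:

```python
from collections import defaultdict

class ContextGraph:
    """Toy multi-layer graph for content-with-context style analytics:
    edges are tagged with a network type such as "semantic", "social",
    "event", or "geographical". Purely illustrative."""

    def __init__(self):
        # layer name -> list of (node_a, node_b) edges
        self.edges = defaultdict(list)

    def add_edge(self, layer, a, b):
        self.edges[layer].append((a, b))

    def degree(self, layer, node):
        """How connected a node is within one network layer,
        a first building block for visual analytics."""
        return sum(node in pair for pair in self.edges[layer])

g = ContextGraph()
g.add_edge("social", "Eliot", "Pound")        # Pound edited the manuscript
g.add_edge("semantic", "Eliot", "modernism")  # concept association
g.add_edge("social", "Eliot", "Shaw")         # Shaw performed the poem
```

Here Eliot has two social links while Pound has no semantic ones, exactly the kind of layered question a visual front end would render rather than compute by hand.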

Once again I am reminded of the old proverb “When the student is ready the master will appear.”  Like Russ Ackoff, I am grateful to a collection of students and colleagues who with gracious synchronicity point me to the human talent that I need when I need it.

Digital Humanities – Really?  Yes, Really!

Posted in Content with Context, ebook, Human Centered Design, Idealized Design, Intellectual Capital, iPad, Knowledge Management, Learning, Relationship Capital, Russ Ackoff, social networking, Teaching, University, WUKID | 13 Comments

Cameron Crazie for a Night

I am an over the top obnoxious Duke Men’s Basketball fan.  I have to be, as the rest of my siblings and my wife and her siblings are Carolina graduates (now there is an oxymoron). Ever since I entered the hallowed halls of Cameron Indoor Stadium on the Duke West Campus as a freshman, I have cheered the Blue Devils through good years and bad.  Until Coach K came along there were a lot more bad years than good.

As the gods would ordain it, I was in Asheville, NC for a family life event last week.  When I realized that I would be so near Durham, NC, I decided to spend a couple of extra days and see if I could get a meeting with a recent addition to my “invisible university,” Duke professor Cathy Davidson.  I arranged for the meeting and called my sister in Chapel Hill, NC to see if I could spend the night with her family.  She was overjoyed and reminded me that it was my nephew Ross’s 16th birthday.

At our family gathering, I asked my nephew Abe, whose son had just turned 16, what I could get Ross for his birthday.  He laughed and said “If you are not giving him a car, 16 year old boys don’t seem to care about much else.”  A car was out of the question, but I got to wondering if Duke might have a home game on Thursday night.  As luck would have it, Duke had a home game with Wake Forest.  So I then went to StubHub to see if there were any tickets available.  Eureka.  There were.  However, as I checked out the prices, they spiraled upward to over $200 a seat as game time approached.  As much as I would like to irritate my sister, these prices were a little rich for me.

My sister suggested that the Carolina Hurricanes hockey team might be in town and we could go as a family group.  Indeed the Hurricanes were in town so that was a viable option.  However, I really wanted to see if I could get into Cameron Indoor Stadium and irk my sister something precious. So I checked the Duke official ticket website and I couldn’t believe it.  There were lower bowl tickets for $65 apiece.  It never occurred to me that I could get a seat at floor level in Cameron.  I wasted no electronic time ordering the tickets.

I picked up my nephew, gave him his official Duke basketball shirt (angering my sister in her UNC sweatshirt mightily) and we headed to Durham for the game.  When I’d picked up the tickets from Will Call earlier in the day, the agent said that I should get there at 4pm to line up, as the tickets weren’t for reserved seats.  As we hurried to the game, I made sure that I took Ross through Krzyzewskiville to see the camping students.  We were able to get in line about 5:30pm and we were only 40 fans back.  Earlier in the day, Cathy Davidson had talked about the amazing K-ville constitution that the students develop and ratify each year.  This model of student governance is what led her to have the students develop a constitution in her class.

The doors opened at 6pm and we crowded in.  When what to my wondering eyes would appear, we were channeled into the Duke student section.  We were actually going to be able to stand on the top row of the student section bleachers directly across from the Duke bench.  I had truly died and gone to Duke Blue Heaven.  Ross had an ear to ear grin on his face and was distracted in every direction looking at the goofy costumes and painted students and the never ceasing cheers.

The last forty years of living melted away, and I was that goofy teenager attending my first Duke basketball game in 1967.  The mind is an amazingly plastic set of memories, allowing me in a matter of a few minutes to go from accomplished professional back to young college student cheering his brains out.

As I scanned the stadium’s rafters, there were lots of wonderful additions in the form of four National Championship banners.  The newest addition was a big banner celebrating Coach Mike Krzyzewski’s becoming the NCAA’s winningest men’s basketball coach.  Lots of the attendees were wearing T-shirts proclaiming “903 wins and counting.”

Then the memories returned one by one.  I remembered sitting under the Duke basket one night and having Princeton’s Bill Bradley, future Senator and NBA Hall of Famer, land in my lap after being fouled by “Mountain” Mike Lewis from Missoula, Montana.  I remembered the feverishly awaited showdown with the University of South Carolina when they were in the ACC.  We Cameron Crazies were out for blood early that night and couldn’t wait to swear at full voice at John Roche and Bobby Cremins and the rest of the “dirty” Gamecock players coached by the get-all-red-in-the-face New York Irishman, Frank McGuire.  Coach McGuire was doubly hated because he had coached successfully at UNC Chapel Hill for many years, winning a National Championship in 1957.

Since the students could be right behind the visiting team’s bench, we were all wondering how McGuire was going to get any coaching in with the shouted profanities that would be going on all night.  Imagine how we all were rolling on the floor laughing when we walked in and saw eight New York Irish Catholic priests sitting in the bleacher row right behind the South Carolina bench.  Assistant Coach Hubie Brown had an uncle who was a priest and he’d paid for his uncle and seven of his colleagues to come down and “protect” one of their own in Frank McGuire.  I don’t know that any cameras captured Coach McGuire’s face as he broke up in laughter when he saw his protective phalanx of Irish priests.  Frank eagerly went over and shook the hands of all the priests and thanked them for being his guardian angels that night.

I have no idea who won the game, but I do remember that strangely enough there was no profanity in the air.

When I started at Duke, freshmen were not allowed to play on the varsity team, so there was a separate freshman basketball team.  Two of the team members lived in our dormitory.  I can remember many nights coming back to the dorm and having to slither my way through the narrow, short hallway where three 6′ 10″ outsized human beings were gathered, hunched over to avoid hitting their heads on the ceiling.  I didn’t even come up to their belt buckles.  I can remember sitting in the stands at their games and looking at the point guard, Dick DeVenzio, and marveling at what a shrimp he was. Then my roommate would laugh and remind me that Dick was taller than I was.

It is raining threes at Cameron

Before I knew it the Duke-Wake Forest game started, and there I was with the most unexpected seat in the house.  I was back in the student section, the Cameron Crazies hallowed ground.  I was taking pictures from every direction and emailing my brother and sister and taunting them about how great Duke is.  Their taunts came flowing right back.  All is right in the world.

As expected, the Cameron Crazies cheers were as inventive as ever.  So many of them never make it to TV.  I particularly loved the faint murmuring of “four, four, four” accompanied by the waving of our fingers whenever a Wake Forest player received his fourth foul.  Yet, I was bummed as I never heard the cheer I really wanted to hear. Oh ye of little faith.  Wait for it … In the last thirty seconds of the game, the Crazies cranked it up: “Go to Hell Carolina!  Go to Hell!”  Yes, all is really right with the world now.

I sent photos out to my children, and my sports fanatic lawyer daughter, Maggie, immediately replied asking what kind of dad I was because I’d never taken her to a Duke basketball game.  Now that I know that I can get Duke basketball tickets, I look forward to fixing that oversight.

To put a delightful stamp on the night, Duke beat Wake Forest, 91-73.  What a joy to be 17 years young again and remind myself about where and how my life’s journey started – growing up in Cameron Indoor Stadium.

For those of you not familiar with the Duke-Carolina rivalry, I am told there is a wonderful book about it – To Hate Like This is to be Happy Forever:  A Thoroughly Obsessive, Intermittently Uplifting, and Occasionally Unbiased Account of the Duke-North Carolina Basketball Rivalry.  Even though I buy hundreds of books a year, I’ve never felt the need to buy this book – I’ve lived it for forty years.

Shortly after sharing this post with my brother he sent this photo to remind me that there is another side to this story:

Since posting this blog entry, several articles have shown up on the decline of interest in Duke Basketball among the students.  Their boredom was my gain in being able to relive the Duke Basketball experience.

Posted in Sports, Travel, University, User Experience | 3 Comments

Beautiful Day at Duke University

What a quick way to drop away forty years of my life as I revisited Duke University this week.  As I walked around the quadrangles, so many memories from my four years as an undergraduate came flooding back.  Thoughts I had not remembered for a long time seemed like just yesterday.

The sky was a deep “Duke Blue” today, not that Carolina faux blue.  Enjoy a couple of pictures from West Campus and East Campus.

Duke Chapel on West Campus

Duke East Campus

What made the trip special was meeting with Cathy Davidson and Kate Hayles in the afternoon to get as much insight into the digital humanities as I could absorb.

Synchronicity struck as the Duke Men were playing Wake Forest in basketball and I got to take my 16 year old nephew to his first Duke Basketball game.  We became Cameron Crazies for a night.

Posted in Learning, Photos, Travel, University | Leave a comment

Envisioning the Visual Analytics Future – circa 1986

The Fantastic Voyage – Computerworld, November 24, 1986 (John Kirkley)

The following article appeared in Computerworld and described a talk I gave about a potential vision for a powerful visual analytics user interface.  I had not remembered this article until finding it while preparing a series of blog posts on the Making of ALL-IN-1.  As I re-read this article, I realized how much of this vision we captured when we created the Attenex Patterns product for legal electronic discovery in 2000.  While my conscious mind had forgotten all about these ideas, the thoughts were clearly wired into my thinking process.

“Skip Walter clapped his hands together loudly, startling several people in the front of the meeting room.

“Walter, Digital Equipment Corp.’s manager of business office services and applications and the “father” of All-In-1, was making a point.

“He was telling the attendees at a recent industry executive forum about some of his explorations into the nature of communications, explorations that could eventually lead to the design of new and radically different office information systems.

Sophisticated methods

“Right now, he said, the computer human interface is primarily visual and character-based. It works fairly well. But every day, in the simplest person-to-person interchanges, human beings use far more sophisticated methods of assimilating, storing and communicating information.

“To illustrate, Walter recalled a financial officer at DEC who used an interesting analogy to explain why substantial cash reserves were necessary for a fast-growing company.

“It is like driving down a highway, the man said, in a car marked Income. Right behind you is another car with Expenses painted in big red letters on the side.

“Now you know, for safety’s sake, the distance between your car and the car behind you — the safety zone. That is your cash reserve.

“Now if you’re driving at a sedate 10 mph, you don’t need much space between vehicles. But here you are, clipping along the thruway at 60 mph in your souped-up, fuel-injected Income sports car. If you’re without adequate cash reserves, you’re racing down the highway with that Expenses car a mere four inches from your rear bumper. If your income falters for even an instant, what happens? Wham! And here Walter clapped his hands together, making his listeners jump.

“The story, he explained, took a dry accounting idea and made it understandable and memorable by appealing to all the senses we use to assimilate information — the visual (you can see the cars), the auditory (the hand clap) and the kinesthetic (you can feel yourself speeding down the road and sense in your gut the wrenching impact of two crashing vehicles).

“What people want, Walter explained, is communication, not information. ‘I receive 100 mail messages a day,’ he said. ‘That’s 600 to 700 pages of information. I can’t physically scan, much less read to understand, all the trade publications and books I need to. Or talk with all the people I need to. I don’t need more information; what I need is a way to communicate with others and with myself about the meaning of facts — not moving these facts back and forth.'”

“Walter was delving into difficult questions. He was probing that often explored but little-understood arena where people, processes and technology combine to form what he characterizes as a living, intelligent structure.

“William Wordsworth, when writing about the modern scientific reductive method, which attempts to understand a process by chopping it up into separate pieces, said, ‘We murder to dissect.’ The point is that the intelligence of the structure cannot be isolated: It is enmeshed in the total structure itself.

“To communicate knowledge across this living network of people, processes and technology, new and innovative methods of presenting information must be developed. They must involve our visual, auditory and kinesthetic senses.

“To illustrate his point, Walter unveiled some proprietary research on which his group is working. He showed several short videotapes about a mundane subject — database design and information retrieval.

“But the tapes were far from mundane. The attendees saw the data elements in three dimensions and in color. Elastic connectors, fine white filaments, stretched between the data elements, visually indicating the web of relationships. It was reminiscent of the film Fantastic Voyage, in which the characters, miniaturized by technology, enter a man’s body and use a tiny submarine to sail through the uncharted regions of his body.

Shifting relationships

“In the DEC video, you move in three dimensions among the data, changing it, rearranging it, retrieving it, observing in real time how the relationships between elements shift.

“More important, because of the way the data is presented, you are able to bring your intuitive faculties to bear as you roam this digitized landscape.

“The videotapes were rudimentary, but the possibilities are fascinating. Imagine adding sound and a joy stick. You could zoom among the towering structures that you have built like an intergalactic fighter pilot from Star Wars. Others could join you in this network of information and ideas, and, like explorers mapping uncharted territory, you together discover new relationships, new roads to explore. Unlike real life, if you fall off a cliff, it’s not fatal; you simply push the reset button and try again.

“As Walter sees it, the next step is deceptively simple but hard to realize: the design of human interfaces that use sound, pictures and movement. As this approach develops, we will be making the first tentative steps toward tapping the tremendous capabilities latent in the partnership between man and technology.”

As I get ready to develop the next generation visual analytics software, it is a delight to see how much of this thinking has been a part of my conscious and unconscious processes for forty years.

What else should we be thinking about as we launch the development of “content with context?”

Posted in ALL-IN-1, Content with Context, eDiscovery, Human Centered Design, Idealized Design, User Experience

Ode to Steve Jobs

On the way to somewhere else in preparing the last couple of blog posts, I came across a reflection document I prepared in 1990, “ALL-IN-1 Ten Years Later.”  Buried in the document was an article that caught my eye about Steve Jobs’s vision for NeXT Computer.  Now that we are twenty years farther along, I found it interesting to reflect on Steve Jobs’s vision alongside the ALL-IN-1 vision.

In the January 29, 1990, issue of Businessweek, an article appeared about Steve Jobs’s vision of what the NeXT computer meant for business: NeXT is about making electronic mail systems happen. Excuse me. Maybe timing is everything, but I thought that was our vision for ALL-IN-1 in the 1980s. Then I got to thinking that for his target market – PC users upgrading to workstations and Local Area Networks – electronic mail that automates workflow processes (like expense reports, capital appropriation requests, etc.) probably is a far-out vision. I also remembered a conversation with the DuPont account team as DuPont was doing a review of their return on investment for their ALL-IN-1 systems. DuPont realized they had only put the infrastructure in; they had not implemented the customized workflows of the original plan.

The following article represents a view of the 1980s in relation to the VISION of what office automation is (or could be) about.

The Third Wave According to Steve Jobs

“What’s good for Next Inc. will be good for Corporate America. That’s how Chairman Steven P. Jobs sees it. “We put a NeXT on each desk, and it changed the company in ways I never expected,” he says. Those computers, networked and programmed with sophisticated electronic messaging software, have nipped the incipient bureaucracy that can slow even a 300-employee startup. Once refined, Jobs adds, such systems will launch a third wave in PCs – “far bigger than spreadsheets and desktop publishing.”

“Jobs says that the third wave will raise productivity by doing away with paper memos, forms, and even phone calls. At NeXT, for instance, purchase requisitions are written on a computer, passed along the network for approvals, and checked against the budget. Only then is a paper purchase order printed. Check requests work the same way. Schedules, notices, and announcements are posted electronically. And because electronic mail lets participants prepare better, meetings are more productive. “They’ve been cut in half, and more people get involved in key decisions,” Jobs says.

“Detractors note that this is not unique and can be accomplished with cheaper computers, such as IBM PCs. “I don’t see where it’s all that different,” says John R. Lynch, director of business markets for NeXT rival Sun Microsystems Inc.

“But Jobs insists that built-in features, such as software that lets the Next computer do several things at once, will make it a networking standout. PCs have options for handling digital sound for voice mail, but sound is standard on Next, as is voice-mail software. Today, Next uses custom software to exploit such functions. When third-party companies develop special software for that – later this year – then Corporate America can try the third wave, too.”

Rest in peace, Steve Jobs.

Posted in ALL-IN-1, Knowledge Management, User Experience

Good Software Never Dies – ALL-IN-1 becomes Enterprise Vault

In 1979, John Churin and I created an enterprise Office Automation product called ALL-IN-1.  I left the full-time management of the project in 1986 and then left Digital Equipment Corporation in 1990.  Over some 18 years, ALL-IN-1 generated $1 billion in sales for DEC.  I next encountered ALL-IN-1 while consulting with Health Partners in 1999.  I was pretty confident that the software wouldn’t last beyond 2000 because of several Y2K issues I knew were buried in the software.

By 2000, I was enmeshed in yet another startup, creating Attenex Patterns for eDiscovery. In 2007 we lured Greg Buckles (now of eDiscovery Journal) away from Symantec where he was a senior product manager for the Enterprise Vault product.  Greg has an impish sense of humor and was constantly dropping random hints that we had a very interesting professional connection.  Even when he talked about spending a lot of time in Reading, England when he was at Symantec, I still didn’t catch on.

One afternoon, in a bar of course, at the St Paul Hotel in St Paul, MN, with our colleagues from a recently completed EDRM meeting, we were challenging each other with past war stories. Present at the table were George Socha (founder of EDRM), Laura Kibbe and Kevin Esposito (formerly Pfizer’s Directors of eDiscovery), Greg and me.  Greg suddenly announced “There is somebody at this table that has created the two highest revenue generating products in the eDiscovery industry.  Can you guess who?”

I knew that I had created one of them in Attenex Patterns.  Yet, as I looked around the table, I wasn’t aware that any of my colleagues had ever created even one software product.  Greg elaborated further “And Pfizer uses both products every day.”

Now we were all confused. Then with a big Cheshire Cat grin, Greg shared “Skip, it is you.”

I responded “What are you talking about Greg?  I created Attenex Patterns, but not any other eDiscovery products.”

Greg’s grin got even wider when he shared “You never knew that Symantec’s Enterprise Vault product was really ALL-IN-1.”

I had no idea.  So for the next half hour, Greg shared the story of how when DEC was sold to Compaq, Nigel Dutt, one of the UK ALL-IN-1 developers, bought ALL-IN-1 and turned it into today’s Symantec Enterprise Vault (after some intermediary mergers and acquisitions).  I was stunned.

A month later on one of his visits to Seattle, Greg came over to our house for a wine and glass tasting with some wonderful Archery Summit Pinot Noir.  He was kind enough to share the evolutionary story of ALL-IN-1 with my wife.  She pleaded with him to write the story so that she could share it with our children.  Greg agreed.

A few days later this wonderful fairy tale showed up from Greg:

“Once upon a time, in the old kingdom of DEC, there lived a wizard named Skip. This wizard belonged to a cabal that was building a special spell, called the All-In-One Spell. The cabal worked long and hard to make the spell that would store whatever you needed. Then the evil Compaq Compact plotted the overthrow of the kingdom of DEC. The wizard Skip escaped the invasion, but many of the cabal were captured and forced to work on the spell, turning it into a spell vault to hold all the whisperings within a kingdom safely locked away.

“But the evil Compaq could not control the cabals, wizards and spells that they had taken, so many wizards escaped and took the spells with them. The wizard Nigel escaped with the Vault and ran back to the same castle in Reading where the spell had first been born. He gathered as many of the original cabal as he could find and they cast the spell for many kingdoms.

“The Vault spell was so powerful that almost all of the greedy gnomes with banks along the great Street of the Wall used the spell to listen to the whispers of all of their trader gnomes. But the trader gnomes got greedy and began to steal from everyone. So the great sheriff Spitzer declared war upon the merchant gnomes and all the other great houses of trade. He demanded all of their hoarded whispers to find the bad gnomes.

“In the far west, the House of El Paso struggled to fend off the attacks of the sheriffs. They hired a young wizard named Greg to find all the whispers and deliver them to the sheriffs, but he needed the Vault spell to do it. He made another spell to work with the Vault. A spell that saved his House and many other Houses under assault.

“The time of troubles passed, but Kingdoms, Counties and Houses across the world wanted the Vault and the new Discovery spell to protect themselves. Great bags of gold came to the Knowledge Vault Sorcerers and they grew rich. So rich that great kingdoms vied to buy the secret code of the Vault. The kingdom of Symantec bought the spells and many of the Cabal went forth to start new cabals that created spells to understand all of the whispers captured in all of the Vaults.

“Wizard Skip had created many spells since his time with the kingdom of DEC. His reputation had grown and his spell was the strongest spell for understanding the Patterns in the whispers. The House of Ten Times was known far and wide for the power of their spells.

“Always seeking to work with the best spells in all the lands, wizard Greg joined the House. He knew that wizard Skip had helped create the great Vault spell, but did not reveal his own spells to Skip. For many moons they worked together, but the House of Ten Times had  become too complacent with the success of the Patterns spell. Each wizard decided that it was time to leave. Only then, did the younger wizard show to his elder how his code had grown and what it had become.

“Small spells may become great spells and great spells may give birth to small spells. The wheels turn, but the Patterns remain the same.”

What amazes me some 32 years later is how a software product that two of us started in a tiny office in Charlotte, NC, is still alive and well, generating more than $250M a year in revenue.

Posted in ALL-IN-1, Content with Context, Relationship Capital, social networking, User Experience, Value Capture, Wine

ALL-IN-1 Philosophy

In the process of describing the Making of Enterprise Software – ALL-IN-1, I came across the ALL-IN-1 Philosophy we published in 1982.  I was impressed at how much I still adhere to this philosophy thirty years later.  This philosophy was the product of my collaboration with John Churin.  It is interesting to reflect on this philosophy as we experienced the evolution of office automation systems from ALL-IN-1 to Lotus Notes and to the present day of Microsoft Office and Exchange and Google Mail and Apps.

Behind the development of any major software product is a philosophy of why the product was developed and a model of the environment in which the product is expected to perform. Underlying the philosophy are also the business requirements of the organization funding the product development. This chapter (circa 1982) provides the major elements that went into the development of the Digital Equipment Corporation ALL-IN-1 product.

The following is a list of the major philosophical points:

  1. Provide automated information access to the widest possible base of users.
  2. Understand customer needs by focusing on solving today’s key business problems with today’s technology.
  3. The needs of the office worker are evolutionary.
  4. An automated office information system must be flexible to easily adapt to multiple user types, changing needs of the users, multiple input/output devices, and changing technology.
  5. The system should appear as an integrated whole to the end user.
  6. Office information systems tie into or touch all parts of an organization.
  7. An office system should be self-contained.
  8. The individual user should feel ownership of the system.
  9. The user should be rewarded or acknowledged as an individual by the system.
  10. The value of an automated system is only understood through demonstration of how functions are used to solve current key business problems.
  11. The product development efforts should be directly funded by the customer base, on the direct merits of the software.

1. Provide automated information access.

The goal of office information systems is to provide an information access utility for the widest possible population of users. The utility should provide the ability to access data no matter which computer system it resides in, either internal or external to an organization, so long as appropriate security conditions are met. The utility should then provide the tools for the user to turn the data into information. Once the information is created, it should be communicated to the appropriate people in an automated fashion. What is one person’s information may be another’s data.

This philosophy can be summarized by: getting the right information to the right person at the right time.

The user base should be thought of in its largest possible sense, not just the members of one’s own organization but also the other organizations that must be communicated to. Examples would be the exchange of paperwork that accompanies the exchange of goods and services between two organizations (purchase orders, status replies, invoices). In addition, information utilities are increasingly being shared by several organizations (Dow Jones, The Source, and CompuServe).

2. Understand customer needs.

The key to success is the understanding that office information systems are needed to solve the key business problems within an organization. The focus of product development and installation within the office environment should be on the business problem, not the available technology and what COULD be done. More than enough technology exists today to solve today’s problems; what does not exist is an understanding of how to apply the technology to specific customer needs.

Understanding customer needs is difficult to do in practice. Much time needs to be spent listening to the people who will use and have used office systems to understand their needs. Then “working models” need to be built quickly to determine if the users’ needs were really understood. In general, office users are not very good at stating their business problems so that we may build solutions. However, these users can very quickly observe a working model and determine whether the model meets their needs.

The key to success for the installer of an office information system is to find the key business problems within their organization and then use today’s products to start solving those problems. The key to success of the product developer is to understand enough different specific customer environments, abstract to the general solution for these problems, and then build a product which can adapt to specific user needs.

Early on in the development of systems which would meet customer needs, we found that Digital had a superb set of tools which could be applied to the office environment. However, these tools were designed for programmers, a very small portion of the overall user community. Our task was then to figure out how to take these general purpose tools and apply them to non-programmers for specific business problems. This approach of using existing tools was born of necessity. We did not have the time, resources or funding to invent the perfect solution. We spent our time understanding available products and applying them to current problems.

3. The needs of the office worker are evolutionary.

The information access needs of the office worker are dynamic. Therefore the system must evolve to meet these needs. Our philosophy has been that the customer is pursuing a journey and not a destination. Office automation is really a process not a single product.

The process of accessing information is one that is practical; it is accomplished through trial and error, as there is no theoretical base for how an office worker performs their job.

Office systems need to be available at all times. This capability is actually an extension of the current environment, where the office is only accessible when one is physically in it. The logical extension of an automated system is that it should be available wherever and whenever the office worker is available. Reliability is implicit.

Users will use the system in very different ways than ever intended by the implementor. User needs change rapidly over time.

Functionality before efficiency. This philosophy falls out of the rapid growth in feature needs that most users discover when exposed to a good system. These needs are more important than the relative efficiencies of the process. This philosophy assumes that others will provide the needed efficiencies, primarily in the base hardware and software.

The system should evolve by adapting to the work patterns of the user. A logging mechanism is critical for this and many of our other philosophical tenets. The system should allow the user to modify the application to adapt to his work flow, and where necessary the resulting procedure should become a permanent part of this user’s view of the system.

4. The automated system should be flexible and adaptable.

The automated office system should have a flexible structure and be able to adapt to a wide range of users and a wide range of input/output devices. The types of users that the system must adapt to range from novice to expert, secretary to professional to manager. Input may come from a terminal, OCR, a touch panel, a touch-tone phone, or bar code readers. Output may go to other applications, printers, terminals, graphics displays, or voice output.

Since there is such a variety of users and devices that the system must interact with, general purpose tools were used as the starting point. These tools allow the application programmer to build working models quickly as well as modify them to suit different users or different input/output devices.

Most vendors are supplying the same types of base functions: word processing, electronic mail, calendar, desk management, data processing, and graphics. Each vendor can pull out an impressive feature and function list. However, most of the applications were not designed to work together and the user has to manually move from application to application. Since these systems generally hard code the user interface in each application, the customer is unable to modify the interface to the function or modify how the function performs.

What attracts most customers to ALL-IN-1 is the ability to have a user interface which ties very diverse applications together. At the same time, this interface may be customized to suit the local users’ needs. With this interface, programs and applications which were used by programmers can be extended to office users.

5. System should appear as integrated whole.

To provide a system which is easy to understand and use, the system should achieve a high level of integration. The individual components should appear to work together and be part of the same structure.

Integration can be viewed as a spectrum:

<—- Features Only —- Common Structure —- Fully Integrated System —>

At the Features Only end, applications are developed as independent entities. The user must come back to a command level before accessing the next application. Data or information is not automatically passed from one application to the next. An integrated system would be developed from scratch and would have all applications tightly coupled to the others.

There is a natural conflict between features only and full integration. It is very easy to develop features only and extremely complicated to develop an integrated system. Further, the integrated system is very difficult to modify, as a change in one application could potentially affect all other applications.

Due to the dynamics of the office environment, we felt that a fully integrated system was not possible. Rather, a common structure should be built of independent modules that would have a common flow control for moving information between the independent applications without the user at the terminal having to do so manually.
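The “common structure” idea — independent applications coordinated by a shared flow control that passes information between them — can be sketched in miniature. This is purely an illustrative sketch: the module names, the dispatcher, and the data passed are hypothetical, not part of the actual ALL-IN-1 implementation.

```python
# Minimal sketch of a "common structure" system: independent applications
# share one flow control that routes the user between them and carries data
# along, so the user never drops back to a bare command level.

def mail(context):
    # Independent application: produces data for downstream applications.
    context["message"] = "Draft purchase order"
    return "forms"  # name of the next application in the flow

def forms(context):
    # Independent application: consumes data left by the previous one.
    context["form"] = f"PO form filled from: {context['message']}"
    return None  # end of the flow

APPLICATIONS = {"mail": mail, "forms": forms}

def run_flow(start):
    """Common flow control: moves information between applications
    without the user shuttling it manually."""
    context, current = {}, start
    while current is not None:
        current = APPLICATIONS[current](context)
    return context

result = run_flow("mail")
print(result["form"])  # PO form filled from: Draft purchase order
```

Because each application only reads and writes the shared context, any one of them can be replaced or customized without touching the others — the looser coupling that made a fully integrated, tightly coupled system unnecessary.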

6. Office systems touch everything within an organization.

Since data arises in almost every part of an organization and outside of it, office systems must be able to access the data to turn it into information and then be able to communicate with all other systems. Likewise, just about every data processing product ever developed interacts with an office information system at some point.

The office environment will be multi-vendor so it is vital that a full function communication architecture be available that allows for easy information transfer between systems.

7. System should be self contained.

As far as the end user is concerned, the office system should be self contained. All documentation, help, key concepts, and computer based instruction should be available through the user’s terminal. The user should not need a mass of documentation to operate the system, nor require extensive training away from her work environment. The system should be usable immediately. Thus, our philosophy pushed us to develop all collateral material in machine readable format for inclusion with the system.

8. Individual should feel ownership of the system.

The users of the system should feel like they own it in much the same way that they take ownership of and personalize their own desk environment. Further, the individual should have ultimate control of what happens within his environment. Other users should not be able to invade his environment without the permission of the individual. Examples: incoming mail should not go into the user’s file cabinet until he says to do that. Meetings should not be permanently scheduled unless the schedulee is aware of and agrees to the meeting.

9. User should feel sense of reward.

The user should have built-in rewards from the system. The system should acknowledge the user as an individual. Examples of acknowledgment: the user’s main menu should grow from a single entry for Computer Based Instruction (CBI) to the total list of functions he has gained an understanding of by going through the appropriate CBI (avoiding the ALL-AT-ONCE syndrome). The system should analyze the log files periodically to detect work flow patterns or the need for more training in areas that the user has not used. If a user is entering orders, the system might come out with an acknowledgment at suitable intervals (say, every $1 million of orders) or let the user know how many clean orders they entered.
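That kind of log-driven acknowledgment is easy to picture in a few lines. The function name, the order values, and the milestone interval below are illustrative, following the $1 million example in the text rather than any actual ALL-IN-1 behavior:

```python
# Sketch of milestone acknowledgment driven by usage logs: each time the
# cumulative value of orders entered crosses another $1,000,000 boundary,
# the system congratulates the user.
MILESTONE = 1_000_000

def acknowledgments(order_values):
    total, crossed = 0, 0
    messages = []
    for value in order_values:
        total += value
        # Emit one acknowledgment per newly crossed $1M boundary.
        while total // MILESTONE > crossed:
            crossed += 1
            messages.append(f"Congratulations: ${crossed}M in orders entered!")
    return messages

print(acknowledgments([400_000, 700_000, 950_000]))
# ['Congratulations: $1M in orders entered!', 'Congratulations: $2M in orders entered!']
```

The same scan over the logs could just as easily count clean orders or flag functions the user has never touched, which is the broader point of the logging tenet.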

10. System value understood only through demonstration.

People rarely think about what they do in their current office environment. We have found it extremely difficult to talk about what an office information system is or how it should be used. The users only understand the value, once they see a demonstration of the capabilities. An analogy is how we learn to drive a car. We don’t do it by reading about it, we do it by watching and then practicing.

11. Product development should be directly customer funded.

The development efforts for office applications should be directly funded by customers. This philosophy was born of necessity but has been key to the motivation of those involved. It places an emphasis on using existing products instead of inventing new ones, and keeps the development focus on solving real problems instead of inventing solutions which no one wants. A fallout of this philosophy is a built-in marketing test: if one customer is willing to pay for the development, chances are many more customers are, since the generic office needs are quite similar in large organizations.

Posted in ALL-IN-1, Content with Context, Human Centered Design, Idealized Design, Knowledge Management, User Experience, WUKID

The Making of Enterprise Software – ALL-IN-1

In the early 1980s I was part of Digital Equipment Corporation’s (DEC) Software Services group in Charlotte, NC.  The unit I was a part of consisted of 10 software specialists and a manager.  Due to the nature of our remote work at customer sites, we rarely saw each other.  Therefore, we always enjoyed getting together for our monthly unit meetings.  Yet, we were always disappointed when each meeting turned into yet another exercise in administrivia.  What should have been a great forum for sharing our learnings about our products and customer needs over the last month, shared problem solving, and new service creation brainstorming, always turned into going over the guidelines for filling out the latest paper form that had come down from the bureaucrats on high.

My officemate, John Churin, and I got so fed up after one meeting that we decided to design a solution to the administrivia problem.  Anything that would ease our pain would be greatly appreciated by others as well.  We determined that what we needed was a way to move the form filling from paper to networked computer systems along with a rich electronic mail and help system that would have more explanation about the forms than you could ever want.  We figured that if we could move the routine communications to email, our manager would know that we’d received and read the material and then we could use face to face time for learning and really creative stuff.  As luck would have it, John and I both had a couple of free days between customer engagements.

We spent the next two days in design mode rapidly iterating between the needs our organization had and the skills that existed between the two of us (real time processing, electronic forms design, database management, transaction processing, and networking).  We also looked at the expense side of what it was costing the company to do things manually with a horrendous amount of expensive multi-part paper forms.  We quickly estimated that it was costing $200 per week per specialist just to do the basic time reporting and expense reporting to support our revenue stream.  These costs included the paper forms, data entry personnel, maintenance programmers, and computer systems.  Not included in these costs were the management and facilities costs that a robust analysis would include.  We were excited just to find that our own unit had $100,000 of expenses associated with time reporting during the year.
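The back-of-the-envelope arithmetic behind that estimate can be reproduced in a few lines. The per-specialist weekly figure and the unit size come from the text; the 50-week working year is an assumption made to reconcile them:

```python
# Rough cost model for manual time and expense reporting, circa 1980.
# From the text: $200 per specialist per week, a unit of 10 specialists.
# Assumption: roughly 50 working weeks per year.
cost_per_specialist_per_week = 200
specialists = 10
weeks_per_year = 50

annual_unit_cost = cost_per_specialist_per_week * specialists * weeks_per_year
print(annual_unit_cost)  # 100000, matching the ~$100,000 figure in the text
```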

While the expense side was one part of the value equation, unfortunately that expense was in another part of the organization.  The only offer that would really fly in our department would be increasing revenue, either in the form of generating more systems integration projects or selling large amounts of DEC hardware.  We were quite pleased with our analysis of the needs and our initial design which would be based on electronic mail.  We realized that if you looked at email as more than just pushing text around, particularly adding a robust electronic forms capability, you could solve a wide range of our administrative needs.

On the “building it” side, we examined DEC’s product pipeline which consisted of large timesharing systems (DECsystem 10s and 20s), small real time computers (PDP-11s), and the recently introduced VAX systems.  While there was only one model of the VAX at the time (VAX 11/780) it was clear that this was the system of the future.  More engineering dollars were going into the VAX hardware and software platform than the other two combined.  We knew that there would be lower end models and higher end models so that configurations of very small and very large networks would be possible in the next two years.  While there wasn’t a lot of application software on the VAX, most of the PDP-11 software already ran on VAX systems.

As we emerged from looking at our own needs, we became aware that what we were really working on was what the marketplace started calling office automation (OA).  The components of office automation were: word processing, calendaring, electronic mail, spreadsheets, electronic forms, and data management.  These systems required both local and wide area networking.  Early competitors in the space were Wang, Datapoint, and IBM with a mainframe adaptation (PROFS).  At the time DEC was primarily known for its capabilities in the scientific/technical, medical and manufacturing environments.  However, the company realized that its next level of growth would come from the commercial markets.

Office automation provided a nice entry point into the commercial market because you didn’t have to go up against IBM’s mainframe database processing stronghold.  What all the competitive products had in common was fixed functionality – what you saw was what you got.  Our own needs and a cursory look at other needs showed that no two office automation problems looked alike.  You had to take a customizing and component approach in order to be successful.  Since the hardware was relatively inexpensive, you couldn’t require a large custom project to get the system going.

We spent some time discussing what our intent should be.  Did we want to just solve our unit’s administrative problems or did we want to go after something bigger?  If we were interested in building a product, should we form a new company, stay at Digital, or try and transfer up to the central engineering group responsible for building application products?  As we worked through these issues John and I realized that first and foremost we wanted to build a successful product.  We each had built what we thought were products before coming to DEC, but they hadn’t succeeded because we only looked at the coding part of the product effort.  We knew that to be successful, you had to think of a whole product (see diagram) and all the functions required to create the product, get it to market and sold, get it installed, and have it be supported.  We knew that the likelihood of the two of us doing it on our own was slim, but if we could use DEC’s resources then we had a chance of being very successful.  So we decided that our intent would be to build an office automation product within DEC, preferably within our systems integration group so that we didn’t have to relocate.

Reality set in after the two days, when John and I were pulled back to our customer commitments and our support of the sales representatives.  Yet, there was a subtle shift for both of us.  Where before any customer project was as good, bad, or indifferent as the next, now we were actively looking for customers which met our intent – building an office automation product.

The design of our product concept changed our worldview.  Now almost every customer or sales situation we touched seemed like an office automation opportunity.  We talked about our ideas to anyone who would listen.  While none of the sales reps or customers looked at us as a first choice, several were now looking at us as a last resort rather than giving up on a piece of business or project.

Our first break came at RJ Reynolds Industries (RJR).  They were looking for a way to replace their aging paper-tape Telex systems that were cumbersome to use and expensive to operate.  As RJR was expanding their number of office and manufacturing sites, the speed with which they could move information was becoming increasingly important.  We did a half-day analysis and realized that our preliminary design would nicely match their needs since this was primarily an electronic mail application.  Then the client gave us a rude awakening.  They liked our ideas but IBM had agreed to give them a systems analyst and a corporate telecommunications consultant for a month to analyze their needs.  We knew we could not match that offer but got the customer to agree to give us a chance to bid on the results of the IBM system analysis.

A month later we got called back in and given a copy of the IBM analysis.  My spirits soared.  The IBM folks had just drawn a few illustrations and copied some brochures.  I knew that we could do better in a few short hours.  We had recently installed one of our new word processors, so we could turn out a nice-looking document in short order.  I asked if we could come back the next afternoon with the analysis and proposal that we had been working on for the last month (a small white lie, but the work that we would do that evening would look like several months’ work compared to the IBM analysis).  The client agreed and we hurried back from Winston-Salem, NC to Charlotte, NC.  I phoned ahead to John to get him to start cleaning up some of the architecture diagrams that we had created previously.

Drawing on the early design work we created a 20 page analysis with several diagrams and a three page consulting contract to design their system for real, for a mere $50,000.  We took the proposal back the next afternoon and the customer was most impressed.  They never expected DEC to upstage IBM, and to do a free analysis in the process.  The customer agreed to our proposal and the next day sent us the approved purchase order.  This was a first for our region, getting a paid project just to do a specification.  We were off and running.

While John and I were the primary consultants on the project, having real customer dollars allowed us to tap into expertise around the country that we didn’t have.  Also, under the guise of project reviews, we received great guidance and criticism of the completeness of our designs.  We went back with a specification well over 100 pages long and a twenty page proposal for the next phase of the project.  Once again, the customer was most impressed, but then gave us the bad news that RJR was reorganizing and that this project was cancelled.  While disappointed, we now had a very complete specification that we’d been paid for.  We had received real customer dollars without requiring our own company to invest.

In parallel, we worked on consulting projects for Milliken in Spartanburg, SC, a very large textile manufacturer.  DEC was the major supplier for their plant industrial automation.  Milliken realized that as they automated processes like the creation of fibers and the making of carpets, those processes generated an incredible amount of paperwork that followed the products around.  They were starting to do the prototyping for a plant office automation system and were working with Datapoint on the prototype.  As they showed us what they liked about the Datapoint offering, we realized that with our specification and a little bit of rapid prototyping we could provide a much better system.

While Milliken was not willing to pay for our time to develop the demonstration, the District Sales Manager realized that he could sell them triple the amount of hardware systems if they bought our approach to office automation.  Further, he could keep an unwelcome competitor out of the environment.  So he funded our next activity which was to take our specification and create a demonstration.  We estimated that we could do that in three short weeks.

The other requirement that the customer had, and that no competitor could deal with, was the need for their large engineering staff to be able to add to the base capabilities of the system and to customize those capabilities.  So an important part of our demonstration was illustrating the rate at which new functionality could be added.  Therefore, a part of our offer and needs analysis was to have meetings with the client every three days to demonstrate progress and gain critical feedback on our approach and understanding of their problem.  In three short weeks we had a demonstration of our office automation system and the customer was quite impressed.  They were ready to buy.  However, we still had some important lessons to learn about marketing and sales proposals.

In parallel with our demonstration project, the senior management team of Milliken was meeting with the top management of Digital for their quarterly meeting.  The Milliken upper management knew that the plant office automation team selected Digital as the top vendor, but none of us had thought to brief top DEC management on our capabilities.  So when Roger Milliken, CEO, asked Win Hindle, DEC Executive VP, about DEC’s office automation products and could they see a demonstration of the products, Hindle replied that we had no products in that area.  Further, he told them that he expected that it would be two years before we would have any products in Office Automation and that there were no products to demonstrate.  Talk about snatching defeat from the jaws of victory.  No amount of lobbying on our part could overcome the authority of our own top management.  While we didn’t get the Milliken office automation project, we were selected as the hardware vendor of choice and the customer elected to develop their own proprietary system.

We now had a demo of our capabilities and we took every opportunity to demonstrate that DEC and the VAX computer systems were the best platform for office automation.  In short order, the DuPont sales team from Wilmington, DE, contacted us to propose our system to DuPont.  The DuPont productivity team wanted to do a demonstration project of what OA technology could mean for a Fortune 50 company.  We analyzed their pilot needs, recast our specification and demo, and made a proposal for a project to transform the demo into a scalable product for $100,000.  The sales team liked it and we started the multi-layer sales process required in old line hierarchical companies.  Finally, we made it to the top of the chain to present to Ray Cairns, DuPont Information Systems VP.

As part of the presales process, the sales rep and I went to lunch with Ray in the magnificent Hotel DuPont, and we talked quite a bit about the history of our efforts and the capabilities of our ideas.  He was an active questioner and probed far and wide about where we’d been and where we expected to go with the product.  He knew that we were developing this capability in conjunction with our customers and then would move it into DEC central engineering.  They liked our unique offer that for the cost of the project, they would have an unlimited use license for the software within DuPont.  Then, if they liked the tools, they would have to buy licenses for the next version of the product.  This offer allowed them to amortize the cost of the project over quite a few hardware systems which made the costs appealing to their financial analysts.  We appeared to be giving up quite a bit of future software revenue, but we were betting that we would have a new version of the product well before they were ready to deploy the software across a lot of systems.  This offer was win-win for both parties.

I relaxed and felt quite good that the decision maker would decide in our favor.  Little did I know what I was in for with the formal agenda with Ray and meeting all twelve of his direct reports.  We had professionally designed 35mm slides to present our story and product ideas.  At the end of the presentation, Ray asked several warm-up questions and then hit me with the question that set me back on my heels: “how has this product helped impact Digital’s bottom line, either positively or negatively?”  He knew from our lunchtime conversation that the product didn’t even exist, so it couldn’t have had much of an impact.  I knew he wasn’t a stupid man, so what was going on here?

In a flash, it came to me, that he wasn’t really asking about DEC, he was using me as a convenient foil to get critical education across to his management team.  I mumbled a few things about our unique approach to developing application software in conjunction with a customer.  Then, I turned the question around to the DuPont management team and asked them how they thought this product might affect DEC’s bottom line.  It is much easier to speculate about the cause and effect in someone else’s organization when you are at a level of optimal ignorance than to do speculation about your own organization.  What ensued was a great one hour conversation about the implications of such a product and technology on a large, complex organizational system like a Fortune 50 company.

After the meeting, we were awarded the order and we now had the funding to take the ideas of our specification and our demonstration into a full blown product.  The learning for me in this meeting was quite revealing.  Our way of approaching the selling of our ideas to individual contributors and middle managers was the traditional sales pitch of features and benefits.  What Ray Cairns made clear is that at a certain level of management, the rules change and the offer must move from features and benefits to second order implications.  In our case, what effect would buying our software have on both the revenue side of DuPont and the expense side of DuPont?  In order to answer that kind of question you have to move from the product under study to the system under study.  In particular, you have to look at the interactions of an entity with its environment.  In later years, I would come across third order thinking: how will this product or service change a company’s valuation, either positively or negatively?

Gordon Bell, DEC Senior VP of Engineering at the time (now at Microsoft Research), noticed a heuristic that no product ever made it out the door if it required more than three new insights or inventions.  During the specification and rapid prototyping stage we hit on two core insights – that email was the core engine, and that email messages should carry multiple data types, not just text.  Yet there was still something missing as we moved into the implementation phase.  We couldn’t figure out how to implement our ideas without having to rewrite parts of the VAX/VMS operating system, which neither of us wanted any part of modifying.  The whole excitement of the VAX architecture was that we could build as complete an application as we were looking at without any systems work.

One day a happy accident occurred when all of a sudden the contents of my VT100 screen changed.  I was editing a file and all of a sudden a data entry form appeared on my screen.  It was eerily like Alexander Graham Bell yelling “Watson come here!” through his new invention – the phone.  The joy was realizing the power of the messaging architecture already built into the VMS Operating System.  While a bit esoteric, what it meant is that our system could be built on messaging from top to bottom and be fully recursive.  With a few architectural rules, the system could generate a wide range of custom applications.
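The idea that a system built on messaging "from top to bottom" can be fully recursive is easy to sketch in modern terms.  The toy Python below is purely illustrative (it is not the VMS or ALL-IN-1 implementation, and all of the message names are invented): once every interaction – displaying a form, sending mail, invoking an application – is just a message routed through one dispatcher, a handler can itself emit further messages, so applications compose recursively out of the same primitives.

```python
from collections import deque

class MessageBus:
    """Toy dispatcher: everything in the system is a (kind, payload) message."""

    def __init__(self):
        self.handlers = {}
        self.log = []  # record of every message dispatched, in order

    def register(self, kind, handler):
        self.handlers[kind] = handler

    def send(self, kind, payload):
        # A handler may return follow-up messages, which are dispatched
        # in turn -- this is the recursion that lets applications compose.
        queue = deque([(kind, payload)])
        while queue:
            k, p = queue.popleft()
            self.log.append((k, p))
            follow_ups = self.handlers[k](p) or []
            queue.extend(follow_ups)

bus = MessageBus()
# "Applications" are just handlers that may emit further messages.
bus.register("open_form", lambda name: [("display", f"form:{name}")])
bus.register("send_mail", lambda to: [("display", f"mail sent to {to}")])
bus.register("display", lambda text: None)  # terminal handler: render to screen

bus.send("open_form", "order_entry")
# The one send recursively produced a second, "display" message.
```

With a few architectural rules of this kind (here: every handler communicates only by returning messages), a small core can generate a wide range of custom applications, which is the property the paragraph above describes.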

We expanded our team by three and developed the code, installation procedures, documentation and training for the pilot site at DuPont.  During the process, I learned several invaluable lessons about engineers and product quality.  Any problem an engineer experiences directly gets fixed immediately.  The difficulty is that the most critical problems usually lie in parts of an application that a software engineer rarely uses.  So the engineers never see the problems, and the problems never get fixed.  Not only did I have to see the product opportunity at DuPont and develop it, but I was also the person who went to install the software and train the users.  The things that worked fine on our development computer didn’t work fine on their computer system.

No amount of asking, cajoling, screaming or yelling could get the engineers to fix the problems.  Unfortunately, the problems were in an area of the code that I was unfamiliar with.  Through sheer desperation, I had John make the next trip to Wilmington, DE, to install the latest update.  Imagine how I had to keep from laughing (or crying) when he came back after the trip and asked me why I hadn’t told him about all the problems.  Magically, they were all fixed within a couple of hours.  Direct observation of problems or activities is worth far more than an abstract narrative.

Shuttling back and forth got old pretty quickly, so we created the next product support innovation – a direct connection of our development network with their pilot prototype network (remember, this was 1980).  The connection improved the reporting of problems, the suggestion of improvements, and the quick delivery of new software and documentation.  The beauty was that we were able to charge more for this service as it enhanced both companies’ capabilities.

We were really moving now.  We had a product (in today’s terms it would be a beta of a product), we had a paying customer, and we had a development staff.  Life couldn’t be better.  However, our intent was to build a successful product business.  John and I had gotten this far before.  How did we get farther?  It was time to put a distribution channel in place and find a set of complementors.  Little did we know we’d find them in the same place.  About this time, the Corporate Software Services Organization made a proposal to the corporation that we could grow from a $60 million business to a $1 billion business in less than five years.  One of the core assumptions was that there were already systems integration projects that DEC retained the intellectual property rights to and therefore could turn them into application products.  The ALL-IN-1 product effort became the test case for this idea.

The first thing we needed to do was to train some of our best consultants on the product.  The training was scheduled, but little time was allocated to prepare materials for the class.  On the Sunday night before starting the training, I was looking at only about four hours’ worth of material for a five day class.  I had no idea as I started lecturing Monday morning how I was going to make it through the week.  Through desperate acts, brilliance emerges.  During that first morning’s presentation, I kept getting questions about whether the product had this feature or that feature.  Could the product be used to solve this customer need or that customer need?  As I started to answer “No” to each of these and feel guilty that we didn’t have those features or capabilities in our product, I realized that each of the things they were asking for was easy to produce with the core functionality that we did have in the product.  So now the answer to each question became “No, we don’t have that capability yet, but you now have your homework assignment for the afternoon.”

As they broke for the afternoon laboratory, we had more than enough enhancement requests to go around to each of the “students.”  The students were all excited because they were getting to work on something that they cared about and in most cases they knew exactly which of their customers would be interested in that capability.  Some of the features required the core ALL-IN-1 engine to change.  So John would work with those students to figure out a good operational split between what the application engine would do and what the ALL-IN-1 applet should do.  They worked together to define what the interface or syntax between them should be.  With 20 students and a week’s worth of exercises that all arose from their questions, the product grew by leaps and bounds before our eyes.  This was our first experience with the power of an open architecture and a large group of collaborators.

The second order of brilliance was letting the students know that we would include all of their functionality in the next product distribution tape and we would give attribution to each of them for the parts that they contributed.  With that simple context switch, the afternoon exercises took on new significance.  Knowing that their work would be distributed, they didn’t just do something that demonstrated that the function could be done.  They switched into engineering mode and thought through the general case, implemented the functionality, tested it under a variety of customizations, and documented the features.

The next week we were teaching a group of 20 DuPont systems analysts who would be moving ALL-IN-1 beyond the pilot environment.  We taught the class in the same format.  In the morning, we lectured and explained the different capabilities of the system.  As questions arose about whether the system could do this or that kind of application, those questions became the person’s laboratory assignment for the afternoon.  Very quickly several different departmental systems emerged – Human Resources, Plant Office Automation, Order Processing, Strategic Planning – and we made the same offer to them.  We would include their efforts on the distribution tape internal to DuPont and external to DuPont if the feature were generic and give attribution to the person who created it.  Unknowingly what we did was create 20 disciples who couldn’t wait for the next Monday morning to demonstrate “their” creation to their peers or their internal clients.

Within three years, largely through the efforts of the 50 DuPont analysts who went through these courses, DEC installed $450 million of ALL-IN-1 systems in and around Wilmington, Delaware.  Largely through the efforts of the first 100 DEC software specialists we trained in this manner and the sales representatives they converted, ALL-IN-1 systems became a $1 billion a year business for DEC for 18 years in succession.  When I visited Health Partners in Minneapolis, MN in 1999 and found that they were still using ALL-IN-1 as their primary corporate electronic mail system, I was filled with awe and much trepidation (since the system was quite dated, having been in maintenance mode for the previous 14 years).

The last of the core lessons we learned was the power of tailoring far beyond what we’d anticipated.  I was asked to speak at the DEC European Software Services Meeting in Majorca in 1982 to describe what we were doing in the U.S. with Office Automation.  I gave an overview of the product and the kinds of customer solutions we were generating AND the revenue we were creating.  This got their attention as they were going through a revenue shortfall at the time.   The immediate set of questions I got were around translating the product into each European language.  It was an “oh, of course” type of question for me, but was unheard of in products of the early 1980s where the user interfaces were hardwired into the software code.  I shared “sure, it should be no problem, if you want to localize the interfaces, just change the ALL-IN-1 interface forms using the standard VMS tools.  All of the messages that we present on the screen are forms;  just go ahead and change them.  And we’ve adhered to all the VAX VMS coding conventions so all the National Replacement character sets should work.”

It was like I had ignited a bomb.  Everyone came out of their seats and started throwing a hundred questions at me.  Seizing the moment, David Stone, DEC’s European Software Services VP, called a half hour break so that the attendees could get their questions answered informally or for country management groups to caucus on what this meant for their business.  He then called the whole group back to order and gave them a challenge: based on what they had heard, did they think they could make up any part of the $100 million revenue bookings shortfall that Europe was projected to have for the year?

He first asked each country group to gather together and develop a straw plan for using the ALL-IN-1 product to address the revenue shortfall.  In the process, the groups were to identify any issues, questions or concerns that they had and they could pose those to me in their group presentations.  Each group went off and spent ninety minutes discussing the opportunity.  Then each of the ten country groups made their reports and identified their issues.  As each group reported, I went through my large library of 35mm slides to find the slides that addressed each issue or question.  I got David to stall for a few minutes after the last group presented so I could arrange the slides in some semblance of presentation order.

I then stood up and gave a “custom” presentation addressing all of their issues.  The management team was really excited now.  Until that moment I had not realized the power of a “custom” presentation to go along with our easily tailorable product.  The group correctly sensed that this product was quite real and could be adapted for each country’s unique business needs.  Until now, Europe had to take a one size fits all set of products that were English language and US culture centric.  Long delays would occur to get even minimal localization done for each country.  Now they’d found a product that could be introduced Europe wide in each natural language at the same time as the US introduction.

David Stone then asked the country groups to get together with this new information and come back with a “committed” forecast of how much of the revenue shortfall they could make up with this product and what they needed to do to make this program happen in Europe.  The groups quickly formed and in 15 minutes came back with their commitments.  The commitments totaled $120 million.  David scaled the numbers back to total $100 million to mitigate the exuberance factor.  I was blown away and now fearful for my life.  I wondered what would happen if they all woke up and decided they’d been railroaded into a commitment in the bliss of the moment.  I figured they’d shoot the messenger – me.

The group then started to work on the marketing program to make their commitments happen.  Using the natural creative competitiveness of the countries, Stone broke them into their country units to develop logos for the $100 million in 100 days campaign.  Several of the teams found a local T-shirt design shop and had their logos emblazoned on T-shirts for the attendees.  Then he organized cross-functional and multi-country groups to develop:

  • the 20 page newsletter that would go out to all sales and software personnel in Europe,
  • the training program for sales and software personnel,
  • the localization teams to translate ALL-IN-1 into each country’s natural language,
  • other applications that could be combined with ALL-IN-1 in each geography.

By the end of the four day meeting, the Europeans now had a major new product that was their own, a marketing program that they could roll out, and a new found respect by their sales peers.  I was asked to stay over another week to train the software consultants who would be doing the translations.  My ask of the software consultants was that each country supply a software consultant to come work with our development team for a month at a time to make the core ALL-IN-1 engineers aware of multi-cultural needs.  Within the month they had ALL-IN-1 translated into 10 different languages and cultures.  Within the 100 days they’d exceeded their goals and in fact did $120 million of additional business.

In David Stone, I saw and appreciated a master at management leadership and motivation.  For the next year I spent as much time in Europe as I could to learn about changing the behavior of a large organization.  I asked David how he knew that my presentation would set off so much energy.  He laughed and said that he had no idea that it would.  “What I do is schedule an agenda that has as much informational diversity as possible running the gamut from product information, service information, corporate strategy, engineering strategy, organizational behavior change stuff, and management education.  I never know which of these will cause an energy hit with these 150 managers, but I’m confident that one of them will.  When that energy resonation occurs between the audience and the speaker then I go into action.  I’m not good at generating that energy.  However, I know how to move energy into action; that’s what I’m really good at.”

We succeeded so well in those first several months at promoting and selling our version of Office Automation that the company now had a dilemma.  Taking a portfolio approach, DEC had six different central engineering groups working on office automation projects.  The corporate powers that be met to decide which group would win and be anointed as THE office automation product.  There were pluses and minuses to each product and the capabilities that each would add to the VMS product line and the push into the commercial market.  But in the end, there was no clear winner at the features and benefits level.

Peter Christy was chartered by DEC’s executive management with evaluating all of the different DEC OA options.  Here are some excerpts from his evaluation.  Peter would have made a good economist with his report that was of the variety – on the one hand and on the other hand.  What I would have given for a one-handed Peter Christy at this juncture.

“The VAX/VMS office automation system developed by the Charlotte, NC, District Software Services office has been studied. Under the goals set forth in the “Grey Saturday” meeting, the Charlotte software is not an alternative to EMS/VMS. On the other hand, the Charlotte software deserves careful examination on its own merit. It offers new ideas on the design and marketing of office automation into large enterprises (e.g. Fortune 100 companies). . .

Recommendations:  The Charlotte software seems like an excellent office automation system, both to those of us that went and looked at it, and to the customers that the Charlotte people have been dealing with. Unfortunately, there is no simple reason to bring the software in and base product development on it. The most reasonable approaches today seem to be to begin a parallel development based on the software (which requires additional resources) or to continue to use the software for leveraged field consulting, with some effort expended to assure reasonable communication between field and home developers. Most critically, we need a broadly accepted decision on how to position this software.  The software is too attractive to just forget; it’s too dangerous to treat passively.”

However, only one of the six product alternatives had generated any revenue for the company – ALL-IN-1.  That made the decision quite easy in the end.

The implications of the decision were not so easy for me.  I was asked to manage all the groups that “lost” and meld them into one group producing the next version of ALL-IN-1.  Suddenly, I would be moving from managing 10 people to managing 250 people in five different organizations, in three sites, in two countries.  But that is a story for another time – global product management.

Gordon Bell was instrumental in getting ALL-IN-1 accepted as the primary office automation product development effort.  He sent the following memo in 1982:

SUBJECT: THANKS FOR ALL-IN-1 DEMO. LET’S BE #1 IN THE OFFICE BY SALES

“I thoroughly enjoyed the demo and interaction on ALL-In-1, and anxiously await to use it on our own VAX. It’s an impressive set of tools for the office and the basis for others to build additional tools that work together in a system.

“It is most urgent to get books out on it of the form:

1. A user’s manual that has the whole thing including DECmail for the heavy duty office environment for professional and secretary.

2. A tutorial on office automation where we feature it as being “this is what office automation is.”

“I hope that we might be able to help with the writing somehow.

“It should be noted that you folks have created a second generation office automation system. Because you adhere to the principles of VAX/VMS regarding providing an extendable fully compatible environment, versus just having a set of independent tools, you have the second generation.

“WE SHOULD ALL NOTE THAT A SECOND GENERATION SYSTEM IS PROBABLY NOT BUILDABLE ON ANY OF THE EARLIER SYSTEMS (eg. IBM 370, Prime, HP) . . . or for that matter, RSTS, RSX and Tops due to the difficulty of addressing and lack of common data dictionaries, etc. From my perspective, this is one of the few applications I’ve seen that begins to utilize VAX the way it was supposed to be.

“My only concern now is getting it marketed. This is a great product to go cream the IBM System 38, Wang and other parts of IBM with and to become number one. We must get market share while it is possible and the other folks get their products.

“As Pogo said: ‘We’ve met the enemy and he is us.'”

The exciting part of being selected to build ALL-IN-1 was that we quickly went from nowhere in the OA space to being one of the front runners.  Patty Seybold of the PSGroup put us on the front cover of her monthly report in 1983:

“As we move into the ‘big league,’ analyzing some of the product offerings which might be appropriate as the basis for the implementation of a large-scale organization-wide office system (such as Honeywell’s OAS, IBM’s PROFS, Wang’s Alliance/VS, DG’s CEO, Prime’s OAS, as well as the offerings from Datapoint and HP), we begin to look at a different set of criteria from those we used heretofore in evaluating smaller, dedicated systems. An organization-wide office system must, above all, excel in the areas of data communications and networking. Ideally, it should integrate well with the existing data processing, data base, and management information systems that are already in place within an organization. And it now appears that such large-scale integrated office systems must also interface serenely with the desk-top tools that managers and professionals will be using: personal and professional computers.

“Frankly, it was this third aspect—customizability—that attracted us to DEC’s All-in-1 system. We believe strongly that every office environment is different, since every business has different problems to solve. We also feel that each user of an office system should be able to mold the capabilities of the system to his own personality. DEC’s All-in-1 offering purports to do just that. We wanted to find out if it really did. . .

“What does the future hold?  We think that DEC will ‘pull it off’ – become a major office systems supplier, on a par with Wang and IBM, at least in part because DEC customers will insist that the vendor take an active, leading role in helping them determine their own strategic directions.”

During the process of developing, marketing and selling ALL-IN-1, I met so many wonderful professionals like John, David, Gordon, Peter and Patty.  The eight year ride of creating an innovative Office Automation System remains one of my top three innovation experiences.
