Visualizing the Taste of Wine – Shape Tasting

A couple of years ago, my wife and I attended a wine blending seminar put on by one of our favorite winemakers, Anna Matzinger of Archery Summit Estates. As Anna was describing the art of wine blending, she commented:

“As I’m tasting I’m always thinking in terms of shapes. Tasting is also a visual experience for me. Thinking of the palate in three dimensions, how is the blend creating a shape on the palate in my mind? I will usually sketch the shape of the blend so that I can remember the smell and taste that I want to achieve. The sketches also help me compare blends across different years.”

Anna punching down the Pinot Noir

It took me a few minutes to realize what she’d just said – she tastes the wine by thinking in shapes.  I immediately asked Anna if I could come interview her about her visual pattern language for wine tasting.  She kindly agreed and I showed up at Archery Summit for an afternoon’s education I will not soon forget.

As we started the interview, Anna shared her frustration with the language of wine that is fostered by other winemakers and wine writers.  She continued:

“The words that everyone uses help to sell the wine, but they don’t help me to remember what a wine tastes like all through the process of fermentation and then through the process of aging.  When I would come to the next vintage and look at my traditional notes from previous years, they didn’t help me to remember what last year’s vintage really tasted like.  Then I started noticing that my way of remembering is visualizing the shapes of the wine on my palate.  I started sketching the shapes in my notes for each wine from each vintage.  Now it is easy for me to go back and compare wines at different stages of fermentation and across vintages.”

Like all good conversations with a winemaker, sooner or later you end up in the barrel room to demonstrate what you've been talking about with a taste of wine. I had a great time tasting through different vineyards and watching Anna sketch the shape of the wine against her palate. The joy for me is that several terms wine writers use all the time, like "fruit forward" or "long, drawn-out finish," finally became clear. By first sketching the shape of one's mouth opening and then using symbols to represent types of flavor profiles and where they occur on the palate, I could see the taste of the wine.

Archery Summit Barrel Cave

As a mostly visual person, I was so excited to finally have a language to understand and describe the wines that I’m tasting.

I asked Anna how other winemakers respond to this visual language.  She replied, "Oh, I never share my sketches.  Other wine professionals would think I'm crazy."

I looked forward to writing up my notes, but never took the time.

A few years later, I spent some quality time with Patrick Reuter of Dominio IV wines.  In the closely connected world of Oregon wine, Patrick is the husband of Leigh Bartholomew.  Leigh is the vineyard manager for Archery Summit Estates.  Together, Patrick and Leigh created Dominio IV and bought some property (Three Sleeps Vineyard) along the Columbia River to grow Syrah, Tempranillo, and Viognier.  Leigh’s parents live on the vineyard property and tend the biodynamic grape growing for Dominio IV wines.

Patrick started talking about shape tasting of his wines. Immediately, I asked if he and Anna Matzinger had ever compared notes. He laughed and shared that Anna had gotten the idea from him. Patrick agreed to spend an afternoon with me demonstrating how he goes about shape tasting.

At the time of the shape tasting observation, Dominio IV was still located in the Carlton Winemakers Studio.  We took over a table in the tasting room and Patrick brought out a Tempranillo for which he was still working out the final blend.  We both tasted the wine and made some initial comments on how we thought the wine was maturing.  Patrick then brought out his sketch book, and I was overjoyed to find that he had a rich palette of shapes and colors for creating his image of his wine tasting palate.

Patrick's Shape Tasting Sketch Book

Along with his sketch book, Patrick pulled out some examples of the informal shape tasting notes he takes as the fermentation and trial blending go merrily along.

Shape tasting blending and vintage notes

As we dive into the Tempranillo blend for shape sketching, Patrick illustrates in the colored top part of the sketch the flavors he is tasting and their sequencing in his mouth.  In the middle of the sketch he makes a vertical slice view of the palate.

Midway through the Tempranillo Shape Sketching

After drawing the flavor profiles and the shape of the palate, the final shape sketch emerges:

Final Tempranillo Shape Sketch

As Patrick evolves the shape tasting, he is putting together a seminar for sommeliers and other winemakers. He likens shape sketching and tasting to synesthesia – a neurological condition in which stimulation of one sensory pathway leads to automatic, involuntary responses in another pathway.  An example of the phenomenon is when someone perceives letters or numbers as colored:

Synesthesia

At the start of the presentation, Patrick provides examples of his symbol and icon set:

Tasting Shapes and Symbols

With these shapes, Patrick then provides an example of a shape tasting of the Dominio IV 2008 Pinot Noir “Pondering Ptolemy.”

2008 Pinot Noir "Pondering Ptolemy"

Periodically, Patrick teaches private sessions at the Dominio IV winery on how to develop shape taste profiles.  He starts by sharing one of his shape tastes of a current Dominio IV wine:

Shape Tasting Example

After going through some examples and explaining some of the symbols, Patrick pours one of the current wines and has the “students” practice their shape tasting drawings:

Practicing Shape Tasting Sketches at Dominio IV Tasting Room

In addition to being a superb winemaker, Patrick is creative in the naming and describing of each of the wines he produces.  The shape, the label, and the back of the wine bottle text provide a rich multiple media view of this 2008 Pinot Noir.  Of course, the wine smells and tastes fantastic as well (particularly in the right Oregon Riedel Glass).

If you are looking for an immersive wine education and fine wine tasting unlike any other, visit Patrick Reuter in his McMinnville, OR tasting room for a shape tasting.  For the visual thinkers among you, wine will never taste the same again.

Posted in Human Centered Design, Travel, User Experience, Visual pattern Language, Wine | 6 Comments

Everything Old is New Again

I live for good questions which cause me to stand back and have to think and reflect.

While I was attending Professor David Socha’s UW Bothell CSS 572 course on Evidence Based Design, David asked me during a break to compare and contrast our experience with the success of developing DEC’s ALL-IN-1 against the Kleiner Perkins 10 Criteria for iFund Success:

10 Criteria for iFund Success from Chi-Hua Chien

Chi-Hua Chien shared these criteria as part of his presentation to a Stanford iOS class. My first response to the question was to laugh at the absurdity.  ALL-IN-1’s design center was dumb video terminals aimed at enterprises in 1981.  Mobile smartphone applications are at the complete opposite end of the spectrum of rich multiple media portable communications.

However, with David’s wonderfully curious encouragement, we went through the ten criteria.  While we had to change some of the terms, like “iPhone” to “VAX/VMS platform” and “mobile” to “remote access,” the criteria were surprisingly timeless in their applicability.

As I was dragging myself through the cobwebs of my memory, I shared with David, “Did I ever tell you that with ALL-IN-1 we actually ‘invented’ what we know today as JavaScript?”

David looked at me like I was crazy and said “What are you talking about?”

While we never emphasized it, for a number of design reasons (primarily things like calendaring and forms data entry), each email message was an actual program, not just a text stream. We took each email entry and wrapped it in a program, so we were really emailing application programs around.  We had to invent our own scripting language to wrap around the email text, and that scripting language looks surprisingly like what we know as JavaScript today.
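To make the idea concrete, here is a minimal sketch of what "an email wrapped in a program" might look like, expressed in modern TypeScript. This is my reconstruction for illustration only – it is not actual ALL-IN-1 code, and every name in it is hypothetical:

```typescript
// Hypothetical reconstruction: an email as a small program rather than a
// flat text stream. Not actual ALL-IN-1 code; all names are illustrative.

interface MessageContext {
  recipient: string;
  now: Date;
}

interface ActiveMessage {
  from: string;
  to: string[];
  subject: string;
  body: string;
  // The "script" wrapped around the text: behavior that runs on receipt,
  // e.g. proposing a calendar entry or rendering a data-entry form.
  onReceive(ctx: MessageContext): string;
}

// Example: a meeting request that behaves like a tiny application.
const meetingRequest: ActiveMessage = {
  from: "sender@example.com",
  to: ["recipient@example.com"],
  subject: "Design review",
  body: "Can we meet to review the design notes?",
  onReceive(ctx) {
    // A real system would offer calendar slots or a reply form here.
    return `Hello ${ctx.recipient}: meeting proposed for ${ctx.now.toDateString()}`;
  },
};

console.log(meetingRequest.onReceive({ recipient: "David", now: new Date() }));
```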

As I said those words, the impact of what I was saying in the context of the digital humanities discussions hit me between the eyes.  I’ve struggled to articulate the design essence of a new medium of communication that is emerging from my understanding of the thoughts that Kate Hayles has imparted.

In the iOS video, Chien makes the observation that the content world of the web is morphing into the “Context over Content” app world of the iPhone/iPad.  In other words, we are transitioning from a content world (books, web pages) to an app world, and from a static content world to a dynamic, highly personalized, and contextualized world.

Kate Hayles pointed out that at many levels this mediated world of text was an old development as William Blake wanted tight control of the publishing of his poetry so that he could publish the “text” of the poem embedded within his art (see post on Digital Humanities).

William Blake Poem within Art

In a more recent form of this print rendition, Nick Bantock embedded his story of Griffin and Sabine as an illustrated “correspondence”:

Griffin and Sabine Mediated Conversation

Thinking back to what we did with ALL-IN-1, I wondered whether we could bring back the notion of an email message as a dynamic application, where I could bring art to the medium of business communications. Business communication could then become a fully mediating object of sociability.

I then remembered the technique Harvey Brightman shared for providing quick feedback to his students on their assignments – simply recording his comments rather than spending a lot of time typing them.  He was four times more productive speaking into the computer than typing his responses. More importantly, the students loved the verbal feedback far more than flat text. For myself, I find that reading documents from people I’ve met face to face takes on a different meaning.  As I read, I am able to hear their voice in my head, not a generic voice.  What if we audio recorded the key components of our business communications so that the recipients could hear them in our actual, vibrant voices?

Synchronicity struck once again.  I’d arranged to meet Sylvia Taylor the morning after David’s class to gain a better understanding of her intentional work in energizing teams and developing leaders through her work with the VisualsSpeak image kits. She showed me an example of an online/offline use of the tool:

VisualsSpeak "Making your Training Stick"

As she described how she used the photo images and the online version to coach her clients, the vision of the future of business communication crystallized. What if for each business email (or proposal or plan), we really sent an interactive app (one that could come alive on an iPad) composed of the text, the images that represented a visual rendering of our ideas, and an audio track that captured our key points and actions in our own voice?  The email “app” would then become a container for the social commentary of the recipients (in text, visuals, and audio). The key design aspect is that instead of the content objects being put together sequentially like this blog, it would look more like the Image Center example above – with the text, visuals, and audio integrated into a dynamic form. With this simple transform we would realize, on a minute by minute basis, what Stan Davis talked about in The Art of Business.
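To pin down what such a container might hold, here is a speculative data model in TypeScript. It is a sketch of my own design, not any existing product's API, and all the type and field names are hypothetical:

```typescript
// Speculative data model for an email "app": text, visuals, and audio
// integrated into one object that also collects the recipients' commentary.
// All names are hypothetical illustrations, not an existing API.

type MediaComment =
  | { kind: "text"; author: string; body: string }
  | { kind: "image"; author: string; url: string }
  | { kind: "audio"; author: string; url: string };

interface MediatedMessage {
  title: string;
  text: string;             // the written argument or proposal
  images: string[];         // visual renderings of the ideas
  audioTrack?: string;      // key points and actions in the sender's own voice
  comments: MediaComment[]; // the social commentary the container accumulates
}

const proposal: MediatedMessage = {
  title: "Q3 plan",
  text: "Here is the plan and the reasoning behind it...",
  images: ["https://example.com/q3-sketch.png"],
  audioTrack: "https://example.com/q3-summary.m4a",
  comments: [],
};

// A recipient responds in whichever medium fits; the message collects it.
proposal.comments.push({
  kind: "audio",
  author: "Sylvia",
  url: "https://example.com/reply.m4a",
});
```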

While I was fascinated by the photographic images, I remain impressed with the notion of Team Art from the McCarthy Bootcamp where I first met Sylvia.  I asked Sylvia if the VisualsSpeak founders thought about using artwork. Sylvia chuckled and pointed me to co-founder Christine Martell’s blog about “Making a Difference with Art Every Day.”  She also showed me a gallery of Christine’s art, some of which was created on the iPad.

I can’t wait to dust off our ALL-IN-1 software designs and bring them into the mediated world of iOS and Android.

What if our business communications gave recipients the ability to participate in the team art process, adding to the artwork that came with the text message?

What if we could make a difference with art in our business communications every minute with our intentional emails and business communications?

Posted in ALL-IN-1, Content with Context, Human Centered Design, Photos, User Experience | 5 Comments

Sifteo Siftables – So Near, So Far

A couple of months ago, my user experience researcher daughter, Liz Shelly, sent me an email asking if I’d seen the Sifteo Siftables.  She was walking to lunch in the Financial District of San Francisco and came across some Sifteo employees demoing the product on the sidewalk.

When I went to the Sifteo website, I realized I’d seen the TED video of David Merrill demoing his siftables last year.  I had made a note to find the toys when they came out. On Wednesday I came across an ad for them and immediately ordered them from Amazon. They arrived Friday and I’ve been tinkering with them ever since.

The out of box experience is unremarkable.  However, just picking up one of the Sifteo cubes is so tactile and cool. It is amazing to imagine how much of a complete computer is in this little square – CPU, memory, wireless networking, video display, and sensors.  The minute I saw the website, I got excited thinking about how I could use this as the basis for a class project in my upcoming UW MBA class on Designing for Demand.

The cube has everything needed to be at the forefront of user experiences in three dimensional tactile environments, rather than sitting in front of a glass screen.  Since it is simpler than a smartphone, it might be easier for business students new to design to work with.

Alas, I am too early in the product cycle for what I want to do with the Sifteo cubes.  The current design center for the cubes is for children as advertised.  I can imagine that they would be a lot of fun and quite engaging for that age group.  Just not so much for what I hoped to do with them.

The “product” (it is just so hard to think of these as a product when they are such cute and cuddly toys) came out of the box, and everything went as advertised in hooking it up to my computer.  I created an account on the web and downloaded the Siftrunner software.

I loved the pictorial way that the software figures out which cube is which – you just press the picture on the cube that matches the image on the screen.

I downloaded the Get Started game and was in for my first surprise – in order to play with the blocks you need to be within wireless range of the computer.  The sound comes out of the computer, not out of the Sifteo cubes.  Darn.  I was really hoping I could carry the cubes around in my pocket and be the first “kid” on my block to pull the cubes out and play a game in front of my colleagues.  Not without a laptop.  Bummer.  I can’t wait for the iPhone/iPad interface (I can only hope).

Get Started Game

The Get Started game does a great job of showing you all of the capabilities of the cube – from the push-button control of the action, to docking the cubes together, to showing how position in 3-space affects game play. So much capability. The following photo is the setup for using the tilt function of the cubes to get the hat back on the head:

Get Started

I then downloaded another game – Planet of Tune – that is about making generative music.

Planet of Tune

With each game, it is weird to start the game play by clicking PLAY in the web browser. Several of the games then have some weird gyration you have to go through, arranging the blocks to get the game actually started.  It probably helps to be a child.  For me, I just keep rearranging the blocks through trial and error until something finally happens. Unfortunately, I can’t remember what it is that gets things started.

Planet Tune Startup

Planet of Tune is kind of cool as it has a record function so that you can replay the generative music you create.  However, it is still weird to have the music coming from my computer speakers (displaced away from where I’m playing).  The cognitive dissonance of having the blocks in one place and the sound coming from somewhere else is hard to get used to.  On a laptop, the tinny sounding speakers make it worse.  I suppose I could use headphones but that makes me even more connected to the computer.

The business model for Sifteo is clear – the games store.  At least you have some point credits when you get started so you don’t have to spend real money to get a game to get a sense of what the cubes can do.

The most disappointing part of the Sifteo system is that you can customize very little.  As a non-programmer all you can do is display different text or numbers on some of the sorting puzzles.

Sifteo Creativity Kit

Old Man Shaking his head "Nope!"

In order to do anything else, you have to download the developer’s kit and start programming away in C# and .NET.  Unfortunately, those are not capabilities I possess any more.  I was hoping there was some form of intermediate language or tool. I really would like to see their API but have not figured out how to get it out of the download or find it on the website.

But the cubes are just so cool. I just want to carry them in my pocket and click them together like the stress relieving Chinese Exercise balls.

Lori Emerson on her blog provides a great review of using the Sifteo cubes in a humanities classroom.  From my recent research into digital humanities, I wanted to implement some version of the Zero Count Stitching generative poetry that John Cayley developed.

Not being able to customize the Sifteo Cubes, I decided to track back to David Merrill and his research work at MIT. There are more than enough publications there to keep me busy for the rest of the weekend I’d allocated to playing with the cubes.  My favorite is “Make a Riddle and Telestory: Designing Children’s Applications for the Siftable Platform.”  As I poke around some more, I see that David was involved with Alex Pentland and his sociometer research.  I am a big fan of Pentland’s book Honest Signals: How They Shape Our World.  I’ve been waiting to try out a sociometer for several years.

Oh well, back to the Sifteo Cubes for some more serious play.

Posted in Human Centered Design, Software Development, Transactive Content, User Experience | Leave a comment

Teams versus the Professor/Student Learning Relationship

Once again, I was reminded of the power of a committed group of good thinkers to generate insights into problems that have been bugging me for a while. Six of us got together yesterday to gain a preliminary understanding of whether we might be a collaborative group that could figure out how to scale powerful forms of adult experiential learning.

Prior to this meeting, I’ve been spending a lot of time researching (see Cathy Davidson), experimenting, attending seminars, and sitting in on other professors’ classes to identify a better way to help students learn and rapidly translate their learning into meaningful action.  For a long time, something has bugged me about standing in the front of a lecture hall (with fixed chairs and tables) as the “professor” who will somehow enlighten the “students.”

I am reminded of Mark Twain’s comment “College is a place where a professor’s lecture notes go straight to the students’ lecture notes, without passing through the brains of either.”

Long ago, someone shared with me that the person who learns the most in a classroom is the teacher. Selfishly, this accelerated learning as a “professor” is one of the many reasons I get so much enjoyment out of teaching.  Yet in Now You See It, Cathy Davidson provides evidence that having the students teach accelerates learning – for both the student and the instructor.

As I was listening to my colleagues talk about the need for trainers, train-the-trainers, and certifying trainers in order to develop great teams, the light bulb went on.  Here we were talking about how to produce great teams, but we still set up the distinction between Master and Student.  Our whole framework was the thousands-of-years-old master-apprentice model of learning.

I realized that I had experienced another model of learning that involved teams of teams of teams, with the focus on creating powerful groups of 3-7 member/leaders.  Twenty years ago, family friends invited my wife and me to live a Cursillo (Spanish for “short course”) weekend in New Hampshire. [Note:  The Protestant variant of Cursillo is Walk to Emmaus.]  Through the experience of the weekend, I realized that the transformative power of the accelerated learning came from this wonderful focus on teams and teams of teams.  I spent a lot of the next ten years finding out as much about the Cursillo Method as I could and had the gift of participating as a team member in many different environments in New Hampshire, Massachusetts, and Washington State.

The brilliance of the model is that, starting with the invitation to participate in a Cursillo small group or weekend, each man or woman is treated with extraordinary respect as a leader.  It is only for the three days of living the first weekend that you are casually identified as a candidate.  Yet, as you look around the room of 30 to 60 men, you have no idea who is a candidate and who is part of the “presenting” team.  We are all working together as a collection of small group teams to understand the talks (Rollos) through discussion, team art, and the sharing of each.

The primary goal of the weekend is to create a vibrant, shared vision of what the Ideal leader is.  The structural DNA of the weekend is Piety, Study and Action.  Piety is our way of being in the world in the context of our shared vision of the Ideal.  Study is what we do to learn more about the ideal and our environment.  Action is how we put our piety and study into transforming our environment into something great.

The recommendation is for the Cursillista (a candidate who has lived a three day weekend) to continue the sharing and learning in a Post Cursillo world with your small group of 3-7 men or women.  The structure of these weekly meetings over coffee or a meal is elegantly simple, yet powerful in its impact:

  • During this last week, how was I most like the Ideal? (Piety)
  • What did I learn? (Study)
  • How did I put my learning into Action?

After each team member has shared about the previous week, the questions are repeated with the time frame being the coming week for how each member will do Piety, Study and Action.

Everyone is a leader and everyone is a part of at least one team.

What if we could transform our classrooms into teams of teams of leaders?  This restructuring would certainly meet Cathy Davidson’s and Kate Hayles’ primary quality of a learning environment – collaboration.  What if we could live in a world of collaborative leaders?

What are your experiences of how to reliably produce great teams producing great results?

Posted in Content with Context, Human Centered Design, Learning, organizing, User Experience, Working in teams | 1 Comment

A Little Strategic Networking to Finish the Week

Not often enough, my schedule conspires to present a strategic networking day.  An excellent article in the Harvard Business Review, “How Leaders Create and Use Networks,” makes the distinction between Operational Networks, Personal Networks, and Strategic Networks. Most of our time is spent working within Operational or Personal Networks. The authors stress the importance of spending significant time creating and nurturing your strategic network.

Harvard Business Review Strategic Networking

After reading my blog post on “Cameron Crazie for a Night”, Kevin O’Keefe of Lexblog made a virtual introduction to Buzz Bruggeman. Buzz is another Duke Basketball fanatic and we exchanged several emails.  We decided to get together for lunch and share both our love of Duke basketball and our respective love of creating software products.  Buzz pointed me to a recent CBS sports story on why Duke students weren’t attending games like they used to which explained why I was able to get into the Duke – Wake Forest game so easily.

Buzz Bruggeman

Buzz is one of those unstoppable forces of nature when it comes to being enthusiastic about whatever topic of interest is on the table. As a former real estate transactional and litigation lawyer, Buzz got wound up sharing his worldview, and I enjoyed taking copious notes during the next two and a half hours of his non-stop storytelling.

In the synchronicity department, it turns out I lived in a freshman dormitory (House H) where Buzz was the housemaster a couple of years later.  When I received my BS degree from Duke in 1971, Buzz was graduating from the Duke Law School.  We both had many great memories of those years (even though it was the sixties – as Robin Williams said, “if you remember the 60’s, you weren’t there”).

In the strategic networking sense, I was particularly interested in how a successful lawyer got involved in a software startup company – ActiveWords.  Buzz shared the tortuous journey from founding/funding the company to ending up in Seattle, WA.

As a collector of great questions, I really liked the question that led to the development of ActiveWords – “why don’t computers understand us (like when we type something)?”

The question reminds me of Larry Keeley of the Doblin Group pointing out that the average urinal is smarter than the average computer.  Larry wisecracks, “At least the modern urinal recognizes when somebody is in front of it and knows when to flush.”  Not to be outdone by this analogy, the brilliant students at MIT put the “‘Whee’ back into Pee” with their Urine Control Game.

The video on the ActiveWords website provides a good overview of what the product does. It is an interesting value proposition – gaining productivity 10 seconds at a time, hundreds of times a day.  While I am always interested in capturing the stories of entrepreneurs, I wasn’t that interested in the product or the market opportunity until Buzz shared the potential for the product to be an advertising play. If that intrigues you, drop Buzz a line and have him share the potential of the imminent next version of their product.

As we wrapped up, a couple of Buzz’s colleagues stopped by to go with him to the Seattle boat show.  I was introduced to Andy Ruff of locationlabs, who was excited about their product for safely doing digital parenting with your smartphone. Then David Geller of eyejot joined us.  He quickly demonstrated his product by sending his video eyejot vcard to my email address. While we didn’t have much time, I was able to get the gist of the stories behind the products Andy and David are developing.

A master strategic networker at work is Buzz Bruggeman.

Kevin O'Keefe

Since I was down in Pioneer Square for lunch, I arranged to meet Kevin O’Keefe to catch up on Lexblog and tour his new office space. The next two hours were spent marvelling at how quickly Kevin had implemented what he’d talked about two months previously at our first meeting. Now that I am blogging on a regular basis, I was even more interested in their secret sauce of helping legal professionals learn how to blog to drive business.  Lexblog now hosts 8000 lawyer bloggers in their content network.

Lexblog is a great example of Esther Dyson’s business principle: give away the idea, then make money servicing the idea.  Kevin is generating revenue by getting lawyers to pay for what is essentially a free online service – blogging. The secret sauce is educating the lawyers on how to do business development through the content and the network of relationships they create.

Kevin practices what he preaches with his own “Real Lawyers Have Blogs.” He gave me a quick overview of what they’d accomplished in the last couple of months, which included LXBN, Lexmonitor, and Lexconference.  Kevin was really excited both about showing off Lexconference at the upcoming LegalTech New York and about interviewing a wide range of lawyers and vendors at the conference for his new service. Kevin paid me a nice compliment when he shared that I was one of the “rocket scientists” he had in mind when he penned his recent blog on retrieval tools for lawyers.

I really liked Kevin’s measures of success that he emphasizes for his clients and practices every minute:

  1. Grow your network
  2. Build relationships with your network participants
  3. Become known as a subject matter expert
  4. Create high quality clients and work

As we talked, what really caught my attention was all the ways that Kevin is using Twitter and a wide range of Twitter tools to do business development.  He kindly showed me all the ways he was reaching out to the AmLaw 100 firms in New York to set up meetings for the coming week. Over the last couple of months, I thought I had been getting a handle on the power of Twitter for knowledge management and business development.  What Kevin showed me made me realize that I have a lot of learning to do – and quickly.

The interaction and terms that Kevin used reminded me of Evans and Wurster’s book Blown to Bits: How the New Economics of Information Transforms Strategy. I really liked their dimensions of Reach, Richness, and Reciprocity.  To these terms I added the notions of Agency and Navigation from The Cluetrain Manifesto.

Reach is the penetration of channels and markets to the target consumer of your information.  Richness is the total information flow in all forms of digital media – text, audio, video, and mixed media.  Reciprocity is whether or not there is an exchange of value between the information generator and the information consumer.  Agency is the reputation of the information provider.  Navigation is how easy it is to move through the information flow. Lexblog is showing the way for how to innovatively manage these attributes to create valuable business relationships.

From Twitterific, to geographic search within Twitter (who is tweeting right now within one mile of the Seattle Mariners’ Safeco Field), to Muckrack (identifying journalists using Twitter), to tools for organizing thousands of followers and followed accounts into manageable lists, a wealth of tools exists to generate relationships.

As we parted, I was a believer in the core of Lexblog’s philosophy of “good, timely content creates great relationships” more than ever.

There is just so much to learn to stay current on what is important to me.  Without a strategic network it would be impossible.

Posted in Intellectual Capital, organizing, Relationship Capital, social networking | 1 Comment

First, Second and Third Raters

Every startup expertise blog or book starts by telling you how important talent is in hiring and shaping the team that is going to drive the startup.  The authors assert that you should always hire “A players.”  How do you know, when you are interviewing or listening to recommendations, whether you have an “A player” or a “D player”?

With tongue only slightly planted in cheek, the following is a guide to what I call first raters, second raters, and third raters.  Enjoy the distinctions.  Hopefully you will add more of these distinctions through your comments.

  • A first rater always develops talent
    • A second rater micro manages talent
      • A third rater abuses talent by making them work on menial tasks for long hours (or does not recognize whether someone has talent or even understand that they should be looking for talent).
  • A first rater encourages active and confronting dialogue
    • A second rater tries to keep the peace or does nothing
      • A third rater is an abusive confronter and acts as dictator brooking no dissent.
  • A first rater actively seeks current reality – what is really happening now
    • A second rater hopes that things will turn out all right
      • A third rater continually changes the goals to reflect what they’ve just accomplished.
  • A first rater seeks first to understand, before trying to be understood
    • A second rater lectures and doesn’t listen, wanting to be understood and not caring about other points of view
      • A third rater doesn’t listen, rather they rant and dictate.
  • A first rater accepts responsibility when things go wrong
    • A second rater avoids responsibility and accountability
      • A third rater blames others.
  • A first rater is inclusive and uses “we” when things are going right and gives specific attribution to those who made the good thing happen
    • A second rater uses the royal “we” when things go right but makes sure everyone knows it was really them that made it happen
      • A third rater uses “I did it” when things go right, “You” when things go wrong and uses the word I almost exclusively for everything else.
  • A first rater has a vision, a passion, and a plan for getting to BHAGs (Big Hairy Audacious Goals)
    • A second rater has a vision and a passion for how they will get promoted
      • A third rater figures out how to take credit for someone else’s leadership and results.
  • A first rater understands that leadership is always taken, never given
    • A second rater waits to lead until somebody gives them a title and positional authority
      • A third rater whines and rants and backstabs by letting everyone know that nobody understands that they are the real leader and the key to success.
  • A first rater uses the Outcome Frame (What are we trying to create?  How will we know we created it? …)
    • A second rater uses the Blame Frame (What is the problem? Who caused it? …)
      • A third rater accuses others of being unethical, liars and cheaters.
  • A first rater understands that it is results that matter, not how hard somebody works
    • A second rater comes in early and stays late and makes sure everyone knows how hard they are working
      • A third rater requires others to always be present and work late, while he/she is out playing customer golf, drinking late into the night, and calling it work.
  • A first rater respects others’ time, and plans carefully
    • A second rater creates lots of meetings with no agendas and lots of floundering
      • A third rater triple books themselves and leaves others to wait until she shows up and graces everyone with her presence.
  • A first rater eliminates and dissolves problems so that no one even knows there was a problem looming
  • A first rater adds creative energy to every environment they participate in
    • A second rater uses others energies to get ahead
      • A third rater sucks all energy from the environment.
  • A first rater understands the Theory of Constraints (from Eli Goldratt’s The Goal) and knows that in any system only a few work steps need to be managed
    • A second rater will try to optimize every single step in every process and therefore optimizes nothing
      • A third rater doesn’t even see that there is a system of work.
  • A first rater hires only Talent that has a passion for continuous personal development – life long learners who are also good at developing other people’s Talent
    • A second rater hires only “A” talent that are self-proclaimed experts in a narrow domain who are not interested in learning and assumes the best athletes will make the best team
      • A third rater hires only C, D or E talent.
  • A first rater under promises and over delivers
    • A second rater over promises and under delivers
      • A third rater promises whatever they think the boss wants to hear and never delivers.
  • A first rater treats everyone with extraordinary respect
    • A second rater shows respect only to those above them in the hierarchy or those they think can get them ahead
      • A third rater disdains everyone.
  • A first rater understands the value of strategic networking and gives to the network long before they need to extract value from the network
    • A second rater only works their operational network to get today’s task done
      • A third rater tries to use other people’s networks to get ahead.
  • A first rater understands the dynamics of value exchange relationships
    • A second rater tries to extract more value from the other party than is given in return
      • A third rater uses positional or monopoly power to extract unfair value which cannot sustain the other party.
  • A first rater provides feedback on things which need improvement in private and with frameworks which allow the other person to generalize and learn and develop
    • A second rater points out the problem in private but offers no guidance on how to improve
      • A third rater humiliates the “problem” person in very public settings.
  • A first rater generates plans and organizational structures which are sustainable without the leader present
    • A second rater generates plans and organizational structures which require the manager to always be present in order to be workable
      • A third rater generates plans and organizational structures which cannot work, leading to the firing of subordinates for not getting work done.
  • A first rater shares all information and knowledge they possess to help develop others
    • A second rater expects others to keep information and knowledge so that they can go to the others on an interrupt basis when they need something
      • A third rater hoards all information and knowledge and requires everyone to grovel to get the information.
  • A first rater creates work environments that lead to sustainability for the planet
    • A second rater uses natural resources without any thought to their impact on sustainability and the planet
      • A third rater actively and intentionally pollutes.
  • A first rater understands that no human is exactly like him/her and that one needs to be flexible in dealing with talent (see David Keirsey Temperament Indicator)
    • A second rater expects everyone else to adjust to them
      • A third rater believes everyone else is just like him/her.
  • A first rater practices deep listening skills always
    • A second rater listens with “their motor running” just waiting for their turn to say something and does not pay attention to the other person.
      • A third rater reads email messages on their blackberry when someone else is talking.

My brother, who went to work for our “Australian brother” seeding 10,000 hectares of wheat in Kunnonoppin, West Australia, having never before driven a monster tractor or an 18-wheel truck, contributed the following:

  • A first rater jumps on a seeding tractor in West Australia with no instruction and goes “Good on ya mate” and proceeds to seed wheat for six weeks of 12 hour days
    • A second rater tries to get four days of instruction on how to run this big tractor and all these trucks that drive on the wrong side of the road and wants to wait for their commercial driver’s license
      • A third rater jumps in the tractor and promptly takes out several acres of fences and busts the augers.

Barney Barnett generalized the notion of first, second or third rater to:

  • A first rater is 10X.  They break the old command and control mold.  They are transformational.  They create brand new paradigms and enable others to do the same.
    • A second rater is able to understand and add a “factor”.  They have a step function increase in the ability to get leverage, develop people and ideas and move into a new paradigm (as compared to a third rater).
      • A third rater is a pre-Edison manager or leader.  They come from the old command and control model.  They need clear definition of what you are stating as the old paradigm to move away from.

David Socha asked himself “What is the essence of the first rater, second rater, or third rater?”

  • A first rater is generative
    • A second rater is self-centered
      • A third rater is a destroyer.

Additional entries from my strategic network of wonderfully creative colleagues:

  • A first rater does not draw attention to their first rater status
    • A second rater promotes their own perceived first rater status
      • A third rater sabotages others’ first rater status.
  • A first rater works proactively to create an environment where innovation and technical accomplishment are anticipated, appreciated and celebrated
    • A second rater markets pedestrian accomplishments as important achievements
      • A third rater identifies barriers and risks and claims that stopping work to avoid risk is an accomplishment.
  • A first rater hires exceptional people with exceptional capabilities and manages the differences that exceptional people exhibit
    • A second rater hires well balanced employees accepting a uniform average as a strong team
      • A third rater hires compliant employees and encourages them to conform to the pretense of excellence.
  • A first rater identifies and clears barriers before the team runs into them
    • A second rater clears barriers after the team runs into them and seeks recognition for heroic problem solving
      • A third rater identifies risks and uses them as excuses to move more slowly, act more cautiously, or stop altogether.
  • A first rater changes the rules to create an outcome that meets or exceeds organizational expectations
    • A second rater does heroics to win playing by the rules.
      • A third rater doesn’t win.
  • A first rater leads from the front like a seal team leader
    • A second rater manages from the back like an army general (Rear Echelon Mother F****er – REMF)
      • A third rater doesn’t lead at all.
  • A first rater shows up for important events and serves the team in time of crisis; they are part of the team
    • A second rater publicly recognizes important events and accomplishments
      • A third rater doesn’t know when significant accomplishments are claimed.
  • A first rater is focused on outcomes, not the tactics to accomplish them
    • A second rater focuses on the successful accomplishment of tactics
      • A third rater continuously replans to make the desired outcome match the accomplished tactics.
  • A first rater isn’t rewarded because there weren’t any heroics to be performed; their contribution isn’t recognized
    • A second rater receives bonuses and promotions because of their heroics
      • A third rater costs everyone else their bonuses.
  • A first rater may continue to try and change the rules, lead, and be a team player but may become frustrated and/or leave due to second-rater influence
    • A second rater will continue to hire other second-raters.  And so second-raters become the yeast in the organization that causes it to atrophy over time, killing innovation and accomplishment
      • A third rater is the only one left in a decaying organization.
  • A first rater makes the right thing happen at the right time
    • A second rater notices that something happened that will change their routine (I’m flexible so long as you don’t change anything)
      • A third rater wonders what just happened.
  • A first rater believes that the upside to listening is always greater than that of speaking and therefore typically listens to others and digests their thoughts before speaking themselves
    • A second rater gives the appearance of listening to others but is really just thinking about what they want to say
      • A third rater always wants to speak first so as to show everyone how smart they are.
  • A first rater implements processes and procedures where they believe it will facilitate achieving a business objective
    • A second rater implements processes and procedures because they think that implementing processes and procedures is the business objective
      • A third rater avoids implementing processes and procedures because they will possibly detract from the heroic effort which illustrates their individual value.
  • A first rater works with the personalities of their team (but challenges personal growth as well as professional growth)
    • A second rater does not address the difference in personalities that exist in any team
      • A third rater tries to force changes to the personalities on the team.
  • A first rater shows compassion for the person in all of the challenges that life brings each of us
    • A second rater ignores life outside of work
      • A third rater drives compassion out of the organization by driving work without regard to the person.
  • A first rater always reacts the same – giving the team a trusting environment in which to concentrate on themselves and their work
    • A second rater does not attempt to react consistently
      • A third rater lets emotions drive each response – keeping the team tense and on edge.
  • A first rater embraces change and encourages the team to take the opportunities that come with each change
    • A second rater ignores change and tries to keep the illusion that everything is the same
      • A third rater uses change to drive their own growth or to scare the team.
  • A first rater shows vision, inspires and leads their team towards worthy goals
    • A second rater manages a project by deliverable dates and resource tracking and scheduling
      • A third rater acts on whatever random “opportunities” come their way.

Can you determine whether the Dilbert “boss” is a first rater, second rater, or third rater?

Destructive Criticism - January 26, 2012

As part of Harvey Brightman’s Master Teaching class, he presents “compare and contrast” as one of the higher forms of learning goals.  I found that comparing and contrasting the first, second, and third raters was a good fit for my learning style, helping me better understand what a “first rater” should be about.

Adam Feuer, a recent addition to my visible university of colleagues, prefers another approach: creating aspirational lists.  He did a wonderful job of translating the above “first, second and third raters” into an aspirational list for striving toward greatness.

What would you add to this list?  What distinctions do you encounter that separate the first, second and third raters?

Posted in Humor, Learning, organizing, Relationship Capital, User Experience, Working in teams | 1 Comment

Attenex Patterns History – The Critical First Year

A successful product has many parents.  No one claims a failed product.

Attenex Patterns was both a successful AND an innovative visual analytics product. The seeds of that success were sown in the six months before and after the founding of the company. The creation of the company and the creation of the product represent forty years of lessons and mentoring come to life.  This chapter in the Attenex history provides a context and a framework for what went into the success, and a reflection on what we learned on the wild ride.

Marty Smith

Like all good companies, you try to shape and control your story. The public story of how Attenex came into being can be found in the AmLaw Technology article “Seattle Sleuth” published in the Winter of 2003. This article was aimed at promoting Preston Gates and Ellis (now K&L Gates) as much as it helped to promote Attenex.

In the early spring of 2000, Marty Smith (a partner at Preston Gates & Ellis) was on his semi-annual visit to Pacific Northwest National Laboratory (PNNL) in Richland, WA, which is operated as a part of Battelle. Marty was at PNNL as part of his role with the Washington Software Alliance (now the Washington Technology Alliance).  The visits were set up as part of PNNL’s efforts to make other Washington software industry professionals aware of their work, so that they might make connections to innovations that could then be commercialized.  As Marty sat through six hours of presentations by one group after another, he was enthralled with the SPIRE tool (now IN-SPIRE) and related projects.

SPIRE was a tool developed for the CIA, NSA, DIA, and FBI to analyze documents and display them in an abstract three dimensional space.  SPIRE ran on expensive SUN workstations and was essentially a single user system for individual analysts.  As Marty watched the demo, he connected this potential solution to the explosion in electronic discovery costs for litigation that Microsoft was encountering.  He asked whether the same tool could be used to process emails.  They said sure and showed him an example.

While Marty was not a litigator, he was on the Preston Gates and Ellis committee that managed the dealings with the Microsoft account, and he was well aware of the demands from Bill Neukom (formerly Microsoft General Counsel) to help stop the exponential increase in the cost of electronic discovery.  Martha Dawson (Preston partner and head of the Document Analysis and Technology Group – DATG) had done an excellent job of continually improving the electronic discovery process through creative approaches like going to a contract pool of review attorneys instead of using expensive associates and partners to do a review.  Yet everyone was realizing that continuous improvement does not begin to keep up with costs rising steeply on the exponential growth in the volume of Microsoft email.

David McDonald

Full of excitement, Marty came back to the litigators (Martha Dawson and David McDonald) and shared his observations about this great tool at Battelle that could really help reduce the costs of electronic discovery.  When they asked how, Marty explained how the document analysis and visualization would make it much easier to see which documents were related to each other and then quickly bulldoze out the junk.  They stared back at him and basically dismissed the idea that it would have any effect on their well-honed process.  But Marty at least got them to agree to visit PNNL and view a demo of the technology.  A month later they did, and while impressed with the eye candy, the litigators still didn’t believe that it would help them with their problem.

The Enemy - Floors of Bankers Boxes of Documents

Gerry Johnson

Given the extent of the problem at Microsoft, Marty couldn’t give up, and the firm couldn’t afford to lose the revenue if Microsoft decided to move reviews to another law firm or offshore.  In addition to his role in the Technology and Intellectual Property (TIP) practice at Preston Gates, Marty was also Chair of the Working Smarter committee.  This committee looked for ways that Preston could improve their bottom line through the application of technology.  The effort grew out of initiatives Gerry Johnson launched once he became Preston Gates Managing Partner, aiming to leave a legacy of innovation as his lasting contribution to the firm.  So out of this committee’s budget, Marty decided to buy a SUN workstation to test the SPIRE software with Martha Dawson and DATG (now the K&L Gates e-DAT Group).

The SUN workstation arrived in August, 2000, and Martha agreed to assign one of her matter leads (Gregory Cody) to do a matter on the SUN that had been previously reviewed.   With synchronicity afoot, Marty and I met at a working dinner on Bainbridge Island for the BEST Foundation that our wives were officers of.  Marty was my contracts attorney when I was VP of Engineering at Aldus Corporation.

While standing around making small talk, I asked Marty what he was up to these days.  In his usual exuberance, he related that he was really excited about the technology projects he was overseeing.  “We are doing some really interesting work with information visualization, computational linguistics, natural language processing, knowledge management, data mining, and complex document assembly.”

I laughed and said “I didn’t know there was a single lawyer that could string those terms together, let alone have some understanding of what they mean.”  Having just left Primus Knowledge Solutions and in the process of forming my own consulting company, I said that I would be interested in coming by to see what they were up to and get pointers to the Battelle folks so I could go learn more about their tools.  We did the perfunctory, “sure let’s keep in touch” and said good night.

The next morning I got a call at 8am from Marty asking me to get my rear end into Preston Gates in Seattle as soon as I could.  He had described my skills to Gerry Johnson and told him he thought I would be just the right person to help Preston Gates evaluate the technology.  He also said that if the technology worked, Preston would be very interested in forming a company to bring the technology and solution to market.  So he wanted to make sure that I looked at the evaluation both from a technology standpoint and from a business formation standpoint.

Within a couple of weeks, Gregory Cody had managed to process part of a recent matter that DATG had reviewed.   The preliminary statistics were amazing.  With a tool designed for something else, he got two to three times the productivity of the current way of looking at things one email at a time in Outlook.  He did not miss any documents that had been found with their current method, and he found several responsive documents that had been missed by the linear review process.  Everyone was blown away by the implications and by the big jump in productivity.  With this first test and a tool not designed for this purpose, we’d gotten a two- to three-fold productivity gain and better quality in an already innovative environment that had worked very hard to get 10 to 20% productivity improvements each year.

Now that we knew we were on to something, the evaluation effort picked up a lot of steam.  We brought in four more lawyer reviewers, trained them on the technology, and bought a few more SUN workstations.  This group went through five more matters of differing degrees of complexity to see if different reviewers on different matters could achieve the same kind of productivity and quality gains.  All of the tests were successful.

In parallel, we started negotiating with Battelle to get the changes that were needed to put the system into production and to figure out a business relationship for moving forward.  On the technology front, SPIRE needed a lot of work on the importing and exporting side to eliminate several manual steps that the Preston Gates IT people were having to go through.  One of the key issues was to develop a way to dedupe the materials being fed into the analysis engine to further reduce the amount of material that the attorneys would have to review.

If you think about the nature of email, there are a lot of duplicate emails within an organization.  When I send an email to somebody, there is a copy in my outbox and a copy in your inbox.  If I send an email to several people, the duplicate email problem multiplies.

We quickly learned that Battelle was not able to move at the speed of development we needed.  We wanted to move at Dot Com speed while they were used to moving at government speed, or “furlongs per fortnight.”  As a result, David McDonald finally got fed up and over a weekend wrote a Visual Basic program to dedupe Outlook/Exchange .PST files.  This program eventually became something called MiniMe and was put into production by Kim Church’s IT group within a few weeks.
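The deduping idea itself is simple, even if production-hardening it is not. Here is a minimal sketch of the technique in TypeScript – a content fingerprint over the fields that identify a message, keeping the first copy seen. This illustrates the general approach, not David's actual MiniMe program, and the choice of fields is my assumption:

```typescript
// Minimal dedupe sketch (not the actual MiniMe program): treat two emails
// as duplicates when sender, recipients, date, subject, and body all match,
// and keep only the first copy seen. The field choice is an assumption.
import { createHash } from "crypto";

interface Email {
  from: string;
  to: string[];
  sentAt: string; // e.g. an ISO timestamp
  subject: string;
  body: string;
}

function fingerprint(e: Email): string {
  const canonical = [
    e.from,
    [...e.to].sort().join(","), // recipient order shouldn't matter
    e.sentAt,
    e.subject,
    e.body,
  ].join("\u0000");
  return createHash("sha256").update(canonical).digest("hex");
}

function dedupe(emails: Email[]): Email[] {
  const seen = new Set<string>();
  const unique: Email[] = [];
  for (const e of emails) {
    const key = fingerprint(e);
    if (!seen.has(key)) {
      seen.add(key);
      unique.push(e); // sender's copy and each recipient's copy collapse to one
    }
  }
  return unique;
}
```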

I have to admit that I felt pretty embarrassed that a senior partner at Preston Gates would sit down and write a program to do the deduping.  Having been away from coding as long as I have, that option would never have occurred to me.  I marveled at David’s skills in being both a lawyer and a good technologist.  A few weeks later I found out a bit more about McDonald.  I knew from my previous interactions with him that he was a renowned intellectual property litigator and that he and his litigation partner, Karl Quackenbush, had represented Microsoft in many of their high visibility IP cases.  What I didn’t know was that David was so bored at Harvard Law School that, during his second year, he went over to MIT and got a Masters in Computer Science.

While we were getting all of the good news from our testing of SPIRE, on the business front we were getting nowhere.  When we started, we assumed that we would set up a joint venture with Battelle to pursue the commercialization of the technology.  They would do the product development and we would do the sales, marketing, and support.  As time went by, they proved themselves slow and unreliable at meeting their commitments to make the necessary changes to SPIRE.  When I did the code due diligence, it became clear that SPIRE was a 10+ year old “spaghetti code” tool that would be very difficult to maintain and support.  They were rewriting the code for Windows NT (the version now called IN-SPIRE), which looked promising, but it was still a single user version and we wanted to be able to have up to 50 attorneys working on the system simultaneously.

It became clear that they had no money to invest in the joint venture; all they could contribute was their technology.  The joint venture would have to pay them to continue development of the technology, and they would not give the joint venture any rights to the source code.  Then we found out that they had licensed the technology to two other spinouts (Cartia, whose ThemeScape product was later bought by Aurigin, and a biological systems visualization company, OmniViz).  These spinouts had no restrictions on the markets they could supply the technology to, so we would not be able to get any kind of exclusive for the legal market.  In short, Battelle was not going to be a very good partner.

In parallel with these business activities, I was doing a lot of research on what was publicly available about information visualization and visual analytics.  I became convinced that it would be relatively easy to build the necessary document analysis and visualization from the ground up.  You needed a lot of computing horsepower, but the basic algorithms for getting started had been published.  Further, it was clear that you needed to take a database approach to the problem so that multiple attorneys could review a matter at the same time.  I could not convince Battelle that this was a mandatory requirement.

The other fly in the ointment was finding a business model that would let us make money in this environment.  We actually spent more time figuring out a business model than evaluating the technology.  When you have an application of technology that delivers 10X levels of productivity improvement, one of the first things you destroy is the business model of your customers.  Up to this point, all eDiscovery reviews were done with hourly billing: you counted the hours the reviewers worked, multiplied by their hourly billing rate, and sent the bill to the client.  Your profit is the delta between the billing rate and the labor rate, with some overhead thrown in.  Yet with this technology, even at only a 5X improvement in productivity, review hours drop to a fifth of what they were, cutting both your customers’ top line and bottom line by 80%.  We struggled for months to figure out how to solve this dilemma.  It is generally not considered good practice to destroy the business model of your customers.

In the end the answer was easy, but it took us a long time to see it because the “billable hour” is so wired into the legal profession.  We had to move to some form of fixed-price billing, and we eventually hit on dollars per megabyte processed.  This also helped the customer budget: once they knew how many megabytes of data they had, they knew what their bill was going to be.  So Preston restated their historical billing in megabytes instead of hours, computed what their labor cost would be with the new technology at a 3X overall productivity factor, subtracted the two, and decided to split the difference with the client.  Thus, both parties won.  The customer got an immediate 30% discount on their billings and Preston got an equivalent uplift in their profits.  Further, Preston was now incented to be as efficient as possible in reviewing documents: for every increase in productivity, there would be an equivalent gain in profitability for the law firm.
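To make that arithmetic concrete, here is one reading of the calculation with hypothetical numbers (the actual Preston rates are not disclosed in this account): if labor is $90 of a $100-per-megabyte historical rate, a 3X productivity factor frees up $60 of labor cost per megabyte, and splitting that difference gives the client a 30% discount while the firm’s per-megabyte margin rises by the same $30.

public class MegabytePricing {
    public static void main(String[] args) {
        // All dollar figures are hypothetical; only the structure of the
        // calculation follows the account above.
        double ratePerMb = 100.0;     // historical billing, restated per megabyte
        double oldLaborPerMb = 90.0;  // labor cost embedded in that rate
        double productivity = 3.0;    // 3X overall productivity factor

        double newLaborPerMb = oldLaborPerMb / productivity;  // $30
        double savings = oldLaborPerMb - newLaborPerMb;       // $60 freed per MB
        double newPricePerMb = ratePerMb - savings / 2.0;     // $70: split the difference

        double discount = 1.0 - newPricePerMb / ratePerMb;    // 30% off for the client
        double oldMargin = ratePerMb - oldLaborPerMb;         // $10 per MB before
        double newMargin = newPricePerMb - newLaborPerMb;     // $40 per MB after

        System.out.printf("price $%.0f/MB, discount %.0f%%, margin $%.0f -> $%.0f%n",
                newPricePerMb, discount * 100, oldMargin, newMargin);
    }
}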

The following diagram shows how productivity evolved over time and how Attenex’s and our channel partners’ revenue increased:

Viability of Attenex Patterns

It was now January of 2001.  The next big issue was how to fund and staff what was to become Attenex.  From the beginning of my involvement, I had put forth a skeletal business plan estimating that we would need $10 million in funding to get through the first several phases of product and market development.  This business plan also called for traditional venture capital funding.  The problem was that the economy was tanking and the Dot Com Bust was under way, so venture capital was drying up for new ventures.  In late January, though, we had a big breakthrough.  In working through the business model, we realized that if we could get to even a 5X productivity increase, then Preston Gates could afford to fund the company with the excess profits they would make from the DATG business by using our technology – assuming that they continued to increase the amount of electronic discovery business they could generate from their clients.  Thus, Preston would solve both their clients’ problem of reducing the cost of electronic discovery and create a new company that could generate additional value over time by selling the products to other law firms.

From a staffing standpoint, the timing was fortunate: lots of good software engineers were available as the Dot Com Bust unfolded.  My first choice for the key software engineer and architect of the products was Dan Gallivan, who was at Akamai.  I had talked to Dan over the preceding several months to check my assumptions on how easy or difficult it would be to develop something with the capabilities of SPIRE.  He thought it was doable, but harder than I thought.  I kept feeding him articles I came across, and as he began to understand the problem he came to the same conclusion I had.  However, he was happy at Akamai.  Then I got a call one day: Dan had just found out that he and his team would be laid off soon, and they really wanted to continue working together as a team.  He wanted to know if Preston was serious about forming a company, because if they were, he felt he could bring his core team over to the future Attenex.

I knew that Preston was serious: we had made several business plan presentations to the Executive Committee and to the partners, and they were close to making a decision.  All along I had been very clear that I had no interest in taking an active management role with the company, as I wanted to continue with my own consulting firm and continue teaching graduate school.  I also wanted no part of being in a venture-funded company.  Then, when it became possible for Preston to fund the company, I reluctantly agreed to be the founding CEO until such time as we were big enough for me to go back to focusing full time on the product.

Pure Potential - First Office Space

Everything came together the last week of March, 2001.  The Preston Gates Executive Committee approved the business plan and the funding for the new company.  We filed the articles of incorporation and made job offers to the team of seven from Akamai.  We located some space next to the DATG group on the 14th floor of the Bank of America building and opened our doors on Monday, April 1, 2001.  We had no desks, no computers, no office supplies, nothing.  However, we did have a great starting team.

We had plenty of flip chart paper and pens, and we started designing the business, the organization, and the products.  By the end of the week, we had computers for everyone, along with desks and the basics to start actually producing something.  We also had a name – Newco Inc.  None of us could believe that this name wasn’t taken when we went to file, but there it was.  We grabbed it.  It would have to do until we could bring in a marketing type to figure out a name for what we were up to.

First Brainstorming Session (Kenji, Lynne, Skip) = First Helping of Candy

Formation of Newco – April 2001

While we were working the business plan side to form the company, DATG was putting the MiniMe deduping tool and the SPIRE application into production for electronic discovery.  DATG was getting hit with an increasing volume of discovery, and the combination of these two tools was helping stem the tide.  We knew it would take us a while to build something that went beyond SPIRE, so the first order of business was to clean up MiniMe, make it more robust, and improve its performance.  The engineering team rewrote MiniMe as some serious Java code and within two months had the tool up, running, and in production.  Our first product component, the Redundancy Suppression Tool (RST), achieved our goal of outperforming MiniMe by a factor of 10.  Everyone was quite impressed and quickly saw the benefit of hiring experienced software engineers.

Multi-colored Post It Notes Documenting the DATG Workflow

During this hectic start-up period, we ran into our first set of intellectual property and ethics challenges.  During the due diligence evaluation of SPIRE and Battelle, I had looked cursorily at the SPIRE code and had the Battelle folks give me general descriptions of their approach to analyzing and visualizing document collections.  Our attorney friends made it clear that we could not take any chances in imparting any of that knowledge to the engineers.  So I had to stay clear of the design process for Patterns, and we had to make sure that none of the engineers ever saw the SPIRE application being used by DATG.  All I could tell the team was that we knew the visualization of documents would dramatically increase the productivity of electronic review and that we needed a system that could handle multiple reviewers working on the same matter at the same time.  Thus the team would need to take a database approach rather than the sequential-file, in-memory design that was the basis of SPIRE.  While everyone was frustrated with these arrangements, setting up the Chinese wall was a great insurance policy against any threat of intellectual property violations.

First Cluster . . . of Software Developers

By late May, the team had developed the first prototype of our document visualization tool, code-named Haystack, as in finding needles in a haystack.  The quick development of RST had certainly impressed everyone, but there wasn’t much to see.  With Haystack we had our first visualization and UI prototype.  The following diagram illustrates what seems so crude now:

6/1/2001 - The Very First "Haystack" Prototype

The Preston partners closely involved with funding us were most impressed.  Yet during the demo a light bulb went on for our Preston Gates funders: “If you could develop this so quickly, do we really have sustainable IP?  Can’t others develop the same thing as quickly as you just did?”  The following email exchange captures the dialog with the Chairman of the Attenex Board, Gerry Johnson, explaining the ups and downs of software innovation:

-----Original Message-----
From: Skip Walter [mailto:skip@newco.prestongates.com]
Sent: Wednesday, May 23, 2001 11:08 AM
To: marty@prestongates.com; marthad@prestongates.com; davidm@prestongates.com; marywi@prestongates.com
Cc: Dan Gallivan; gerryj@prestongates.com
Subject: Haystack Visualization Demo

As part of my weekly meeting with Gerry I am going to show him a demo of the visualization prototype that we’ve been working on codenamed haystack.  The demo with Gerry is somewhere between 1 and 1:30pm.  If you get a chance, join us then or come down later this afternoon.  We’ll keep a copy of the demo available if today doesn’t work out so that we can show you later at your convenience.

The operative word here is that it is a prototype and is subject to all the unreliability aspects of early code.

We’re real happy with the way the architecture has turned out.  We’re disappointed with the visualization layer, as the graphics package we used for prototyping (AWT) was pretty inappropriate for the task.  The other layers are working great and we can demonstrate the loading of Outlook/Exchange .PSTs into the SQL database, analysis of those files, frequency calculations, clustering, orienting, rendering the clusters to the screen, and then some level of manipulation.

7/10/01 The Proud Parents of the Early Version - Holly, Jim, Kenji, Eric, Lynne, Dan

We’ve learned a lot in an incredibly short time and it appears that most of the architectural decisions that the team made worked out well.  We’re pretty happy with the early results from the compute intensive tasks of analysis, clustering and orientation.

Now that we’ve got a baseline we can start the user testing for the UI.

We’ll also show a quick prototype that we’ve done in Excel that looks at the direct manipulation of the key concepts which is leading us to believe that a mixed mode interface of the detail text and the visualization of the whole may be a better way to go.

I am simply in awe of what the team has done in less than three weeks of working the problem.  As we shift from technology centric design and work with the HCD team we should get to a very usable tool very quickly.

Look forward to seeing you soon.

Skip

-----Original Message-----
From: Johnson, Gerry (SEA) [mailto:gerryj@prestongates.com]
Sent: Wednesday, May 23, 2001 2:18 PM
To: Skip Walter
Subject: RE: Haystack Visualization Demo

thanks for the show Skip – sorry for the dumb questions

-----Original Message-----
From: Skip Walter [mailto:skip@newco.prestongates.com]
Sent: Wednesday, May 23, 2001 2:28 PM
To: Johnson, Gerry (SEA)
Subject: RE: Haystack Visualization Demo

There are no dumb questions in this realm.  What was impressive is how quickly you and David saw the potential of the concept frequency map.  I expect David to jump on those concepts because of his closeness to the problem.  The treat is when you find something that is pretty quickly understood by those who don’t spend all day close to this kind of problem.  From a software development standpoint, that’s when you get really excited.  So to see you “get it” so quickly and then start to ask great questions about how and where it could be used was wonderful.

Thanks again for your trust and support.  We’re racing to get this stuff into Martha’s hands to start making a real difference.

-----Original Message-----
From: Skip Walter [mailto:skip@newco.prestongates.com]
Sent: Wednesday, May 23, 2001 5:51 PM
To: Johnson, Gerry (SEA)
Subject: RE: Haystack Visualization Demo

One of the questions you asked today was whether this was a hard problem that we are working on. I gave you a quick answer.  Let me give you a little more reflective answer.

My assumption as to the intent of the question is “if we can get this far in 2-3 weeks then will others be able to replicate what we are doing in a relatively short amount of time?”

My answer gets at the paradox of software development and in many ways the paradox of any creative insight.  Over the centuries many things have seemed impossible until someone has the creative moment when an “Ah hah” shows up.  Once they reduce the idea to practice and show it to others, everyone goes “of course, why did we think the problem was so hard.”

What previously took a lot of work now becomes relatively easy to copy.

On one level, visualization of the magnitude that we are talking about is a very tough problem to crack.  I’ve been interested in the technology and its applications for well over 30 years since I first became exposed to EKG and EEG processing on a PDP-12 minicomputer with a graphics display.  Lots of great researchers and minds have tried to figure out how to do interactive visualizations that actually have some real world payback.  Good results have been few and far between in the white collar or professional office productivity arena.

As you can see by the many layers that we had to implement to get to the first level of a visualization tool, this is a problem that is difficult in many areas:

    • How do you make meaningful semantics out of a single document and a document corpus?
    • How do you relate documents to one another?
    • How do you display the results in such a way that you can see the whole and view the details?
    • How do you manipulate the display to achieve some domain specific result?

Even these four things have not been possible on their own until very recently, let alone able to work together to produce what is needed for visualization.  In many ways the problem is similar to what Peter Senge describes about the development of the commercial aviation industry in his book The Fifth Discipline:  The Art and Practice of the Learning Organization:

“The DC-3 brought together, for the first time, five critical component technologies that formed a successful ensemble.  They were:  the variable-pitch propeller, retractable landing gear, a type of light-weight molded body construction called “monocoque”, radial air-cooled engine, and wing flaps.  To succeed, the DC-3 needed all five; four were not enough.  One year earlier, the Boeing 247 was introduced with all of them except wing flaps.  Lacking wing flaps, Boeing’s engineers found that the plane was unstable on take-off and landing and had to downsize the engine.”

Necessary and Sufficient

There are at least two keys to our being able to race as fast as we are:

    • We have a real problem that is suitable to the technology.  For this we owe Marty Smith for his insight of connecting visualization to electronic discovery when he went to visit Battelle a year ago.  Then Martha and David and their crew picked up on the insight to show that it really did work.
    • The research on the component technologies (document semantics, computable clustering algorithms, and rendering software/hardware) is developing just as we are getting the cheap computing cycles, large screen displays, fast networks, and very large storage devices.

Thirty years ago when I was working on this problem, we had 8,000 words of memory (versus 512 megabyte personal computers today) on a computer that was 1/10,000 the speed of today’s computers.  Even five years ago there wasn’t enough computing power on a supercomputer to do what we showed you today on our desktop.

So at one level, the problem is quite difficult when you look at all the things that had to be solved before we could even start our development activity.  On the other hand, now that they are basically solved and we show people our tool then other people can more easily replicate it.

The other thing that comes into play is whether other people will be motivated to copy what we are doing.  There are “products” in the world that for one reason or another people just don’t copy.  Disney has shown people how to build a successful theme park for over thirty years, yet no other theme parks come close to the Disney experience.  At a much smaller level, Primus Knowledge Solutions has shown how to build a successful knowledge management product but nobody has decided to go after that market yet.

On the other hand, disk drive manufacturers rapidly copy innovations and technology from other manufacturers (see Clayton Christensen’s The Innovator’s Dilemma).  I haven’t been able to find a pattern to this range of competitive motivation or lack thereof in my own research or other studies.

On a more myopic note, last September when I started on this project my initial assumption was that the visualization was a very hard problem.

Starlight Visualization

That was largely because I’d been interested in it for 30 years and hadn’t seen anyone be commercially successful.  Then we saw a couple of good research projects with our Battelle friends in SPIRE and Starlight.  At first blush, both looked worthy of being the result of very bright people working on the problem for a very long time.  But for different reasons in each case, it became clear that if you started today with the advances in the above component technologies and research, then the problem was a lot easier to solve.  It appeared that you could replicate what they were doing in a matter of months.

Four months ago, when I first talked to Dan about the opportunity and the visualization module and how long he thought it would take to develop a tool, his answer was several person-years.  I kept after him and kept pointing him to research papers, and nothing was denting his estimates.  Then something clicked once he saw the quality of linguistic analysis packages like those from Inxight.  He realized the problem was more solvable than he had previously thought.  His estimate to get to a prototype came down to person-months.  Then, once he started on it, he realized that between the algorithms described in the research papers, Inxight’s LinguistX, the capability of the Microsoft SQL database, and the experience of the different team members, the prototype could be done in person-weeks.  Even though Dan is a very experienced computer architect and graphics expert, it still took him over four months to see that it was a matter of integrating ensemble technologies rather than having to invent all of the pieces first.

Lastly, as I’ve tried to convey several times, getting to a prototype, and even getting the tool into production, is a lot less time consuming than all the things it takes to come up with a generalizable product.

Yet, what excites us is that we now have more than enough of a working architecture to start doing quality design and usability prototype iterations with the target users and be able to turn those prototypes around quickly.  We have something that we can credibly show to others (like KPMG) that is all ours.

What I can’t answer is what others will do once they see what we’ve done as we put the product to use in other law firms and clients.  It probably comes down to economics.  Nobody will do much until it appears that there is a $100 million market.

Which brings me to the last example: how we came to understand the economics and business model of Adobe and Photoshop.  When we did the merger between Aldus and Adobe, we couldn’t wait to get to the point in the process where both sides shared their detailed financials.  The jaw-dropping surprise for us at Aldus was the size of the Photoshop revenue stream.  The most optimistic market analyst pegged the total market for photo editing software at that time (1993) at $15 million per year.

Given that Aldus had $5 million of that market with our product PhotoStyler, we felt we were in pretty good competitive shape.  Imagine our competitive embarrassment when we found out that Photoshop revenues were greater than $150 million per year.  Nobody knew.  That let Adobe keep the market for so long, because nobody else thought it was a very big market.

At this point what I am trusting is that we are at the confluence of the deep expertise that Martha Dawson and her team have built, along with the deep expertise that Dan and his team have, along with the continued increase in computing performance from software and hardware advances.  By being first in this arena given the above combination, I believe that we will be OK.  Staying first requires us to execute a marketing and sales operation as well as we are executing the product development task along with having a comprehensive vision for where this stuff goes.

Also, the lesson that I’ve learned from the association with the Institute of Design is the importance of having not just one innovation but a system of innovations.  Somebody may copy one or two of the things you do, but they can’t begin to copy the system.  A key part of my deciding that Attenex was the place to invest my time and talent was the wealth of ideas that Preston Gates has as a result of the Working Smarter initiative, along with my working relationship with Dan Gallivan and our ability to quickly generate a system of innovations.

What you saw this afternoon represents a very small portion of what we are crafting for a suite of innovations and products.  In much the same way that Mitch Kapor saw that VisiCalc needed graphics and text (Lotus 1-2-3) in order to become an order of magnitude larger business, and then Microsoft trumped their efforts by combining Excel with Word and PowerPoint to create an even larger business, I believe that between Preston Gates and Attenex we can articulate and build our equivalent of Microsoft Office.  We will not get stuck, as the VisiCalc folks did, in thinking that our innovation will last forever as a viable business.

The above are some extended thoughts.  They keep me optimistic that we are on the right track and can succeed, but they by no means keep me complacent.

Thanks again for coming down this afternoon and seeing a snapshot of the progress we are making.

From: Johnson, Gerry (SEA) [gerryj@prestongates.com]
Sent: Wednesday, May 23, 2001 8:27 PM
To: Skip Walter
Cc: +Executive Committee (FIRM); Smith, Martin F. (SEA)
Subject: RE: Haystack Visualization Demo

Skip – thanks very much for this thorough and thoughtful response.  In its face, a substantive response defeats me other than to say that I had some insight into your eventual answer here, but couldn’t be more pleased to have this complete an explanation.   Thanks again.  I’m sharing this with our management committee and Marty.  G

While the demo was very impressive, we still had a long way to go.

As mentioned above, most of our development work to date had been from a technology centered design viewpoint.  In parallel, the human centered design team was observing Martha Dawson’s DATG group to understand the overall workflow as well as the work of the individual document reviewers.

The overall HCD process we followed is:

Human Centered Design Process

We rapidly iterated between the User Research and Prototyping phases.

The underlying graphics package we were trying to use did not scale, nor did the algorithms we were trying to employ.  We were struggling to get good performance on hundreds of documents when we knew we had to display tens of thousands to millions of documents.  So we switched from Java to C++ to get the best machine performance and adopted OpenGL as our graphics standard so that we could do 2D and 3D information displays.  The following slides illustrate our progress in analyzing and displaying documents.

As part of our technology centered design focus, we believed that our ultimate visualization would need to be in 3 dimensions. We got very early indications that the lawyer reviewers were very uncomfortable with navigating and understanding a 3D abstract conceptual space.  However, as good technologists we figured that we could overcome this problem.  As it turns out, we never did.

We used a variety of tools to prototype 3D clustering.  The following screen shot looks at one of the prototypes we did in OpenGL to do quick interaction studies with potential users.  Just slightly more sophisticated than a paper prototype.

6/7/01 SeeMore Sketches

Now that we had some document processing going on, we could exploit different panes within the 3D interface to show different types of clusters in the overall document space along with a concept pane on the right and a mail message viewing pane on the bottom.  The core components of the user interface were starting to show up.

6/19/01 The First Real Clusters

With the basic components prototyped, it was time to experiment with large document collections, different styles of clustering, and what kinds of concept extraction we wanted.

7/13/01 Clusters, Concepts, and Document Text
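For readers who want a feel for what a “style of clustering” means in code, here is a minimal k-means sketch in Java over dense document term vectors.  It is illustrative only – the actual clustering algorithms we experimented with are not described in this post.

import java.util.Random;

public class KMeansSketch {

    // Assign each document (a dense term vector) to the nearest of k
    // centroids, recompute the centroids, and repeat until stable.
    static int[] cluster(double[][] docs, int k, int maxIters) {
        Random rnd = new Random(42);            // fixed seed for repeatability
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++)             // naive init: random documents
            centroids[c] = docs[rnd.nextInt(docs.length)].clone();

        int[] assign = new int[docs.length];
        for (int iter = 0; iter < maxIters; iter++) {
            boolean changed = false;
            for (int d = 0; d < docs.length; d++) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double dist = 0;
                    for (int j = 0; j < docs[d].length; j++) {
                        double diff = docs[d][j] - centroids[c][j];
                        dist += diff * diff;    // squared Euclidean distance
                    }
                    if (dist < bestDist) { bestDist = dist; best = c; }
                }
                if (assign[d] != best) { assign[d] = best; changed = true; }
            }
            if (!changed) break;                // assignments stable: done

            // Recompute each centroid as the mean of its assigned documents.
            for (int c = 0; c < k; c++) {
                double[] sum = new double[docs[0].length];
                int count = 0;
                for (int d = 0; d < docs.length; d++) {
                    if (assign[d] != c) continue;
                    count++;
                    for (int j = 0; j < sum.length; j++) sum[j] += docs[d][j];
                }
                if (count > 0)
                    for (int j = 0; j < sum.length; j++) centroids[c][j] = sum[j] / count;
            }
        }
        return assign;
    }
}

Each resulting cluster becomes one of the groupings rendered on screen; the interesting design work is in choosing the vectors, the distance measure, and the number of clusters.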

Over the course of a month, we tried more than 20 different types of concept displays but none were really helping potential users make sense of the information displayed.

7/23/01 Kenji’s Prototype Pie Cluster Display

Now that we had basic capabilities, it was time to experiment with what kinds of user interactions were required.  We explored the different parts of the user interface to see which components should be actionable and what should happen when we clicked on a component.  Little did we know that for the next six years we would constantly have to tweak what it meant to do hit highlighting as we added or changed functionality.  The more information you display, the clearer you have to be about what is activated.

7/24/01 Clusters with Click Document Highlighting

Bill Gates, Sr.

An important value that Preston Gates brought to the development process was bringing technology industry luminaries by for demonstrations of what we were up to.  One of the most fun demos we did was for William H. Gates, Sr. (yes, that is Bill Gates’ dad and one of the named partners of Preston Gates & Ellis).  Gates, Sr. would usually come by Preston Gates in the summer to address the summer associates on his views of what it means to be a lawyer.  For this summer visit, Gerry Johnson persuaded Gates, Sr. to come by and see that a law firm could fund innovative software development.  Gerry also wanted to give Gates, Sr. visibility into how large the eDiscovery problem was growing for Microsoft, since Preston Gates did most of Microsoft’s eDiscovery work.

We prepared more extensively than usual for this demo.  By the time Gates, Sr. arrived he was clearly quite tired.  I was concerned that since we were running late we would put him to sleep in a darkened room.  So I shortened my introductory slides and got right to the demo.  We showed the current state of the system:

7/25/01 Demo for William Gates, Senior

Just at the point that I thought I had put Gates to sleep, he straightened up and looked at me and said “So how many lawyers does it take to annotate a given document and the collection of documents with all those concepts?”

I replied “No lawyers at all.  Our content analytics software is able to figure out all the meaningful concepts to each collection of documents.  Everything you are seeing was done automatically.”

He looked at me again like I hadn’t understood the question, “No.  Really.  How many lawyers did it take to mark up these concepts?”

I repeated “None.”

Bill Gates, Sr., then turned to Gerry Johnson and said “Gerry.  Really.  How many lawyers does it take to identify these concepts?”

Gerry answered “None.”

As the implications of what we’d just demonstrated dawned on him, he asked “Has anybody demoed this to my son Bill, yet?”

Nervously, we all answered that we had not demoed it to anyone at Microsoft yet.

Gates then almost shouted “Well, will you quickly go over and demonstrate this to him so he’ll quit writing those stupid emails that get him in all that trouble with the Justice Department?”

After we stopped convulsing in laughter, we went on with the demo.  Clearly, he understood the implications.

Until this point, we had used proximity as the orienting principle, both for clusters in the larger space and for documents within a cluster.  Proximity implied that documents or clusters that were close together were more related than those that were far apart.  While proximity worked at the larger view, it did not work well at the cluster level: it gave some information, but didn’t really convey how the documents or clusters were related.  This screen shot shows our first prototype for orienting documents within a cluster.

8/6/01 Orienting Documents within a Cluster
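For a sense of how a system can compute that kind of relatedness at all, here is a Java sketch using cosine similarity between term-frequency vectors, a standard measure from the information retrieval literature.  It is illustrative only – the similarity measures we actually used are not spelled out in this post.

import java.util.HashMap;
import java.util.Map;

public class Proximity {

    // Count how often each term occurs in a document.
    static Map<String, Integer> termFrequencies(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+"))
            if (!token.isEmpty()) tf.merge(token, 1, Integer::sum);
        return tf;
    }

    // 1.0 means an identical term profile; 0.0 means no terms in common.
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            normA += e.getValue() * e.getValue();
        }
        for (int v : b.values()) normB += v * v;
        return (normA == 0 || normB == 0) ? 0
                : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double sim = cosine(
                termFrequencies("quarterly revenue forecast for the board"),
                termFrequencies("board meeting revenue forecast"));
        System.out.printf("similarity = %.2f%n", sim); // more similar = drawn closer
    }
}

A layout engine can then place highly similar pairs near each other, which is also exactly where proximity stops helping: it shows how related two documents are, but not why.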

At the same time that we were working on clusters and orientation, we started working on both color and transparency.  Our users needed some way to distinguish the markings on a document (responsive, non-responsive, privileged …).  We wanted the transparency capability so we could show more information on smaller screens by having overlay areas (and it also looked cool).  We quickly found that we had to worry about human factors issues like color schemes for those with different forms of color blindness.

8/21/01 Need a little work on those colors HCD Folks
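As a sketch of what a marking-to-color mapping can look like, the snippet below uses hues from the Okabe-Ito color-blind-safe palette and an alpha channel for the transparent overlays mentioned above.  The marking categories follow the text; the specific colors are illustrative, not the scheme we actually shipped.

import java.awt.Color;
import java.util.EnumMap;
import java.util.Map;

public class MarkingColors {

    enum Marking { RESPONSIVE, NON_RESPONSIVE, PRIVILEGED, UNREVIEWED }

    // The fourth Color argument is alpha (0-255): lower = more transparent.
    static final Map<Marking, Color> PALETTE = new EnumMap<>(Marking.class);
    static {
        PALETTE.put(Marking.RESPONSIVE,     new Color(0, 114, 178, 200));   // blue
        PALETTE.put(Marking.NON_RESPONSIVE, new Color(230, 159, 0, 200));   // orange
        PALETTE.put(Marking.PRIVILEGED,     new Color(204, 121, 167, 200)); // reddish purple
        PALETTE.put(Marking.UNREVIEWED,     new Color(153, 153, 153, 120)); // gray, fainter
    }
}

Hues like blue and orange stay distinguishable under the common forms of color blindness, which is the human factors point the HCD folks kept making.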

As we got to the end of August, all of the many prototypes started coming together into a coherent whole.  We could content-analyze the documents, store them in a database, display the documents in clusters and spines, color-code the documents, show the key concepts, and display a document in a viewer.

8/30/01 Full Document Viewing - Docviewer Shows UP

Once we had gotten this far, I could finally see a 30-year dream come true.  I wrote this memo to Attenex employees and to our Preston Gates partners at the end of August, 2001.

Email Message from Skip to Attenex Staff:  August 31, 2001

In life there are little things and big things.  In the context of business, August 15, 2001, was a “big thing” day for me.

In 1968 I was fortunate to get a job in a psychophysiology research lab at Duke Medical Center at the start of my sophomore year in college.  We ran experiments on human subjects looking at their physiological responses to behavior modification therapies and to different psychiatric drugs.  To better deal with experimental control and real time data analysis of EEGs and EKGs, we purchased a Digital Equipment PDP-12 (the big green machine).  It had a mammoth 8000 bytes of memory and two pathetic tape drives that held 256,000 bytes of storage.

Embedded in the rack of the computer was a big green CRT which could display wave forms as well as text.  A simple teletype device served as the keyboard.  While we were controlling the experiments, we displayed in real time the wave forms from the physiological data of the human subjects.  We experimented with multi-dimensional displays of EKG vs EEG vs the user task analysis.  It was so fun to get lost in “data space.”  [A former HCDE student, Denise Bale calls this “dating her data”.]

Along with doing all the programming for the lab experiments, I got to use the machine to play my first computer game (Spacewar).  It was so cool being able to control a space ship in the solar system and have it affected by the gravity of the planets on the CRT.  There was no mouse at that time, but we used several potentiometers and toggle switches to control the X, Y and Z coordinates along with the firing of guns.  Controlling green phosphor objects was a real feat for those of us who have no hand eye coordination.

One semester, while procrastinating on writing several term papers, I wrote a text formatting application called Text12, which was modeled on Text360 for the large IBM mainframes of the time.  The formatting commands were eerily similar to the HTML markup that we know today.  The result was that I could enter and edit the text of my papers and then print them out on a letter quality device.  It eliminated all the messiness of using a manual typewriter and white-out.   Several times at 2am I hallucinated about the combination of Spacewar, Complex Wave Form Pattern Detection, and Text12 providing the ability to take the electronic texts that I was creating, analyze them, and display them in three dimensional spaces by the relatedness of the concepts within the papers.  I got carried away thinking of a new document being indexed and “blasting” links throughout the galaxy of documents.  I could almost feel the gravitational attraction of the important documents.

Over the next 10 years, as computer processing power grew from the PDP-12 to the PDP-11 to the DEC VAX computers (wow – 4 megabytes of virtual memory space for a program and 60 megabyte hard disks), I would periodically do a midnight coding project to try to bring my hallucinations from 1968 into reality.  Nice idea, but there were never enough algorithms, CPU power, or memory.  And there were precious few electronic text sources available to actually index unless I wanted to type them in myself.

As I became a manager and began to acquire research budgets, I would squirrel away a little money each year to see if the technology was ready to tackle the vision.  The technology was never ready and there was relatively little research into the indexing and display of document collections until the early 1990s.  The other side of the coin was that there was no clear idea of the business value of such a tool.  We’d use these prototypes to try and impress internal funders to create some larger research projects.  But nobody ever funded us beyond the prototypes.

During this time I hooked up with Russ Ackoff of the Wharton School at the University of Pennsylvania.  One of the many “idealized designs” that he worked on, and published a book about, was a distributed National Library System.  This design called for all the texts to be in electronic format and available for searching.  A key feature of the system was to generate “Invisible Universities”: that is, using the reference lists of published papers and books, find out who references whom.  This system could then create influence diagrams of idea evolutions.  I was really hooked then on the possibilities.

One of the many reasons I joined Primus a couple of years ago was to bring this vision to reality using the Primus Knowledge Engine as a foundation.  We even licensed the Inxight ThingFinder software to help us do the indexing we needed to automatically author “solutions” for our knowledge base.  We got started but it became clear that we had no visualization talent within the engineering department and no clear idea of the business driver for such a technology.

Which brings us to Preston Gates and Ellis (now K&L Gates) and Attenex.  Thanks to Marty Smith, who connected this semantic indexing and visualization with the electronic discovery problem, we now had the baseline tool to see the dream come true.  Thanks to the efforts of Eric, last week we were able to connect the indexing capabilities of Microsoft tools so that we could inhale MS Office documents into the document analysis tool and generate concepts from Word, PowerPoint, Excel, HTML, and Adobe PDF documents.  Then we were able to load an Attenex Patterns Document Mapper database with my research papers from the last several years about customer profiles, document visualization, and knowledge management.

Then Kenji and Dan figured out how to cluster long documents and normalize the frequencies of the concepts.  And Lynne added the final layer of being able to add a document viewing window for the multiple formats along with cleaning up the interaction with the concept window panes on the right side of the Patterns display.

At 5PM yesterday, I saw my 30-year dream come alive.  I was able to display my research papers.  I navigated around the clusters and the concepts.  And then when I selected a document, whether it was MS Word or a PDF, up it would pop in its own document viewer.  Unbelievable.  The only thing missing is the ability to index the books that I have in my home library.

But synchronicity strikes again.  Just this week, Amazon.com started selling electronic versions of the popular management texts that are a core part of my library.  They come in either Microsoft Reader or Adobe eBook format.  I quickly bought ebooks in each of the formats to see if we could index them.  Of course they are protected from that.  So close, so far.  But then it occurred to me: books are intellectual property.  I bet that someone in the Intellectual Property Practice at K&L Gates was involved in negotiating the licenses for some of the book properties.  Sure enough, several folks in the group were.  So hopefully the last step in the journey of the dream is close at hand: the ability not only to pour my own writings, email, research reports, and published papers into the Attenex Patterns document database, but also to get full length books indexed.

Now I will be able to SEE the idea and concept relationships between all these wonderful publications that I can only fuzzily keep in my human memory today.  I can’t wait to glean new insights as I index more documents and as I use the re-cluster on anchor documents to see relationships I’ve never been able to see before.  I look forward to being able to publish meta-data about a corpus of documents and open up a whole new field of Document Mining.

As a researcher, teacher, and business person, yesterday was the happiest day of my professional life.  My heartfelt thanks to all of you who’ve helped bring these concepts to life.

Skip

Well, we were really cooking now.

In the last year (six months before company formation and six months after), we’d gone through the first three phases of the HCD process – user research; prototypes (paper, behavioral, and appearance); and value (monetization and supporting human values).  Now it was time to turn to the other 90% of software development – user experience and turning a prototype into a product.

Posted in Attenex, Attenex Patterns, Content with Context, eDiscovery, Human Centered Design, Idealized Design, Intellectual Capital, organizing

What process should we use?

“The future is not a choice among alternative paths offered by the present, but a place that is created – created first in mind and will, created next in activity.  The future is not someplace we are going to, but one we are creating.  The paths to it are not found but made, and the activity of making them changes both the maker and the destination.”  – John Schaar

I am often asked by colleagues either to facilitate a process or to recommend a process that they should use for some planning effort.  Over the last forty years, I’ve participated in many planning activities (most poorly facilitated, with poor results) and facilitated many more.  The value of the result is directly proportional to the thoughtfulness that goes into selecting the right process AND selecting and preparing the participants.  In recent years, very few management teams are willing to spend more than half a day in any kind of planning meeting, so the selection of a process has to accommodate the time demands of the participants.

As I reflect on the hundreds of processes that I’ve learned or created for the needs of a particular group, there seem to be two primary forms.  One form relies on what Robert Fritz calls “structural tension.”  This form is most powerful when combined with Gregory Bateson’s “difference that makes a difference.”

    • What is the current reality?
    • What is the desired future state?
    • What are the differences between the two?
    • What is the difference that makes the biggest difference?

Once the important difference that makes a difference is identified, then that difference becomes the place to start for implementation.

The other primary form springs from the work of John Grinder, who created what he describes as the Outcome Frame orientation.

    • What are we trying to create?
    • How will we know we created it?
    • What resources do we have to get started now?
    • What other opportunities does this lead to?

Every time I use the Outcome Frame process, I am amazed at the creative energy that is released in the group.

Grinder contrasted the Outcome Frame with the process that most of us had drilled into us in our schooling or business careers – the problem or blame frame:

    • What is the problem?
    • How did it get this way?
    • Who caused it?
    • What are you going to do to fix it?

When I have time to do team building, I generally have the group split into teams of four and give each team a problem to work through using the Blame Frame.  No matter how much time is given, none of the groups succeeds.  After ten minutes, I have the groups switch to the Outcome Frame to work on the opportunity.  The creative energy that is released is always exciting to witness.

When all is said and done the mark of a good process facilitator can be summed up in the following two states of mind:

  • People need what they need, not what we happen to be best at.
  • I unconditionally accept where you are, but respect you enough to help you reach your ideal.
Posted in Content with Context, Idealized Design, Learning, organizing, Teaching, Working in teams

Heuristics for Building Great Products – Gordon Bell

One of the great entrepreneurs of the 20th Century died in 2011 – Ken Olsen, who founded Digital Equipment Corporation (DEC).  For 23 years, Gordon Bell served as the Executive Vice President for Research and Development (both hardware and software), working closely with Ken Olsen to generate innovative hardware and software systems.  I had the privilege of learning from both men during my years at DEC when I was building ALL-IN-1.  One of the joys of being on email in the early 1980s was getting messages like the following from Gordon Bell.  It is a tribute to Gordon that most of these recommendations are as fresh today as they were thirty years ago.

INTEROFFICE MAIL TO KEN OLSEN FROM GORDON BELL
Dated Sunday 15 March 1981

Gordon Bell

Product goodness is somewhat like pornography: it can’t fully be described, but we’re told people know it when they see it. There are lots of heuristics in the book Computer Engineering. Since quality and competitive products must be our number one focus in these next generations, these heuristics are intended to help us. Only the four following need be attended to:

  • A responsible, productive and creative engineering group
  • Understanding the design constraints
  • Knowing when to create new direction, when to evolve, and when to break with the past
  • Ability to get the product built and sold

ENGINEERING GROUP
As a company whose management includes mostly engineers, we encourage engineering groups to form and design products. With this right of organizing, there are some responsibilities.

  • Understanding leadership who understands the product space and who has engineered successful products.
  • Having skills and disciplines required in the respective product area, eg: ergonometrics, acoustics, radiation, microprogramming, data bases, security, reliability.
  • Having skills on board to make the proposal so that we adhere to the cardinal rule of Digital, “He Who Proposes, Does”. Approving a plan based on no implementers violates this.
  • Having openness, external reviews, clearly written descriptions of the product for inspection.
  • As a corollary of being prepared with leadership and skills, we occasionally enter very new areas, requiring research and advanced development; product commitment should not be made until fully operational breadboards exist.
  • As a corollary, start up groups with no previous or poor previous track record, may need review.

PRODUCT METRICS
Since most of our products are evolutionary, engineering is responsible for knowing their product area, in terms of:

  • Major competitor cost, performance and functions together with what they will introduce over the next 5 years.
  • Leading edge, innovative small company product introductions.

DESIGN CONSTRAINTS
Design constraints, such as acoustics and radiation, are basically useful because they limit choice on often trivial design decisions. We should meet the following design constraints, and if one is unacceptable, go about an orderly change:

  • DEC Engineering practice for producibility. These assimilate the critical external standards, such as VDE and FCC, as rapidly as possible.
  • Information processing and communications standards, such as COBOL, Codasyl, IEEE 488 and EIA.
  • Information processing standards as determined by the key supplier, such as IBM SNA. For example, all eight versions of VISICALC we are implementing should be compatible with external VISICALCs.
  • The architecture of existing DEC products. For example, future editors should be compatible with the past editors, unless it can be shown experimentally that there is a significant (x2) benefit to change. These include:
    • ISPs of the PDP-8s, several PDP-11s, VAX-11, 8048, 8080, and likely a future 16-bit micro.
    • Physical busses for interconnect. Fundamentally this insures that future products can evolve.
    • File, command language, human interface, calling sequence, screen/form management, keyboard, etc.
    • We must not be undone by historically poor standards which constrain us to poor products. Currently, the 19″ rack and the metal boxes we put in it, and then ship on pallets to our customers, act as constraints on building cost-effective PDP-11 Systems. The “mind-set” standard is impeding our ability to produce products that meet the 20% cost decline. A target should be the shipment of systems in cardboard boxes which the customer assembles.
  • Ability to be implemented easily in the natural language, given that we are selling products in all countries.

WHEN TO CREATE A NEW PRODUCT DIRECTION OR WHEN TO EVOLVE THE OLD

Given all the constraints, can we ever create a new product, or is everything just an evolutionary extension of the past? Also do we know or care where product ideas come from? There are a whole set of places to look for products, but that’s another set of heuristics, and the object of these heuristics is simplicity. The important aspect about product ideas is:

  • Ideas must exist to have products!

It is hard to determine whether something is an evolution or just an extension. If you look at our family tree of products, like the one for our computing systems, which every product group should have and maintain, the critically successful products all occur the second time around. Some examples: the 6, KA, KI, KL, 2080; TOPS-10, Tenex, TOPS-20; the 5, 8, 8S, 8I/L, 8E/F/M; OS8-RT11; the 11/20, 40, 34; RSX-11A…M; TSS-8, RSTS; various versions of FORTRAN, COBOL and BASIC all follow this; the LA30, 36, 120; VT05, 50/52, 100; RK05, RL01/2.

Some heuristics in designing good products:

  • All products, whether they be revolutionary (we have yet to have any that are really in this category), or creating a new base, or evolutionary, should:
    • Offer at least a factor of two in terms of cost-effectiveness over a current product. If we build unique products that do not compete with ourselves, then we will have funds to build really good products.
    • Be based on an idea which will offer a distinguishing attribute, or set goals and constraints. For VAX this included a factor of two in algorithm encoding, along with the ability to write a single program in multiple languages. The VT100 got distinction by going to 132 columns and doing smooth scrolling.
    • Build in generality and extensibility. We have not historically been sufficiently able to predict how applications will evolve, hence generality and extensibility allow us and our customers to deal with changing needs. We have built several dead end products with the intent of lower product cost, only to find that no one wants the particular collection of options. In reality, even the $200 calculators offer a family of modular printer and mass storage options. For example, our 1-bit PDP-14 had no ability to do arithmetic or execute general purpose programs. As it began to be used, ad hoc extensions were installed to count, compare, etc. and it evolved into a digital computer.
    • Build complete systems, not piece parts. The total system is what the user sees. A word processing system for example includes: mass storage, keyboard, tube, modems, CPU, documentation including how to unpack it, the programs, table (if there is one, if not then the method of using at the customer table), and shipping boxes.
  • A new product base, such as a new ISP, physical interconnection specification, Operating System, approach to building Office Products must:
    • start a family tree for which we expect significant evolution to occur on, otherwise the investment for a point product is so short term and hence is likely to not pay off. In every case where we have successful evolutionary products, the successors are more successful than the first member of the family.
  • A product family can evolve several ways as described on page 10 of Computer Engineering. The evolutionary paths are lower cost and relatively constant performance, constant cost and higher performance, and higher cost and performance. In looking at our successful evolutions:
    • Lower cost products can’t get by without adding functionality too, as in the VT100.
    • Constant cost, higher performance products are likely to be most useful, as economics of use are already established and a more powerful system such as the LA 120 will allow more work to get done.
  • A product evolution is likely to need termination after successive implementations because new concepts in use have obsoleted its underlying structure. All structures decay with evolution and the trick is to know what the last member of a family is, such as the 132 column card and then not build it. This holds for physical components, processors, terminals, mass storage, operating systems, languages and applications. Some of the signs of product obsolescence:
    • It has been extended at least once and future extensions render it virtually unintelligible. (For example, PDP-8 memory addressing and ISP was extended three times.)
    • There are significantly better products available using another base.

SELLING AND BUILDING THE PRODUCT
Buy-in of the product can come at any time. However, even if all the other rules are adhered to, there is no guarantee that it will be promoted, or that customers will find out about it and buy it. Some rules about selling it:

  • It has to be producible and it has to work. This rule, although seemingly trivial, is often overlooked when explaining why a product is good or not.
  • There should have been a business plan that several different marketing groups have contributed to in terms of ordering and selling. Just as it is unwise to depend on a single opinion in engineering for design and review, it is even more important that several different groups are intending to sell the product. Individual marketers are just as fallible as unchecked engineers.
  • Never build a product for a single customer, although a particular customer may be used as an archetype user. Predicating a product on a sale is the one sure way to fail!
  • It should be done in a timely fashion according to the committed schedule, at the committed price and with the committed functions.

Now isn’t it clear why building great products should be so easy?

Are there any heuristics that should be added? Or are patently wrong? Or need clarification?

Comments please!

The first paragraph with the 4 points says it all, but in case there’s need for detail, there are another 30 or so which follow. . . . in the words of Mies van der Rohe, “God is in the details.”

Posted in ALL-IN-1, Idealized Design, User Experience, WUKID

Too Much to Know – The Death of the Long Form Book?

At dinner the other evening at Crush with my valued all-things-marketing-and-branding colleague, Katherine James Schuitemaker, I shared that I had finally produced a draft of the book on Attenex Patterns I’ve wanted to write for a long time. She patiently listened without interrupting as I energetically talked through the topics and ideas I wanted to highlight.

When I finished and took a deep, expectant breath, I asked “so what do you think?”

Providing the gift that only long time colleagues have permission to do, she looked at me and then said “Skip, that is so old school.  You’ve waited so long to publish your first book that the world of book publishing has passed you by.  Toss the book idea out and start developing the iPad app that both of us really want.”

While this was not the comment or pat on the back I was looking for, I knew I was about to get something better.  So of course I had to ask “what do you think that app looks like?”

Katherine was at her eloquent, software-conceptualizing best as she launched into a synthesis of threads we’ve talked about for the twenty years since we first met at Aldus (now Adobe).  Energized, she leaned across the table and lamented, “I am so tired of the linear book.  I am so tired of reading books and making notes in them that become completely inaccessible.  What I want is a tool that is the combination of the two tools we built at Attenex – Structure for authoring and Patterns for making sense of all the reference materials.”

“I want you to provide the same content that you were going to put in your book but now do it in app form.  But most importantly, I want that app to be the starting point of what I need.  I need to be able to put in a current project that I am working on and have your application point out the gaps between your framework and what I am doing.  I don’t want more information in the form of static content.  I want dynamic, connected knowledge that is ‘news I can use’ when I need it and in the context of what I need.”

“Skip, you have to go back to your original vision at Attenex of connecting authoring (Structure) with discovering (Patterns).  Stop with this book nonsense.  This is your legacy that only you can do.  The previous forty years are all prelude to preparing you for this killer app.”

Well, she had me now.  Legacy.  That was really unfair, to entice me with the thought of producing a legacy.

While one part of my brain knew that she was on to something important, I couldn’t let go of the idea of writing a book now that I finally had the energy, motivation, and stamina to do the writing.  With my high tolerance for ambiguity, I looked her straight in the eye and said, “I’m going to be incongruent for a bit.  My gut tells me that you are right on.  Yet my analytic brain is fighting your idea something fierce.  So I’m going to let my analytic self argue with you for a half hour, and then I am going to agree with you and change course in some fundamental ways.”

Katherine was very patient with me for the next half hour as I served up objection after objection.  She did her best not to laugh as we’d played this game many times before.  Finally, as my “objection energy” ran out, I said “OK.  New game.  How do we marshal the resources to make it happen?”

As we parted, Katherine turned to me and commanded “Skip, free us from the tyranny of the linear book!”

My test for any good idea is how much energy I have for the idea when I get up the next morning.  Based on the frenetic writing that occurred over the next couple of days and the meetings I set up to corral the resources, this idea was clearly the right one.  I sent this email message to Katherine the next morning:

Katherine,

I don’t even know where to begin.

You have such a wonderful way of hearing, synthesizing, guiding, shaping and blowing my mind.

I knew there was a reason I’ve been procrastinating on writing the book.  The main reason I bought the iPad at launch was that in Steve Jobs’ announcement he talked about the future of the iPad being the combination of video and books – the Vook.  That’s what I wanted to get experience with – learning how to author a Vook, or something beyond it.  I knew it couldn’t be linear, but 60 years of reading linear books blinded me.  Yet I’ve been disappointed in all the attempts I’ve seen: the many Vooks I’ve bought, Flipboard, The Daily, and the Business Model Generation iPad app.

Using the iPad for several hours every day has transformed the way I work, play and think.  But not far enough.  Last night you moved the needle far beyond what I’m experiencing.

As a starting point, I’m attaching where I’ve gotten so far in authoring what I want to say about Patterns for the iPad.  After last night this is a start, but there needs to be so much more.

And my mind wouldn’t shut off last night.

In retrospect, the seeds of your insights last night were planted in this memo to the Attenex team.  While I kept coming back to these thoughts over the years, I clearly didn’t understand the implications of the last couple of paragraphs, even though those ideas spawned the personal patterns work that Eric Robinson and I iterated through.  I was blinded by seeing Patterns only as a discovery and review tool.  Yet all the puzzle pieces were there in Patterns once we added meta-data tagging.

Email Message from Skip to Attenex Staff:  August 31, 2001

In life there are little things and big things.  In the context of business, August 15, 2001, was a “big thing” day for me.

In 1968 I was fortunate to get a job in a psychophysiology research lab at Duke Medical Center at the start of my sophomore year in college.  We ran experiments on human subjects looking at their physiological responses to behavior modification therapies and to different psychiatric drugs.  To better deal with experimental control and real time data analysis of EEGs and EKGs, we purchased a Digital Equipment PDP-12 (the big green machine).  It had a mammoth 8000 bytes of memory and two pathetic tape drives that held 256,000 bytes of storage.

Embedded in the rack of the computer was a big green CRT which could display wave forms as well as text.  A simple teletype device served as the keyboard.  While we were controlling the experiments, we displayed in real time the wave forms from the physiological data of the human subjects.  We experimented with multi-dimensional displays of EKG vs EEG vs the user task analysis.  It was so fun to get lost in “data space.”  [A former HCDE student, Denise Bale, calls this “dating her data.”]

Along with doing all the programming for the lab experiments, I got to use the machine to play my first computer game (Spacewar).  It was so cool being able to control a space ship in the solar system on the CRT and have it affected by the gravity of the planets.  There was no mouse at that time, but we used several potentiometers and toggle switches to control the X, Y and Z coordinates along with the firing of guns.  Controlling green phosphor objects was a real feat for those of us who have no hand-eye coordination.

One semester, while procrastinating on writing several term papers, I wrote a text-formatting application called Text12, modeled on Text360 for the large IBM mainframes of the time.  The formatting commands were eerily similar to the HTML we know today.  The result was that I could enter and edit the text of my papers and then print them out on a letter-quality device, eliminating all the messiness of a manual typewriter and white-out.  Several times at 2am I hallucinated about combining Spacewar, complex wave form pattern detection, and Text12 to take the electronic texts I was creating, analyze them, and display them in three-dimensional space by the relatedness of the concepts within the papers.  I got carried away thinking of a new document being indexed and “blasting” links throughout the galaxy of documents.  I could almost feel the gravitational attraction of the important documents.

Over the next 10 years, as computer processing power grew from the PDP-12 to the PDP-11 to the DEC VAX computers (wow – 4 megabytes of virtual memory space for a program and 60-megabyte hard disks), I would periodically do a midnight coding project to try to bring my hallucinations from 1968 into reality.  Nice idea, but there were never enough algorithms, CPU power, or memory.  And there were precious few electronic text sources available to index unless I wanted to type them in myself.

As I became a manager and began to acquire research budgets, I would squirrel away a little money each year to see if the technology was ready to tackle the vision.  It never was, and there was relatively little research into the indexing and display of document collections until the early 1990s.  The other side of the coin was that there was no clear idea of the business value of such a tool.  We’d use these prototypes to try to impress internal funders into creating some larger research projects.  But nobody ever funded us beyond the prototypes.

During this time I hooked up with Russ Ackoff of the Wharton School at the University of Pennsylvania.  One of the many “idealized designs” he worked on was a distributed National Library System, about which he published a book.  This design called for all texts to be in electronic format and available for searching.  A key feature of the system was to generate “Invisible Universities”: using the reference lists of published papers and books, find out who references whom.  The system could then create influence diagrams of idea evolution.  I was really hooked on the possibilities.

One of the many reasons I joined Primus a couple of years ago was to bring this vision to reality using the Primus Knowledge Engine as a foundation.  We even licensed the Inxight ThingFinder software to help us do the indexing we needed to automatically author “solutions” for our knowledge base.  We got started, but it became clear that we had no visualization talent within the engineering department and no clear idea of the business driver for such a technology.

Which brings us to Preston Gates and Ellis (now K&L Gates) and Attenex.  Thanks to Marty Smith, who connected this semantic indexing and visualization with the electronic discovery problem, we now had the baseline tool to see the dream come true.  Thanks to the efforts of Eric, last week we were able to connect the indexing capabilities of Microsoft tools so that we could inhale MS Office documents into the document analysis tool and generate concepts from Word, PowerPoint, Excel, HTML, and Adobe PDF documents.  Then we were able to load an Attenex Patterns Document Mapper database with my research papers from the last several years about customer profiles, document visualization, and knowledge management.

Then Kenji and Dan figured out how to cluster long documents and normalize the frequencies of the concepts.  And Lynne added the final layer: a document-viewing window for the multiple formats, along with a cleaned-up interaction with the concept window panes on the right side of the Patterns display.

At 5PM yesterday, I saw my 30-year dream come alive.  I was able to display my research papers.  I navigated around the clusters and the concepts.  And when I selected a document, whether it was MS Word or a PDF, up it popped in its own document viewer.  Unbelievable.  The only thing missing is the ability to index the books in my home library.

But synchronicity strikes again.  Just this week, Amazon.com started selling electronic versions of the popular management texts that are a core part of my library.  They come in either Microsoft Reader or Adobe eBook format.  I quickly bought ebooks in each format to see if we could index them.  Of course, they are protected against that.  So close, so far.  But then it occurred to me: books are intellectual property.  I bet someone in the Intellectual Property practice at K&L Gates was involved in negotiating the licenses for some of the book properties.  Sure enough, several folks in the group were.  So hopefully the last step in the journey of the dream is close at hand: the ability not only to pour my own writings – email, research reports, and published papers – into the Attenex Patterns document database, but also to get full-length books indexed.

Now I will be able to SEE the idea and concept relationships among all these wonderful publications that I can only fuzzily keep in my human memory today.  I can’t wait to glean new insights as I index more documents and as I use the re-cluster on anchor documents to see relationships I’ve never been able to see before.  I look forward to being able to publish meta-data about a corpus of documents and open up a whole new field of Document Mining.

As a researcher, teacher, and business person, yesterday was the happiest day of my professional life.  My heartfelt thanks to all of you who’ve helped bring these concepts to life.
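
Reading that 2001 email now, the “cluster long documents and normalize the frequencies” step that Kenji and Dan worked out maps onto what any modern text pipeline does.  As a minimal sketch only – not the actual Attenex code, and assuming TF-IDF with length normalization plus k-means as stand-ins for whatever they actually built – the idea in Python looks like this:

    # Sketch only: length-normalized term frequencies (TF-IDF) plus k-means
    # clustering, standing in for the Attenex approach, which isn't shown here.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [
        "customer profiles and knowledge management",
        "document visualization and concept clustering",
        "knowledge management for customer support",
        "visualizing concept networks across documents",
    ]

    # L2 normalization keeps long documents from dominating the clusters.
    vectors = TfidfVectorizer(stop_words="english", norm="l2").fit_transform(docs)

    # Group the documents into concept clusters.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

    for label, doc in zip(kmeans.labels_, docs):
        print(label, doc)

The normalization matters for exactly the problem the email names: without it, long documents swamp the concept frequencies and every cluster degenerates into “long versus short.”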

Katherine, your comment about authoring the Patterns story in an iPad version that the reader could add to in interesting ways reminded me of this slide – my content, our content, their content.

In working with clients over the last couple of weeks, I’ve been adding to the above and then making it a mirror image, with the author on one side and the reader on the other.

The author’s side goes something like this:

  • Collect
  • Annotate
  • Curate
  • Distribute
  • Engage – in the fullest sense of social media and the Cluetrain Manifesto
  • Recycle

The reader’s side goes something like this:

  • Collect
  • Understand
  • Relate to current situation
  • Relate to other information and signals I’m getting
  • Engage
  • Act on the information

The Aggregage website looks at this phenomenon from a marketing perspective and lays out a number of tools that help marketers cope with information overload.

Another term for this is transactive content.

The last step is monetization: what are the ways you can monetize the content? While these methods are somewhat specific to social media, they provide a good range of the monetization models open to you:

Top 10 Monetization Trends for Social Media and Microcommunities

“When it comes to savvy, proven, and incredibly successful tech investors, Ron Conway is a legend.  He has a gift or an uncanny sense of shrewdness, or a fusion of both, to identify the real opportunities that will transform into successful exits and also fuel and inspire aggressive innovation in the process.

“To help entrepreneurs, startups, and industry leaders capitalize on the tremendous opportunity that social media presents, Conway offers his vision for the top 10 ways to monetize real-time conversations:

  1. Acquiring followers
  2. Advertising – Context and display ads
  3. Syndication of new ads
  4. User-authentication; verifying accounts
  5. Commerce
  6. Payments
  7. Enterprise CRM
  8. Analytics; analyzing the data
  9. Coupons
  10. Lead generation”

Even the original vision for Attenex had the two pieces I felt were critical to a viable company – the authoring piece (Attenex Structure) and the reviewing/discovery piece (Attenex Patterns).  While they were sort of an integrated whole in my head, they always appeared as two separate pieces to everyone else.  Again, another thread of ideas dropped in the process of business focusing.  It really is the combo of Structure and Patterns and then going way beyond with what devices like the iPad allow for.

As a result, I have never seen Patterns as an authoring tool – it’s a discovery tool.  Even our core slide on what visual analytics means (courtesy of Sean McNee) has no authoring component.

So the Tom Sawyer part of me wants to kick this off by getting all of the participants in Attenex Patterns over the years to contribute to a wiki-like environment to begin the profiling.

  • Employees
  • Preston Gates Contributors – the board, Kim Church and her IT crew
  • Customers – Jones Day …
  • Channel Partners – FTI, SPI, Forensics Consulting, KPMG, Strategic Discovery
  • Competitors – Applied Discovery, Dolphinsearch, Stratify, Recommind …
  • Consultants – George Socha (EDRM), Geoff Bock, Patrick Inouye (Attenex patent attorney) …
  • Influencers – Monica Bay at Legal Technology News, Sedona Group, Judges (Shira Scheindlin)

In an ideal world these folks would also become part of the startup community to enhance the personal patterns.

As part of joining the network, each person must profile himself or herself (setting up data for template use, or as a predictor for your stuff):

  • Myers-Briggs Score
  • Social Styles Score
  • Educational Background
  • Work experience (pointer to LinkedIn profile)
  • Number of years involved with eDiscovery
  • Role in relationship to Attenex Patterns
  • Authored Stuff
    • Favorite memory of Patterns
    • Tell me a story of your involvement
    • What role did the product play in your life
    • Key events in time
    • Artifacts
      • People photos
      • Screen shots
      • Important emails
      • Use Cases

The above information would be used to build out the content as well as to create the fodder for the semantic networks, social networks, and event networks.  The meta tags on each of the authored content giblets and artifacts would place the content into the appropriate cluster or spine.  And unlike Patterns, we would allow content to live in multiple clusters.
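
To make that concrete, here is a minimal sketch – hypothetical field and cluster names, not a design we ever built – of how the contributor profiles above could attach to tagged content giblets, with one giblet landing in several clusters at once:

    # Sketch only: hypothetical profile fields and cluster tags.
    from dataclasses import dataclass, field

    @dataclass
    class Contributor:
        name: str
        myers_briggs: str          # e.g. "INTJ"
        social_style: str          # e.g. "Analytical"
        years_in_ediscovery: int
        role: str                  # relationship to Attenex Patterns

    @dataclass
    class ContentGiblet:
        author: Contributor
        body: str
        meta_tags: set = field(default_factory=set)

    def clusters_for(giblet, cluster_tags):
        # Unlike Patterns, a giblet belongs to EVERY cluster whose tags it matches.
        return [name for name, tags in cluster_tags.items() if giblet.meta_tags & tags]

    eric = Contributor("Eric", "INTJ", "Analytical", 8, "engineer")
    memory = ContentGiblet(eric, "First demo of the Document Mapper", {"demo", "patterns"})
    clusters = {"product-history": {"demo"}, "patterns-stories": {"patterns"}}
    print(clusters_for(memory, clusters))   # ['product-history', 'patterns-stories']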

From our conversation last night, what really hit home is the comparison in eDiscovery between linear review and what we created with Attenex (conceptual review, and now automated review – predictive coding).  I hadn’t made the leap from the linear book to the conceptual book – or resource, or dynamically mapped content, or maybe key-term content in context.

And then you suggested starting the iPad app with the story and then moving through overlaid layers:

  • The “Make Sense of My Stuff” layer – the ability to add your stuff to the core book (like Tableau Public)
    • For my own learning
    • To see patterns across lots of other authors’ work
    • To profile the project much like we are profiling the contributors

Now the next set of thinking is to follow through on the thread of how this transforms reading and writing (in the fullest sense of multiple mixed media – text, photos, video, audio, simulations …), learning, and publishing.  It is learning for the Facebook generation.

This “content in context” meme is clearly in the air.  Both Amazon, with its new Kindle publishing format, and Apple, with its iPad iBooks textbook announcement, are creating more flexible formats for thinking outside the linear book.  Easy-to-use toolsets are emerging from Vook and Pugpig.

Amazon Children's book example

The Wall Street Journal weighed in with their “Blowing Up the Book” article on the new eBook formats.

The Novel remixed: Chopsticks Children's book

So Katherine, many thanks for knowing me better than I know myself, and pointing me in a new “legacy” direction.

Peace,

Skip


One of the problems with powerful ideas and paradigm shifts is that once they get into your mind, you see the world through that lens. When I met with Duke professor Kate Hayles recently, she kindly shared several chapters of her new book How We Think: Digital Media and Contemporary Technogenesis. As I flew back to Seattle, I immersed myself in these Adobe PDF chapters on my iPad. I loved the insights and implications of what she was describing for the new forms of literature in digital media.  I particularly liked her pointers to the Digital Humanities Manifesto 2.0, which describes the first two waves of the new field:

“Like all media revolutions, the first wave of the digital revolution looked backward as it moved forward. Just as early codices mirrored oratorical practices, print initially mirrored the practices of high medieval manuscript culture, and film mirrored the techniques of theater, the digital first wave replicated the world of scholarly communications that print gradually codified over the course of five centuries: a world where textuality was primary and visuality and sound were secondary (and subordinated to text), even as it vastly accelerated the search and retrieval of documents, enhanced access, and altered mental habits. Now it must shape a future in which the medium‐specific features of digital technologies become its core and in which print is absorbed into new hybrid modes of communication.

“The first wave of digital humanities work was quantitative, mobilizing the search and retrieval powers of the database, automating corpus linguistics, stacking hypercards into critical arrays. The second wave is qualitative, interpretive, experiential, emotive, generative in character. It harnesses digital toolkits in the service of the Humanities’ core methodological strengths: attention to complexity, medium specificity, historical context, analytical depth, critique and interpretation. Such a crudely drawn dichotomy does not exclude the emotional, even sublime potentiality of the quantitative any more than it excludes embeddings of quantitative analysis within qualitative frameworks. Rather it imagines new couplings and scalings that are facilitated both by new models of research practice and by the availability of new tools and technologies.”

Yet it was with great pain that I read Hayles as an “old school” long-form book in the new digital iPad medium.  Every paragraph pointed to interesting-sounding research, and now I was going to have to wait until I was back at my desk to chase all those links to the source material.  Along with what Kate was describing, I really wanted a new form of Attenex Patterns so I could view these documents and concepts in relationship to each other – through semantic links, social network links, and event networks.

Social Network and Semantic Network Views in Attenex Patterns
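
What I was wishing for on that flight is essentially a multi-layer graph over one’s reading.  As a toy sketch – hypothetical document names, assuming the networkx library is available – the three kinds of links could sit in one structure and be pulled apart into separate views, the way Patterns does:

    # Sketch only: semantic, social, and event links over the same readings.
    import networkx as nx

    g = nx.MultiGraph()  # a multigraph lets two items be linked in several ways

    g.add_edge("hayles_how_we_think", "weinberger_too_big_to_know",
               kind="semantic", concept="networked knowledge")
    g.add_edge("hayles_how_we_think", "digital_humanities_manifesto",
               kind="social", relation="cites")
    g.add_edge("hayles_how_we_think", "weinberger_too_big_to_know",
               kind="event", when="read on the same flight to Seattle")

    # Pull out just the semantic layer, the way Patterns separates its views.
    for u, v, d in g.edges(data=True):
        if d["kind"] == "semantic":
            print(u, "<--[" + d["concept"] + "]-->", v)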

I just wanted to scream as I realized the “purgatory” I am now in, between the old school and the awaited next generation of digital media.

As I finished up with Kate’s chapters, I turned my attention to David Weinberger’s latest book Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts are Everywhere, and the Smartest Person in the Room is the Room. Imagine my continued pain when I came across this passage from Weinberger:

“I am aware that it is at best ironic, and at worst hypocritical, that I have written a long-form book, available only on paper (or on paper’s disconnected electronic simulacrum), that is arguing for the strengths of networks over books. My apology is of the unfortunate sort that does not justify the action so much as humiliate the perpetrator. And so: I am sixty years old as I write this, and am of a generation that takes the publication of a book as an achievement—my parents would have been proud. It’s also not irrelevant to me that book publishers still pay advances. Beyond these primordial and pathetic motivations—seeking money and Mommy’s approval—there are some other factors that mitigate the irony. I’m not saying “Books bad. Net good.” The privilege of holding the floor for the length of 70,000 words can allow ideas to develop in useful ways; if this book spends more time discussing networks than books, it’s because its author assumes that the case for books is made implicitly by every schoolroom with bookshelves, every paragraph of flap copy, and every public library. Further, for the past fifteen years I’ve been working in a hybrid mode that is not inappropriate to the transformation we’re living through: I have been out on the Web with the ideas in this book since before the book was conceived, and have profited greatly from the online conversations about them. (Thank you blogosphere! Thank you commenters!) Still, not only is the irony/hypocrisy of this book inescapable, it is so familiar in this time of transition that I wish someone would write a boilerplate paragraph that all authors of nonpessimistic books about the Internet could just insert and be done with.”

While I continue to enjoy Weinberger’s long-form book and the transformation it enables in my understanding of knowledge, the echo of Katherine James Schuitemaker’s plea reverberates in my mind:

“Skip, free us from the tyranny of the linear book!”

It is long past time to go build some innovative software once again – my life pattern that repeats.

Posted in Content with Context, ebook, Human Centered Design, Intellectual Capital, Knowledge Management, Learning, social networking, WUKID | 13 Comments