In one of our many discussions on Content Management, which have continued since the early 1980s, Geoff Bock introduced me to the notion of “Transactive Content.” He had come across the term in a Gilbane report that referenced work done at Forrester.
“Forrester was one of the first of the major market research firms to target the
Internet and web applications, particularly for e-commerce. Their concept of
“transactive content” has been influential in helping drive home the critical role
of content in commerce. There is now quite a collection of terms used by other
analysts that are similar (processable content, dynamic content, active content,
actionable content, transactional content, etc.).”
Forrester (2000) defined Transactive Content as “software that blends transactions with interactivity and content over the net.” They go on to say:
“Internet commerce struggles to fulfill its promise. The reason? Web technology cannot deliver a seamless self-service experience. This report concludes that a new type of application – Transactive Content – will redefine Internet self-service. The key: support for a fluid commerce experience, from gathering information to executing transactions.
“Two mismatches undercut today’s efforts to fulfill the self-serve imperative:
- Current Web technology is too thin. The Web delivers information to and from anywhere, but it cannot handle give-and-take conversations such as on-line commerce
- Web-enabled business apps are only a partial solution. Slapping browser front-ends on internal transaction systems — like order entry or inventory — only gives customers access to “commitments.” This is not enough. Questions, answers, and decisions are not supported.
“Forrester believes that a more powerful model — Internet Computing — will subsume the Web by 2000 and lay the groundwork for high-grade self-service. Internet Computing sets up rich discussions between firms and Web visitors with:
- Two-way conversations. With live software easily delivered to the client using technology like Java and Dynamic HTML, sophisticated interactions can happen without prior setup. This approach produces intelligent clients that do real work — for example, crunching data or rendering multimedia — which powers compelling commerce.
- True sessions. Internet Computing lets the give-and-take between Net clients and servers flow coherently across time and multiple systems. This continuity underlies the self-service imperative.
“Your competitors are 18 to 24 months away from delivering advanced Transactive Content. To get there ahead of them:
- Ask Mom to buy something at your site. The first step on the path to Transactive Content is to shed the evolutionary mindset that dominates most on-line commerce thinking.
- Make friends with the right people. Get started building affiliations across the Net that will address the whole experience your customers need. Set a new breed of business development managers loose on-line, building relationships with complementary sites — connections that are much deeper than Web links. Move into what Forrester calls Syndicated Selling — embedding your content and transaction links in partner sites.
- Be gutsy about technical innovation. Transactive Content front-runners will blast away at two technical initiatives: 1) today’s production Web offering, and 2) a Transactive Content lab. Co-locate these groups to breed interaction that will incrementally enrich the current offering with new technologies – Java, Dynamic HTML, and emerging component platforms – and connect the “TC” dreamers to the rigors of quality and deployment.”
To the above list we would now add social media and the world of smartphone and iPad applications.
In 2005, my colleague Eric Robinson showed me a slide show of digital camera pictures from a trip he took on his motorcycle between Seattle, WA and Reno, NV. As he clicked to a beautiful snow-covered rock formation, I asked, “Where was that? I’d love to go visit that area.”
Sheepishly, he answered, “I don’t know. I know it was somewhere in the Cascades, but there aren’t any clues in the photo that help me remember where I was.” As a typical software architect, he bemoaned, “I sure wish they made digital cameras with a GPS device in them so that you would always know when and where you took a photo.”
“What a great idea,” I responded. Then it occurred to me that somebody must have already had that idea. So we immediately called up Google and searched for “camera and GPS.” To our amazement, a range of responses came up that included GPS camera phones and software that could combine data from a GPS device and a digital camera. We looked at the sample website and found an amazing set of automatically generated context. Given that I had a digital camera and a GPS device, I immediately jumped in my car and took photos on the way to the ferry and then over to Seattle. An overview satellite photo comes up with labels for where the photos were taken. You can click on the positional label or on one of the thumbnails on the left side of the web page. When the specific photo comes up, you see the photo, the satellite photo of the surrounding land mass, and then pointers to MapQuest (for driving directions), TopoZone (for topographic information), and the structured information (latitude and longitude, elevation, camera make and model, photographic settings).
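The core trick behind that kind of software is simple enough to sketch: pair each photo with the GPS fix nearest to it in time. A minimal version, assuming the photo timestamps come from EXIF headers and the fixes from a GPS track log (all timestamps and coordinates below are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical track log; in practice these fixes would come from a GPX file.
gps_track = [
    (datetime(2005, 6, 1, 10, 0), 47.6062, -122.3321),   # Seattle
    (datetime(2005, 6, 1, 12, 30), 47.4500, -121.4000),  # Cascades
    (datetime(2005, 6, 1, 18, 45), 39.5296, -119.8138),  # Reno
]

def geotag(photo_time, track, max_gap=timedelta(minutes=30)):
    """Return the (lat, lon) of the GPS fix nearest in time, or None."""
    fix_time, lat, lon = min(track, key=lambda fix: abs(fix[0] - photo_time))
    if abs(fix_time - photo_time) > max_gap:
        return None  # no fix close enough in time to trust
    return (lat, lon)

print(geotag(datetime(2005, 6, 1, 12, 40), gps_track))  # → (47.45, -121.4)
```

Everything else the website showed (satellite overlays, elevation, camera settings) is just lookups keyed off that recovered latitude and longitude.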
With the recent advances in camera phones, you can now have voice combined with the camera and GPS information. The iPhone 4S combines all of the above features, and it can even let you know whom you might have been talking to while you were taking your photos. Without having to wait until you get back to your PC, your photos are automatically uploaded to iCloud, and friends and family can see the content in context immediately. I am still amazed (I amaze easily) when I take photos with my iPhone and then see them magically appear on my iPad. With apps like 360 Panorama and Photosynth you can go beyond simple geotagged photos and videos all the way to 360-degree panoramic views. With the latest Dot panoramic lens you can go even further.
In business, we have the same need: placing the content that comes our way on the flood tide each day in a larger, more organized context. An email arrives from Eric. What is the context of this email? Which project does it belong to? What is the social network of people and organizations associated with this project? What is the event timeline for this email? Is it leading up to a particular deliverable on a particular date (the two dimensions of time)? What is the semantic network of other documents that are closely related to this message? What financial transactions in the form of budgets and actuals should be linked to this item? Is this message part of a sales activity, or intellectual capital that could be patented? Each of these questions leads to a collection of potential contexts for a message, much in the same way that the act of simply taking a digital photo can automatically generate additional context as part of the process of creating a travelogue that is immediately shareable.
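Those dimensions of context can be made concrete as a record type attached to each message. This is purely an illustrative sketch; every field name here is my assumption, not part of any Attenex product:

```python
from dataclasses import dataclass, field

@dataclass
class MessageContext:
    """Hypothetical context record for a single message."""
    project: str                                        # which project it belongs to
    people: list = field(default_factory=list)          # social network around the project
    deadline: str = ""                                  # event timeline: target deliverable date
    related_docs: list = field(default_factory=list)    # semantic network of related documents
    transactions: list = field(default_factory=list)    # linked budgets and actuals
    classification: str = "uncategorized"               # sales activity, patentable IP, etc.

# The email from Eric, placed in context:
ctx = MessageContext(project="Quicksilver", people=["Eric"], deadline="2005-07-01")
print(ctx.project, ctx.classification)
```

The point of the sketch is that each question in the paragraph above becomes a slot that could, in principle, be filled automatically, the way the camera fills in latitude and longitude.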
With Attenex Patterns (acquired by FTI Consulting and now a part of Ringtail) we had many of the pieces of Transactive Content, but we did not have the transaction component of eCommerce. As a result, we set up a prototyping effort, code-named Quicksilver, to explore all aspects of Transactive Content. In parallel with starting the prototype, I reflected on how we had gotten to our current understanding.
In the beginning was Office Automation (OA, circa 1980). With much hand-wringing about how people couldn’t type, the worry of OA was that we would turn every office worker into a secretary. Here we are 26 years later, and for the most part there are no more secretaries and we all know how to type. We are much more efficient, and probably more effective, with the current tools than we ever were with paper and secretaries. In the process we also became our own travel agents and graphic designers (well, OK, PowerPoint slide generators).
Now we are pretty much in an age where every knowledge worker is a content generator every single day. We turn out an incredible amount of stuff, as Attenex sees every day when it comes to electronic discovery. In the business world the content we generate is still primarily text and numbers. But as Stan Davis pointed out in The Art of Business, we are text and numbers in business, while with the advent of the iPod, iPhone, iPad, and the digital camera we are sound and pictures at home. His prophecy is that business will soon be flooded with all four forms of content: text, numbers, sounds, and pictures.
Then along came an incredible sea change as the result of the big fraud cases, homeland security, and litigation issues like the Zubulake case ($1 million judgment, $30 million sanction for electronic discovery fraud) and the Morgan Stanley/Coleman $1.5 billion sanction for not keeping and being able to produce relevant emails. Attending seven conferences on these topics in 2005, it was clear that business was in the process of making every employee a professional records manager. And along the way they are adding the burden of becoming compliance managers, regulatory compliance managers, and Sarbanes-Oxley policy wonks.
An interesting example of how bad it has gotten comes from Kevin Esposito, formerly at Pfizer. From his role in the law department, it became clear that Pfizer needed to dramatically update its records management policies and start adhering to them. This was going to cost tens of millions of dollars and take several years. Yet he got the program sold and approved with one slide. He found a slide that had been used at a manufacturing plant managers’ meeting the week before, describing the water quality results from the plant for the previous week. He put the slide up with its five bullets. He then went bullet by bullet and pointed out which regulatory agency required the information to be produced AND retained. The shortest time the information needed to be retained was two years; the longest, for one of the bullets, was seven years.
“Let me be very clear,” Kevin pointed out. “Not just this information needs to be retained; this slide needs to be retained for the longest time period of the regulations. For each document like this that we don’t retain appropriately, we are liable for sanctions ranging from $10,000 to millions of dollars.” Further, he elaborated: “And notice that this document has nothing to do with our core business, which is producing pharmaceuticals. Imagine how much worse those records retention policies are.” Needless to say, after everyone stopped swearing and fainting, he had everyone’s attention, and the records management initiative was approved. This was in 2003. Regulations, compliance, and the high stakes of litigation have made the problem much worse since then.
The above is all about the negative side of information and records retention (although, unfortunately, that is what gets everyone’s attention). What business really needs is for each knowledge worker to realize that they are incredible Intellectual Capital generators: human capital, structural capital, and relationship capital (customer capital). Yet there are no tools for personal knowledge creation and management, let alone for Intellectual Capital Management and Accounting at the enterprise level (even though, as Tom Stewart et al. have pointed out, this is the biggest unaccounted-for part of every company’s financial accounting gap).
So the challenge is: how do we take the “stick” approach and turn it into a “carrot” opportunity? Today, the simplistic response to this challenge is to do “search” better. Ideally, this means having the tools to “unite” the unstructured document pools that I want to explore: the complete deep internet, the whole enterprise, and my personal document universe (email and my hard drive). But nowhere is the context of the search kept, which at its fundamental level is the business process that triggered the search in the first place. As Autonomy puts it in their recent literature, if you have to do a search, it is a sign that your application has failed. It is the formal and informal work processes that contain the context of why a search was needed. I believe that the next big area, for companies like Genentech, is to take the documents that already exist and exploit the incredible amount of latent structure they contain.
The placeholder notion that I’m using for this arena is Transactive Content. The term comes from a Forrester Research initiative in the late 1990s to describe the advent of XML processes, but it seems to have been lost.
In 2005, Enrique Godreau, one of our board members at Attenex, challenged me to describe our prototype Quicksilver in terms that simple people could understand. The following is the result of that homework:
With a little bit of time to reflect and think about how I would talk about Quicksilver, the almost-one-sentence description comes out something like this:
Quicksilver is a way to quickly See What Matters at the personal, departmental, and enterprise levels. It allows me to visually recover things I know are in my document/data pool but can’t quite remember exactly how to get at. It enables me to discover patterns I didn’t know were there that matter to me in the moment. It automatically provides ways in which I can make my key intellectual assets more findable. Depending on the workflow, the user can also recombine the ideas that are found into a virtual document that more closely matches their intent.
The following are representative stories or use cases to illustrate the above.
What is the cost of recovery?
Marty Smith is a senior partner and transactional attorney, formerly at K&L Gates. He took on the most important and complex contracting tasks for companies like Microsoft. As he negotiates clause by clause in these complex contracts, he often has to find similar clauses in contracts that he has constructed and then modified over the past 25 years. During a user research session on a “live” contract negotiation, we watched him spend over 30 minutes trying to find examples of ways in which he had modified a particular clause. He knew he had done it about 30 times in the past but couldn’t remember for which clients and which contracts. He finally gave up and had to craft his changes from scratch, without the benefit of his previous work. With the Quicksilver Attenuated Search capability he would have found the documents containing the clause within 30 seconds. The cost to the client from lost productivity: more than $500. The cost from not doing the best work: unknown. This happens several times a week for each transactional attorney.
In the past, most corporations have simply recycled the computers of employees who have left the firm. Now firms are realizing the lost intellectual capital and the risks associated with simply deleting the work of a former employee. Many companies and law firms could use Quicksilver to quickly search and organize a former employee’s digital assets and place those assets into a “corporate memory.” The intellectual assets are both the documents that somebody crafted and the people relationships they developed.
What is the benefit of discovery?
A globally known textile manufacturer suspected that two of its salespeople were committing fraud. Law firms and accounting firms estimated that it would cost $50,000 to $100,000 and take one to three months to examine the 2 GB of email from the two salespeople. By using an early single-user prototype of Quicksilver, the sales manager was able to examine the emails and, in one hour (which included training), found over $1 million of fraudulent transactions. With such a tool, investigations of suspected employees can become routine for the appropriate manager.
The pharmaceutical industry has coined the term “freedom to operate” to describe the process of identifying early on whether the drug they are thinking of developing is already being worked on or has patent problems associated with it. Today, they often spend $100 million or more to discover and develop a new drug, only to find out after releasing it that others hold patents on that drug. With Quicksilver, at any stage in the development process, the drug researcher can combine searches of their own research, the patent database, product announcement databases, and the medical research literature to identify problems in the development of new drugs.
Anti-money laundering software is difficult to develop and generates thousands of alarms for a compliance manager. Today’s systems just look at the transaction flows going through a financial institution. What the financial company wants is a way to tie the transaction flow into the CRM system and into the emails of the high-net-worth customer managers. Quicksilver is the only tool that can provide analytics into each of these different pools of data and then visualize the results so that the number of alarms can be dramatically reduced.
In each of the cases mentioned, the combination of Quicksilver’s automatic indexing and easy access to many sources of unstructured, semi-structured and structured information combined with visualization capabilities allows a researcher or investigator to quickly see what matters without having to wade through, correlate and laboriously analyze lists of results from traditional search engines.
The above are just a few of the stories and use cases we’ve identified around the key aspects of Quicksilver. The overarching goal is to move into an area I am calling Transactive Content: content that is retrieved, analyzed, and used as part of an overall goal-directed business process. Search engines today are disconnected from the goals and processes of anyone doing research or investigations, whether as a knowledge worker inside an enterprise or as a consumer. The ability for an individual to do this research on their own materials, independent of a company having to make a global enterprise purchase, is key to personal productivity. Attenex Patterns requires an enterprise purchase and IT staffing. Quicksilver would be as easy to use and install as Google Desktop but far more powerful.
An important part of Transactive Content is making sense of citations. Citations come in many forms: URL links from one web page to another, case law citations in formal legal briefs, references in journal articles, forward and backward prior art references in a patent, and pointers to people to talk to in informal email sessions. The importance of citations even emerges from the “Workflow as a Pi Process” discussion, which observes that the goal of email is to create a contact.
I’ve been enamored with citations from my first introduction, by Russ Ackoff, to the notion of invisible universities, through to what Chaomei Chen has been doing with visualizations of citation references in formal journals. Clearly Google has made a fortune out of a very simple citation link (PageRank) and then connecting that to the powerful set of citations called AdWords and AdSense. Facebook is doing a similar thing with its EdgeRank. Each citation link has an enormous amount of information buried behind it, but I have to lose context to go chase that link. This is particularly painful when I’m reading a paper book or business article where there is no easy link to the electronic information. The hardest part is that I don’t get to see the author’s whole product: their content plus all the content they’ve drawn from to create it. In addition, I also want to see who has been referencing this content. In the legal field, this is a core part of the value that Lexis and Westlaw provide: what cases does this case reference, and who is referencing this case?
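It is worth pausing on how little machinery the basic PageRank idea needs: a page’s score is the chance that a random surfer, following citation links, lands on it. A toy sketch of the standard power iteration, with an invented three-page citation graph (0.85 is the commonly cited damping factor):

```python
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping page -> list of pages it cites."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Every page gets a base share, plus a share of each citer's rank.
        new = {p: (1 - d) / len(pages) for p in pages}
        for src, targets in links.items():
            for t in targets:
                new[t] += d * rank[src] / len(targets)
        rank = new
    return rank

# Toy citation graph: A and B both cite C, C cites A, nobody cites B.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # → C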
The Amazon Kindle, with its highlighting option, provides an interesting public sharing of annotations. As you read a book, passages that five or more people have highlighted show up as public highlights. However, what I would really like is some way to connect with the knowledge workers who are highlighting the same things I am.
I’ve felt that something has been missing to jump to the next step, which is to have some associated content that places the citation in some form of abstract concept space. I didn’t know how to get there without having the content for every citation as well as the content of the document that contained the citation. Assuming that I now have some content/context for each citation, I have fodder for some interesting joining.
Let’s say you wanted to create a formal document like a contract, a brief, or even a white paper. Instead of having to author everything, as is the case today, the author could write basically an outline of statements about what they want to put together. The tool then looks at the statements and compares them to its database of statements to find the best match. A good match would be a document that included all the statements, but most likely you would match against bits and pieces of existing documents. The tool would then bring back the pieces, and the user would select the document sub-pieces that best match their intent. The really good news about this approach, versus what we were trying to do with our Attenex Structure product, is that there is no knowledge to maintain or create beyond the original documents with their citations. 95% of the cost of knowledge management systems goes away, and yet you get over 90% of the benefit, without having to go through the pain of generalizing the knowledge that sits in a specific memo like this piece of paper.
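The matching step described above can be sketched with nothing more than bag-of-words cosine similarity. The clause fragments and the query below are invented, and a real tool would use something richer (TF-IDF weighting at minimum), but the shape is the same: score each stored passage against an outline statement and return the best.

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between the word-count vectors of two strings."""
    ca, cb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def best_match(statement, passages):
    """Return the stored passage most similar to an outline statement."""
    return max(passages, key=lambda p: cosine(statement, p))

# Invented clause fragments standing in for the database of prior documents.
passages = [
    "Either party may terminate this agreement upon thirty days written notice.",
    "All confidential information remains the property of the disclosing party.",
]
print(best_match("terminate the agreement on thirty days notice", passages))
```

The author’s outline statement pulls back the termination clause, not the confidentiality clause, with no knowledge engineering beyond the documents themselves, which is exactly the economic point of the paragraph above.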
If this core citation work pans out, then I think we have the whole next level of PageRank and EdgeRank, and we can generate Transactive Content processes rather than always having to author them through workflows that are very specific, very brittle, and very narrow in scope.
Where I’m trying to go with this is to define Transactive Content in order to get into the Intellectual Capital management business by discovering what is already there (seeing what matters). All of these thoughts are about putting content in context.