Covid Deaths in U.S.: 1,123,466 Get Vaccinated! Stop the War in Ukraine!
We finally had a sun break during one of our many winter rivers of rain. I put on my boots, as there will be lots of puddles and little temporary streams, and grab my ski poles for my Nordic walking.
I look forward to seeing how the gnome homes have survived the snow and rain of the last month.
I stop to check out the ever-changing adornments of the homes. I am delighted to see the sun shining on home #11.
I also spot a new fungus growing on a tree that I missed over the past couple of months.
With spirit refreshed after my jaunt in the woods, I head back home.
One of the joys of where we live is turning a corner and noticing a new view of the Puget Sound. It might be just a change in light or a view I had not paid attention to before. It is clear enough today that I spot a framed view of Seattle and the snow-covered Cascades.
What a joy it is to have a little blue sky and sunshine in the wintertime. It is a special joy to see neighbors out and about enjoying the all too brief winter sunshine.
Covid Deaths in U.S.: 1,109,983 Get Vaccinated! Stop the War in Ukraine!
I am a visual recall person.
I have a very difficult time remembering anything that I only hear.
To compensate, all my life I have taken extensive notes. I write as fast as I can to capture what is being said so that I can SEE what is being said. By seeing what is said I have a chance of remembering the discussion.
Taking Notes is about capturing what is being said to me.
In developing KnowNow, I insisted on having a time stamped notes capability for our meeting notes features. I wanted to be able to directly link my notes through the time stamp to the video and automated transcript.
Greg Buckles is another note taker. He noticed that with the shift to time-stamped meeting notes accompanied by a video and transcript, he had transformed to MAKING notes. Instead of just taking notes so that he could remember, he could let the automated transcript capture the full conversation while he listened and MADE notes on the knowledge relationships being triggered during the meeting. If an important idea came up that he wanted exact quotes for, he could hit the highlight button, and thirty seconds of the video and transcript would be highlighted automatically.
As Greg described his transformation from TAKING notes to MAKING notes, I realized I made the same transformation.
When we added the Meeting Summarization capability, it became easy to capture the MAKING notes and highlights while keeping them in the context of the full transcript of the exact discussion.
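To make the idea concrete, here is a minimal sketch of what time-stamped notes linked to a transcript could look like. The names (`Segment`, `MeetingNotes`, `highlight`) are my own invention for illustration, not KnowNow's actual API; the thirty-second highlight window comes from the description above.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float  # seconds from meeting start
    text: str

@dataclass
class MeetingNotes:
    segments: list                                   # the automated transcript
    notes: list = field(default_factory=list)        # (timestamp, note text) pairs
    highlights: list = field(default_factory=list)   # (start, end) windows

    def take_note(self, t: float, text: str) -> None:
        """A note made at time t links directly into the transcript via its timestamp."""
        self.notes.append((t, text))

    def highlight(self, t: float, window: float = 30.0) -> list:
        """Mark the preceding `window` seconds and return those transcript segments."""
        start = max(0.0, t - window)
        self.highlights.append((start, t))
        return [s.text for s in self.segments if start <= s.start <= t]

# Usage: notes and highlights stay in context of the full transcript
meeting = MeetingNotes(segments=[
    Segment(0, "Welcome everyone."),
    Segment(95, "The key idea is making notes, not taking them."),
    Segment(110, "Automated transcripts capture the full conversation."),
])
meeting.take_note(100, "This connects to Goodnotes + Kindle highlights")
quoted = meeting.highlight(120)  # grabs roughly the last thirty seconds
```

Because each note carries a timestamp, jumping from a note to the exact moment in the video or transcript is a simple lookup rather than a manual search.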
As I looked at my use of Goodnotes in conjunction with my Kindle highlights, I realized I was doing the same thing. The ability to copy text exactly from a Kindle book into Goodnotes was the same as the transcript. Then, using the Apple Pencil, I could MAKE notes on what I was reading. I wasn’t just taking notes that summarized the book text; I was making notes about relationships to what I was reading.
Covid Deaths in U.S.: 1,109,851 Get Vaccinated! Stop the War in Ukraine!
It is time. I have to cull 4,000 of the 5,000 business, technical and non-fiction books that occupy our home.
I have not read a paper book in over six years. Yet, I keep thousands of books to surround myself with millions of words and images.
All of my reading occurs in the Kindle app on my iPad. I now have more than 5,000 Kindle books. One of the benefits is that all of them live on my iPad; they do not occupy a large percentage of the floor space of our house.
In the post “What is a book?” I provided several images of some of the many places we have our books tucked away.
Six months ago the universe sent us a message to clean up our mess when our ground floor flooded due to a burst water heater. We are almost through with our reconstruction.
The upper left image of book shelves space now looks like:
By cleaning our book mess up now, we are saving our children from having to do it in the next few years.
Before bringing all the books back into the house from the ground floor, I start my culling by going through the upstairs book shelves.
Somehow I have to cull 80% of these books to make room for the keeper books from the 3,000 books that are now in storage.
My first box of culls:
After culling five boxes of books and keeping only about 20% of them, the first column of our bookshelf looks like:
Pull the books out. Check for any stray receipts or papers in the books and toss them. Decide whether to cull the book or keep it. If a cull, put the book in a cull box. Once a shelf is cleared, clean off the dust from all the construction of the last six months. Move to the next shelf of books.
Decisions. Decisions. Decisions.
I am sure I have done harder things in my lifetime, but it is hard to part with 80% of my companions through ferry rides and hundreds of thousands of airline miles.
The hardest part of this work is not being able to find anyone to take my lifetime of books. I hate that these books will most likely end up in a landfill.
Teach One – Eric and I do reciprocal teaching, sharing different articles and experiments with each other to see different ways we could use ChatGPT. I like Eric’s test of ChatGPT using the interview questions he asks junior software engineers.
Transform and Augment One – I envision transforming Jupyter notebooks through generative AI, using ChatGPT and GPT-3.5 to generate the code sections of a Jupyter notebook from the English prompts in the narrative sections of the notebook. I even speculate that you could generate artwork, diagrams, and charts from GPT.
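The transformation I envision could look something like the sketch below: walk a notebook's cell list (notebooks are just JSON underneath) and, after each markdown cell flagged with a prompt marker, insert a generated code cell. The `prompt:` marker convention is my own, and `generate_code` is a stand-in for a real call to ChatGPT or GPT-3.5, not an actual API.

```python
def generate_code(prompt: str) -> str:
    # Placeholder for a language-model call (e.g. the OpenAI API).
    # A real implementation would send `prompt` and return generated code.
    return f"# TODO: code generated from prompt: {prompt!r}"

def expand_notebook(nb: dict) -> dict:
    """Insert a generated code cell after each markdown cell that starts with 'prompt:'."""
    new_cells = []
    for cell in nb["cells"]:
        new_cells.append(cell)
        if cell["cell_type"] == "markdown" and cell["source"].startswith("prompt:"):
            prompt = cell["source"][len("prompt:"):].strip()
            new_cells.append({"cell_type": "code",
                              "source": generate_code(prompt),
                              "outputs": []})
    return {**nb, "cells": new_cells}

# Usage: the English narrative drives the code generation
notebook = {"cells": [
    {"cell_type": "markdown", "source": "# Analysis"},
    {"cell_type": "markdown", "source": "prompt: load sales.csv and plot monthly totals"},
]}
expanded = expand_notebook(notebook)
```

The appeal of this shape is that the narrative stays the primary artifact; the code cells become a derived, regenerable layer beneath the English prompts.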
My recent examples of using the SoDoToTAO process are:
Realizing that the Microsoft Graph API would allow the development of the KnowNow search tool across any cloud-based content source with a search API, letting us transform and augment the work of Knowledge Artisans.
Envisioning how ChatGPT and GPT-3.5 could lead to a powerful note taking and note making system.
It took me a couple of tries to find a prompt that would copy what Ulanoff demonstrated so I could “Do One” with ChatGPT. I assume that part of the training for ChatGPT was to ingest my blog posts on “On the Way to Somewhere Else.” For better or worse, this sounds like me and certainly resembles the way I generate personal blog posts.
write a blog post in the style of Skip Walter
Title: The Power of Language
In his book “Metaphors We Live By,” George Lakoff writes about the incredible power of language. He argues that our use of language shapes our perception of the world around us, and in turn, influences our actions and behavior.
For instance, consider the metaphor “time is money.” This metaphor is so deeply ingrained in our culture that we often use it without even realizing it. We talk about “spending” time, “saving” time, and “investing” time, as if time were a tangible commodity that could be traded or used up.
But what does this metaphor imply about our relationship to time? It suggests that time is a limited resource that must be carefully managed, as if it were a valuable asset that can be squandered or wisely invested. This metaphor encourages us to view time as a scarce commodity that must be carefully allocated, rather than as a limitless resource that can be enjoyed and savored.
The power of this metaphor is evident in the way we approach our daily lives. We rush from one task to the next, trying to “save” time and “make the most” of every minute. We measure our success by how efficiently we use our time, and we often feel guilty or lazy if we “waste” time on activities that don’t directly contribute to our goals.
But what if we were to adopt a different metaphor for time? What if we thought of time as a river, flowing steadily and inexorably forward, carrying us along with it? This metaphor would encourage us to view time in a more relaxed and fluid way, as something that can be savored and enjoyed, rather than something that must be managed and controlled.
The point is that our use of language can have a profound effect on how we think and behave. By choosing our words carefully and being mindful of the metaphors we use, we can shape our perception of the world in a way that promotes happiness and well-being. So the next time you find yourself using a metaphor like “time is money,” stop and think about the implications of that metaphor, and consider whether there might be a more helpful way of thinking about the concept. After all, the power of language is not something to be taken lightly.
The precursor to these three experiences was writing myself back into existence. Before each need for SoDoToTAO, I had stopped my daily writing. To get my synthesis and innovation brain moving again, I went back to my habit of daily free writing.
In 2019, writing myself back into existence led to the formation of Factor10x and hiring a great team to build KnowNow.
In 2020, writing led to testing out Grain.co video highlighting and realizing that in combination with GPT-3 with tools like Writesonic, we could create powerful meeting summaries with highlights from Microsoft Teams and Zoom.
Here I am again, going back to writing myself into existence at the same time that ChatGPT is released and seeing a new way of working by combining ChatGPT and an evolution of Jupyter Notebooks. Maybe the good folks at Curvenote will figure out how to make this happen.
Who knows where this SoDoToTAO journey will lead? Or as ChatGPT said in the style of Skip Walter:
“So the next time you find yourself using a metaphor like “time is money,” stop and think about the implications of that metaphor, and consider whether there might be a more helpful way of thinking about the concept. After all, the power of language is not something to be taken lightly.”
The 3D printed Nexus Sculpture captures the unique thing that happens when, through the combination of artistic innovation and mechanization (3D printing), we can create things by computer and CAD that we cannot make in ‘nature’ or through craft. That is my fondest hope for what will happen with the platform of ‘content with context’: in the material world through the revolutionary innovation of distributed 3D printing, and through a content-with-context platform for other forms of content and applications.
That company was ten years in my past and I couldn’t remember the name of the company or the name of the sculpture. So I took a photo of the sculpture and Google lens took me to the artist’s page. Nexus was the name. Creating a nexus of people, process and technology is my life’s work.
“Klein bottles belong to surfaceland. Kleinforms do not. While in surface topology the surface is not permitted to pass through itself, in tubular topology this rule does not hold. Kleinforms are not Klein bottles. In Kleinforms, at least as I invented and named them (Ryan 1971 1974 1993), self-penetration is permitted. It is this self-penetration that enables Kleinforms to map the duration of time that occurs in the process of video feedback, the process that led to formulating the Kleinforms. The continuity within the chamber of the tube is being rendered, not the connectivity possible on the surface of the tube.”
What excited me about Ryan’s relational circuit is using the circuit to compose educational videos. His Nature in New York City video is my favorite example of his video composition technique. I had hoped to use this technique to do video summaries of meetings in our KnowNow product.
Shortly after seeing these images, we visited the Carberry Glass studio of Ann and Morgan Seeley. They had several glass sculptures that reminded me of the fishing floats. The yellow striped version caught my eye as it reminded me of the sun glancing off the floats.
Seeing the floating glass art brings a smile each time I glance at it.
Japanese Zen Garden
The most interactive of my desk art objects is the tiny Zen garden. How it has survived for twenty years on my desk through several moves and four grandchildren is beyond me.
My tiny Zen garden reminds me of the several Zen gardens we visited while on a Japan Study Mission in the late 1980s. My favorite city on the trip was Kyoto. I was thrilled to finally meander on the Philosopher’s Path. As we left the walk we toured Ryoanji Garden. I was captivated by the simplicity and complexity of this calming space.
My small cardboard Zen garden is a constant reminder of the hours contemplating Zen Gardens.
Napoleon’s March and the Rosetta Stone
My desk art objects sit on a shelf in front of the larger images that provide a context for my daily work. After six months of being away from this space, I am about to reenter my home office. I can’t wait.
I can’t wait to stare at Napoleon’s March. This diagram with six variables is perhaps the best visual analytic I’ve ever experienced. Right next to it is the Rosetta Stone reminding me that my job is to translate between my experiences and the many languages of my colleagues and customers.
As I always do, I thought this technology advance would take off immediately. Yet, virtual reality has disappeared from the Gartner Hype Cycle.
To the best of my knowledge, nobody picked up on what I saw as the transformation from web pages (and printed pages) to PLACES being the future of the web. Instead all of the hype has focused on web 3.0, cryptocurrency and blockchain.
“Here’s a list of what’s coming to the Quest platform:
Microsoft Teams immersive meeting experiences for Meta Quest: Connect, share, and collaborate in Teams immersive experiences.
Microsoft Windows 365: Stream the Windows experience on Quest Pro and Quest 2 devices, and access your personalized apps, content, and settings in VR.
Microsoft 365 app experiences: Interact with 2D content from Sharepoint or productivity apps like Word, Excel, PowerPoint and Outlook directly from Quest Pro and Quest 2.
Microsoft Teams/Workrooms integration: Join a Teams meeting from inside Workrooms.
Meta Avatars in Microsoft Teams: Use your Meta Avatar in Teams for whiteboarding, brainstorming, and meetups.
Microsoft Intune and Azure Active Directory support: Enable enterprise security and management on Quest Pro and Quest 2 devices.
“Notably, Windows 365 gives business and enterprise users access to a version of Windows 10 or Windows 11, streaming from Cloud PCs to the user’s web browser.
“The partnership is ostensibly building on Meta’s early steps towards virtual offices with Horizon Workrooms, something Meta says will help make Meta Quest Pro “an enterprise-ready device that’s easy to use, deploy, and manage at scale.”
As I watched many of the announcement videos, I was reminded of my blog post. I was also reminded about the Covid Pandemic and how quickly many of us shifted to remote work. I thought about Oldenburg’s Great Good Place and his articulating the three places of home, work, and a neutral social place (church, Starbucks, clubs …). Then I thought about my experiences over Zoom and Microsoft Teams the last four years.
In an instant, I realized that all three of my “places” had collapsed into my physical home and my digital screens. Being immune compromised most of my activities are now in my home. Work is completely on the digital screen. Most of my social activity except with family and a few friends is done virtually. I’ve attended wakes and family reunions and cocktail hours on Zoom.
What does it mean for my well being when each of my three places consists of my physical home and my digital work and social places?
In the process of doing research for this post I came across Third Space Theory. A brief overview of Third Space Theory is in a previous post (thank you Writesonic and GPT-3). The original third space theory is about cultural identities. However, the first paper I read was about third space theory in education, which talked about three spaces – physical, mental and networked (a hybrid connecting physical and mental). This description resonated as a way of thinking about my collapse during Covid of Oldenburg’s Three Places. It also provided a framework to think about the three places that can emerge from Envelop VR’s original vision of connecting personal virtual spaces.
The Third Space Theory describes the challenges I’ve experienced with the collapse of the three spaces. In addition to Covid, due to a burst water heater in our ground floor, we’ve been restricted to only half of our house for six months. Even our home space collapsed. Now most of my place time is virtual and digital sitting in front of my two computer monitors.
As I delved more deeply into Third Space Theory, I came across Soja’s interpretation described as the trialectics of being.
“The trialectics of Spatiality, Historicality and Sociality (summary terms for the social production of Space, Time and Being-in-the-world) apply at all levels of knowledge formation, from ontology to epistemology, theory building, empirical analysis, and social practice” (p. 71). “There has been a persistent tendency during at least the past century to over-privilege the dynamic relations between the ‘making’ of Historicality and the ‘construction’ of social practices or Sociality” (p. 71). This “pronounced tendency in Western philosophy, science, historiography and social theory to bifocalize on the interactive Historicality and Sociality of Being” is, Soja argues, representative of modernity.
There were too many obtuse concepts in these papers and books on Third Space Theory. As I dug some more, I came across a diagram I liked better:
“Firstspace epistemology” focuses on analyzing and interpreting the text of physical objects in perceived space. Firstspace epistemology holds that spatial practice and material geographies are the result of history and sociality. “Secondspace epistemology” focuses on obtaining ideas from the conceived space and projecting them onto the world of experience. This type of epistemology breeds a wide range of philosophical hegemonies and utopian thinking. “Thirdspace epistemology” focuses on making endless sympathetic deconstructions and heuristic reconstructions of perceived space and conceived space by injecting new possibilities into the “lived space”. In this way, the heterotopia will spiral and result in an infinite approach to the state of utopia. In Thirdspace epistemology, real space has both subjectivity and objectivity, abstraction and concretization, reality and imagination, known and unknown, repetition and difference, construction and deconstruction, thought and body, consciousness and unconsciousness, singularity and diversity, daily life and endless history.”
The Firstspace comes in through our five senses of perception. The Secondspace is my mental construct of what I am perceiving. I interpret the Thirdspace to be the physical world that I move around in. As I reflected on this model, I believed that it was a better framework for the collapse of my three Oldenburg places. I am living most of my life at the moment in First and Second Space.
Is this willing suspension of disbelief what I am doing every day in my Microsoft Team meetings? Is this how I cope with less time being spent in the physical lived world and more time spent in the digital lived world?
To get away from these places and spaces thoughts, I opened up an article about the Hidden Barn distillery in Kentucky – “This Bold New Distillery is Rewriting the Rules of Bourbon.” As I perceived the people and place through the written word with a few photos, I could conceive of the place. I immediately started searching for everything I could find about Hidden Barn bourbon. I tried to order a bottle so that I could have a better sensory experience of their product. Alas, nobody will ship Hidden Barn bourbon to Washington State. I’ll just have to travel to the actual place. For the moment I will enjoy my conception of this distillery.
I was gobsmacked. I had just read a “page” that transported me to a conceptual place that I had never been before. I realized that I had been conceiving of “pages and places” as an either/or. Instead, “pages and places” are a both/and. Pages and places are complementary. I remember all the times we travelled, both pre-Google and post-mobile Google. In the good old days we had physical guide books (pages) that we referenced as we were experiencing a place. Today, we pull out our cell phone and search for specific information about a place. Or even better, using Google Lens we take a photo of the place and then have Google give us specific pages about our location. Tomorrow, augmented reality glasses will provide us the information just by our looking at the place.
As I write myself back into existence and contemplate what is next for my third act of a long and mostly happy life, I now know it will involve exploration of pages, places, and spaces and the relationship of the physical, digital, and virtual worlds. This exploration will involve lots of experimenting with the new worlds of my work tools (Microsoft 365) embedded in the virtual worlds of Meta.
Covid Deaths in U.S.: 1,107,794 Get Vaccinated! Stop the War in Ukraine!
Mark Twain once said, “I didn’t have time to write you a short letter, so I wrote you a long one.”
Several times over the last twenty years, I needed to write myself into existence. When I remember to start writing again, I find that I generate insights and connections that I otherwise missed. When I get the process moving, there is a natural flow to what and how I write.
I start out with free writing where I just let the random thoughts flow. Some of these thoughts are related to what I am working on or questions that colleagues have asked. I will select some of these thoughts to write a bit more formally in an email. For those emails that turn into a conversation, I will then generate an internal company blog post. As the ideas solidify through many conversations and white board sessions, I will then do an external blog post either to my personal or company blog. Depending on the reception and the conversations that the public post elicits, I will then generate a white paper.
At each stage of this process, I am more explicit about the organization of the content. Yet, the more important part of the evolution is becoming more conscious of the audience. The audience for my free writing is myself. The email audience is one or a couple of people. The internal company blog post is for my colleagues. The external posts are for a general audience which is often still vague. The white paper is for a specific external audience that I want to influence in some way. [NOTE: The TAI Group provides a methodology and coaching for how to identify and communicate with your target audience through their “Who’s There?” Platform.]
The evolution of the writing gets more explicit about the focus of the writing, the organization of the content, and the audience.
The content evolution in Knowledge Management (KM) systems follows a similar process. As you might imagine, the real cost of a KM or a Personal Knowledge Management (PKM) system is the maintenance and evolution of the original thoughts and experiences.
What follows is the fourth stage of writing about my scattered thoughts on KM. These notes went from free writing to an email to an internal blog post to now a public (on my personal website) post.
In keeping with my personal WHY of creating order out of disorder, I am often asked by colleagues and folks that I mentor to provide a starting point for learning about a topic. Jason Velasco has asked me a couple of times to provide my thoughts on Knowledge Management in general and as a way to help his company with client consulting on KM. I’ve provided him with several articles and have described several times in Zoom meetings many of my thoughts (and rants) on KM. He asked me to take another shot at summarizing my thoughts.
In addition, one of the driving forces behind KnowNow is to create the best Personal Knowledge Management (PKM) system for knowledge workers.
I have shared many times that my basic problem with the term “knowledge management” is that it is a mis-categorization if you go to the definitions of what Knowledge is – biologically and philosophically.
On the philosophical side, I like Ackoff’s definitions, which I have written about and even been quoted on in Too Big to Know by David Weinberger (WUKID Extended) [NOTE: This blog post references a previous post which has additional references and information]:
Since I met Russ Ackoff thirty years ago and learned about WUKID, I strive to advance from data to wisdom in my learning journeys. The journey to WUKID is a driver for my SoDoToTAO process. Russ gave me the practical definitions of the stages of learning:
Data – the raw stuff of the world. It could be a temperature reading 67 degrees or the price of a book or any of the raw things that we encounter each day.
Information – provides structure for data. A weather report puts the temperature (data) in context. The outside air temperature in Seattle, WA on July 10, 2007 was 67 degrees at 2 PM and the sun is shining. Each of the components of the previous sentence is data put together to form a glob of information.
Knowledge – is actionable information. Given the above weather information string I would know that it is going to be a nice day but cool for that time of year, so I would carry a light sweater or jacket if I were to go outside.
Understanding – is seeing patterns in knowledge and information. If the above weather string were combined with 20-30 days of similar strings of information and I had lived in Seattle for 10 or more years, I would be able to see a pattern of it being a cool summer. Understanding has a longer time component than information and knowledge. Understanding incorporates double loop learning as described in Schön’s The Reflective Practitioner.
Wisdom – is going beyond the levels of understanding to see larger-scale systems and to predict behaviors and outcomes in the longer-term future (5-15 years) based on the patterns that arise through understanding. When lots of data over many years was refined into information, knowledge and understanding patterns, scientists were able to see long-term weather patterns like El Niño and La Niña. Based on these patterns, weather forecasters can predict longer-term trends in Seattle and I can act accordingly.
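The first few rungs of the WUKID ladder can be rendered in a few lines of code using the weather example above. This is an illustrative toy, not a canonical encoding of Ackoff's definitions; the thresholds and rules are my own choices.

```python
# Data: raw readings (degrees F), one per summer day in Seattle
data = [67, 64, 66, 63, 65, 62, 66, 64, 63, 65]

# Information: data given structure and context
information = [{"city": "Seattle", "month": "July", "temp_f": t} for t in data]

# Knowledge: actionable information -- what do I do with a single reading?
def what_to_wear(reading: dict) -> str:
    # Toy rule: below 70F in summer, carry a light sweater
    return "light sweater" if reading["temp_f"] < 70 else "t-shirt"

# Understanding: a pattern seen across many readings over time
avg = sum(r["temp_f"] for r in information) / len(information)
pattern = "cool summer" if avg < 70 else "warm summer"
```

Wisdom would sit a level above this sketch: combining many such seasonal patterns over years to anticipate long-term trends, which is exactly the El Niño / La Niña example in the text.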
I loved it when the Gapingvoid artists drew their image of WUKID and then added Impact:
What I love about these images from Gapingvoid is the text they provide to share the context behind the image:
“Semiotics is the study of the signs and symbols that imbue meaning and meaningful communication. It includes the study of signs and sign processes, indication, designation, likeness, analogy, allegory, metonymy, metaphor, symbolism, signification, and communication.
“Semiotics has been recognized as having important anthropological and sociological implications by many of the greatest innovators behind the respective disciplines. Umberto Eco proposed that every cultural phenomenon may be studied as communication; Clifford Geertz, arguably the 20th Century’s most respected anthropologist, once wrote that ‘culture is the culmination of symbols that transmit behavior’.
“It is with these thoughts in mind that Gapingvoid started to study how to impact organizational outcomes by informing culture, mindset, beliefs, and behaviors through semiotic tools.
“We have consistently witnessed and measured how people’s perceptions can be shifted through meaning attached to objects. Back in 2005, we popularized the term ‘social object’ with semiotics in mind: the idea that objects spread ideas faster and more effectively than nearly any other medium. We see daily examples in internet memes and Red Baseball Caps, which have come to represent an entire political ideology.”
What I like about the cartoon is adding the “so what” piece to WUKID – so what does KM do to create an impact? Their cartoon illuminates outcome-based thinking as well.
On the biology side we have the work of Orna and Lakoff and Johnson. These authors look at how external information crosses the barrier to become a part of our biology.
Elizabeth Orna in Making Knowledge Visible: Communicating Knowledge Through Information Products describes the process whereby information is transformed into knowledge and vice versa. She claims that information lives on the outside, where it becomes visible and available to others and can feed their knowledge. But knowledge is not something that I can give to someone else, because information has to be transformed into something that lives only in a human mind. We are constantly and generally invisibly transforming information into knowledge and back into information for others to consume.
Lakoff and Johnson in Metaphors We Live By and Philosophy in the Flesh go further in their distinctions between information and knowledge claiming that knowledge only results when we have a physical body that can sense and act in the world.
Orna’s image of the cycle between information and knowledge (see below) – information is outside the skin of an individual and knowledge resides inside the skin.
If you pay attention to these biological definitions, then the key bit that is missing for a computer to be able to have knowledge reside within is to combine the advances in AI and ML with sensors and effectors (the robotics and IoT world). We are getting much closer with these technologies, but the key bit is that the computer has to have the equivalent of our five senses (sight, smell, touch, hearing, taste) AND be able to operate on the world (effectors) to interact with another human’s five senses.
We are getting close with all of the advances in AI/ML and robotics on very complex human tasks like driving a car. But we are not there yet. The conversational advances with tools like Alexa and ChatGPT are another key piece, along with explainable AI and ML.
“Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.”
“The relative ease of fooling these neural networks is worrying. Beyond this lies a more systemic problem: trusting a neural network.
The best example of this is in the medical domain. Say you are building a neural network (or any black-box model) to help predict heart disease given a patient’s records.
When you train and test your model, you get a good accuracy and a convincing positive predictive value. You bring it to a clinician and they agree it seems to be a powerful model.
But they will be hesitant to use it because you (or the model) cannot answer the simple question: “Why did you predict this person as more likely to develop heart disease?”
This lack of transparency is a problem for the clinician who wants to understand the way the model works to help them improve their service. It is also a problem for the patient who wants a concrete reason for this prediction.
Ethically, is it right to tell a patient that they have a higher probability of a disease if your only reasoning is that “the black-box told me so”? Health care is as much about science as it is about empathy for the patient.”
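As a contrast to the black-box problem described in the quote, here is a toy example of a transparent model whose prediction can be decomposed feature by feature, so the "why did you predict this?" question has an answer. The weights and features are invented for illustration and are not a real clinical model.

```python
# Invented toy weights for a linear risk score (not real clinical values)
weights = {"age": 0.03, "cholesterol": 0.01, "smoker": 0.8}

def predict_with_explanation(patient: dict):
    """Return a risk score plus each feature's contribution to it."""
    contributions = {k: weights[k] * patient[k] for k in weights}
    score = sum(contributions.values())
    # Sort so the clinician sees the biggest drivers of the score first
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return score, ranked

patient = {"age": 60, "cholesterol": 240, "smoker": 1}
score, why = predict_with_explanation(patient)
# `why` answers "why this prediction?": each feature's share of the score
```

Explainable AI methods for genuine black-box models (e.g. feature-attribution techniques) aim to recover exactly this kind of per-feature story after the fact, rather than having it built in as a linear model does.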
From a definitional standpoint, my assertion is that machines are still not capable of creating, storing or producing knowledge – yet. We are getting close, but not there yet. Yet the field of Knowledge Management has been around for at least 40 years. So what are they talking about? My assertion is that we are still talking about Information Management, not Knowledge Management. Through things like the web, Google search, and YouTube, an incredible amount of KNOW HOW can be shared. But the user still has to verify it, test it, and internalize it before that information on the web becomes knowledge – their own Know How, internal to the human.
The classic one-line definition of Knowledge Management was offered up by Tom Davenport early on (Davenport, 1994): “Knowledge Management is the process of capturing, distributing, and effectively using knowledge.” Probably no better or more succinct single-line definition has appeared since.
A new definition of KM
A few years after the Davenport definition, the Gartner Group created another definition of KM, which has become the most frequently cited one (Duhon, 1998), and it is given below:
“Knowledge management is a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise’s information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers.”
The one real lacuna of this definition is that it, too, is specifically limited to an organization’s own information and knowledge assets. KM as conceived now, and this expansion arrived early on, includes relevant information assets from wherever relevant. Note, however, the breadth implied for KM by calling it a “discipline.”
Both definitions share a very organizational and corporate orientation. KM, historically at least, was primarily about managing the knowledge of and in organizations. Rather quickly, however, the concept of KM became much broader than that.
A graphic map of Knowledge Management
Probably the best graphic for setting forth what constitutes KM is the one developed by IBM for the use of their own KM consultants. It is based upon the distinction between collecting stuff (content) and connecting people.
I like the notion of the 4Hs:
Or translated into more IT Terms:
Communities of Practice
I like the stages of evolution of KM within an organization:
Stage 1 of KM: Information Technology
Stage 2 of KM: HR and Corporate Culture
Stage 3 of KM: Taxonomy and Content Management
At least in this article they recognize several issues:
Tacit knowledge (what is in the individual)
Knowledge Retention and Retirees (I would also add onboarding)
Scope of KM has greatly increased
What I like about this article is that it captures the range of what is talked about as KM – although in my estimation we are still talking about Information Management.
Challenges with implementing KM
A couple of examples of KM systems that I’ve built or managed are Attenex Structure and Conga’s Contract Lifecycle Management system.
The purpose of Attenex Structure was to provide a clause database for transactional attorneys so that they could quickly construct transactional documents like NDAs or software licenses from the best practices of a law firm or a corporation (Microsoft, 3M, Fair Isaac). The system was a KM system that had all of the components of the 4Hs and the equivalent of my presentation on CLM.
Even though CLM is a specific form of KM, I hope that you can see the relationships with the more general article on KM and the 4Hs along with their stages of implementation.
What is not obvious in looking at these definitions, capability maturity models and stages of evolution is some of the challenges that are faced.
Challenges to Creating Knowledge Management Systems
The following is a partial list of the challenges we faced in both developing and implementing these systems. I will use the 4Hs as a framework for cataloging some of the challenges:
This is probably the single biggest challenge for an existing organization. There are already terabytes of information that need to be sifted through to find examples of knowledge.
One of the many reasons I selected both Structure and Patterns at Attenex is that I thought Patterns would be a great way to Harvest the information from an existing organization’s mountains of contracts. It is also why I proposed to Conga that they buy Ringtail to easily do this Harvest step.
Without an eDiscovery-like enterprise search capability, the very first step fails. Advanced eDiscovery (the E5 MSFT 365 license) and KnowNow can do this step relatively easily today.
Which then leads to the second big challenge which is the existing examples are all specific to a particular need. They are not general nuggets of knowledge. They are only a specific instance.
If we take MSFT or Amazon AWS as an example, they each have 100s of thousands of examples of proposed and negotiated software licenses. But they are all specific.
To turn the specific examples into general templates takes 100s to 1000s of hours of your highest caliber legal talent to find the general cases of terms (you need agreed-to definitions of the legal phrases – not an easy process with a group of lawyers), sentences, clauses, and legal form collections of clauses. Then you have to assume that many of the clauses will be negotiated by the other party. So these clauses have to have pre-defined fallback clauses in case the other party pushes back. And on and on in an infinite regression to get to shared content that the rest of your organization will actually use.
Which then leads to the third challenge – users and lawyers are still going to want to customize the standard content or clauses. So a part of the harvesting process is to create knowledge that can be used as a starting point AND then capture how AND WHY that content was changed for a specific need. Then this content has to be recycled – the actual change and the reason why – back to the Content Keepers to decide whether to update the knowledge base nuggets. This process is not unlike the two pieces of Wikipedia – what the user sees and then the edit and discussion history behind all changes to an existing page. Example: Ackoff Wikipedia Page and Ackoff Page View History.
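This recycle loop – the change plus the reason why – can be modeled as a simple per-clause revision history, much like Wikipedia's edit history behind each page. Here is a minimal sketch; all class and field names are hypothetical illustrations, not any actual KM product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    """One edit to a clause: what changed and, crucially, why."""
    text: str
    author: str
    reason: str  # the WHY, captured at edit time
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Clause:
    """A knowledge nugget plus the full history behind it."""
    clause_id: str
    history: list = field(default_factory=list)

    def current(self) -> str:
        # what the user sees (the Wikipedia "article" view)
        return self.history[-1].text

    def revise(self, text: str, author: str, reason: str) -> None:
        # refuse edits that record no rationale -- the reason is what
        # lets the Content Keepers decide whether to fold the change
        # back into the shared knowledge base
        if not reason.strip():
            raise ValueError("every revision must record why it was made")
        self.history.append(Revision(text, author, reason))

nda = Clause("confidentiality-std")
nda.revise("Standard 2-year confidentiality term.", "senior.partner", "initial template")
nda.revise("3-year confidentiality term.", "associate", "counterparty required longer term")
print(nda.current())     # the text users see
print(len(nda.history))  # the edit-and-reason trail behind it
```

The design point is that the rationale is mandatory at edit time; collected after the fact, the WHY is almost always lost.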
The assumption that many developers of KM systems have is that once the knowledge is harvested then it is a good way to find the people that are knowledgeable about a given topic.
The first challenge with Harnessing is to keep not just the knowledge examples that were found in the harvesting, but also the people who created the knowledge nuggets in the first place, along with their imprimatur. When I access a clause from the databank, I will want to know whether it was created by a senior partner in that specific practice (IP, Litigation, Trusts and Wills, Patents…) or whether it was generated by an associate. Further, I would like to know if that clause has been used in any specific contracts, and then be able to look at the full contracts to see the context of the clause usage. This need leads to many challenges, like whether there are ethical walls and whether the person trying to use the information has the permissions to look at given client examples. Just keeping the ethical walls and client access up to date is a huge challenge in and of itself.
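These provenance and access questions can be made concrete with a small sketch: each clause carries who created it and where it was used, and every lookup is filtered through an ethical-wall check. All names here are hypothetical, assumed for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class ClauseRecord:
    text: str
    author: str
    author_role: str      # e.g. "senior partner" vs. "associate"
    practice_area: str    # e.g. "IP", "Litigation"
    used_in: list = field(default_factory=list)  # client matter ids

def visible_matters(record: ClauseRecord, user_walls: set) -> list:
    """Return only the example contracts this user may open.

    user_walls is the set of client matters the user is walled off
    from; keeping that set current is itself the hard problem."""
    return [m for m in record.used_in if m not in user_walls]

clause = ClauseRecord(
    text="Licensee shall not reverse engineer the Software.",
    author="j.smith",
    author_role="senior partner",
    practice_area="IP",
    used_in=["acme-2021-license", "globex-2022-nda"],
)
# a user walled off from the Acme matter sees only the Globex example
print(visible_matters(clause, user_walls={"acme-2021-license"}))
```

Note that the imprimatur (`author_role`) travels with the clause itself, so the answer to "who stands behind this?" never has to be reconstructed later.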
The second challenge is that at some point you need to be able to capture the expertise of the person who created the knowledge nugget. Whether that comes from a resume or an internal bio, those resumes are often inflated. More junior people typically claim that they are competent in every aspect of the law. What you really want to know is not whether the knowledge creator claims to be an expert; you want to find some evidence of their claimed expertise. Which leads us back to the Harvesting challenges: not only do you want to find examples, but you want to find out how many examples per person in your enterprise were found. This is similar to trying to judge the expertise of a medical provider. Each time I have had a surgery, I desperately want to know how many times this particular surgeon has performed this operation, on what types of people (my sex and age), and what the outcomes were. As it turns out, this information is actually available (through Medicare and Medicaid reports); it just isn’t available to me as a health care consumer.
The third challenge is can I actually access the expertise of the person I seek. Is the person available and accessible? How much will it cost? How will the provider of the knowledge be compensated?
The fourth challenge is what kind of knowledge am I really looking for? Am I an expert and I just need the specific text of a clause? Or am I a novice and I actually need many hours of tutelage and consultation – that is, I am actually looking for training or mentoring?
The fifth challenge is does the person who is being sought out for knowledge keep a record of their requests so that they can feed that back into the system to create demand for formalizing additional topics and nuggets into the knowledge base?
Most KM systems are built to retrieve specific things. Wikipedia is a good example. I am searching for something, a Wikipedia entry shows up, and I usually go to it to get a starting point. But I would never think of browsing Wikipedia – there is just too much stuff. Similarly with Amazon: I go there to look for specific books, and I love some of the recommendations, but even after 27 years, Amazon is not a replacement for browsing in a library. In a library I can go to the section of books that is close to what I am interested in and then browse for interesting titles.
The first challenge is how do you make it easy to browse?
The second challenge is how do you provide serendipity without angering the user? Sean McNee, in his PhD dissertation on recommender systems for computer science articles, studied the art of list making when returning search results to the user. What he found is that the first 2-3 results had to be items that the user KNEW would be directly relevant to their search request. This was the user’s way of “trusting” the recommended results. Then, at about the third entry, the recommender system could insert something relatively novel or serendipitous to the search request. Depending on the particular search system, it would track whether the user selected only the items that directly met the search request or the serendipitous ones as well. If serendipitous, then the system could put a few more into the next requests.
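McNee’s list-making finding – lead with known-relevant items to earn trust, then slip a serendipitous item in around the third slot – can be sketched as a simple interleaving. The slot positions here are tunable assumptions for illustration, not his exact parameters:

```python
def build_result_list(relevant, serendipitous, trust_slots=2, max_results=10):
    """Interleave results: the first trust_slots items are directly
    relevant (earning the user's trust), then one serendipitous item
    is inserted, then the remaining relevant results follow."""
    results = list(relevant[:trust_slots])
    if serendipitous:
        results.append(serendipitous[0])  # the ~3rd-slot surprise
    results.extend(relevant[trust_slots:])
    return results[:max_results]

hits = ["known A", "known B", "known C", "known D"]
novel = ["oblique find"]
print(build_result_list(hits, novel))
# ['known A', 'known B', 'oblique find', 'known C', 'known D']
```

A fuller system would also track whether the user clicks the serendipitous entries and, if so, raise the number inserted on subsequent requests, as McNee’s tracking suggests.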
The third challenge stems from how to help the user think outside the box through “oblique knowledge.” This article by Philippe Baumard provides a philosophical discussion of the types of knowledge.
The fourth challenge comes back to the reward system for pursuing serendipity. For a period of time, DuPont, in conjunction with Charley Krone’s consulting, changed their compensation system to reward executives and managers for pursuing learning activities rather than just repeating tried and true practices. The reward was for the pursuit, not for whether it actually resulted in an innovation. Google is relatively famous for its 20% time at the discretion of the employee. The key part of this challenge is how this learning makes its way back into the knowledge base. It is hard enough to get people to write up their successes. It is nearly impossible to get people to write up their failures. Capturing failed results was one of the goals of the scientific Journal of Irreproducible Results. It is also the intent behind creating systems to track and analyze lost order reports.
Over the years many attempts were made to create “spaces” for creating new things. The Chautauqua groups were attempts to bring people together starting in the 1800s. Large events like CES are attempts at bringing a whole industry together to create places to understand where an industry is today, along with seminars on where the industry is going in the next couple of years. LegalTech is a small example of this. With the advent of Covid, these shows have had to move to virtual formats. The hardest thing for these new virtual show systems to deal with is the networking component (both formal and informal) of a trade show.
The first challenge is just to get the right people into a space.
The second challenge is to record the conversations so that they can become early warning systems of trends.
The third challenge is to capture who is meeting with whom and construct the social networks that matter.
The fourth challenge is separating out what is immediately useful from what is useless to what might be useful in the future.
The fifth and perhaps biggest challenge is how do the attendees bring back their experience of the industry “knowledge space” to their colleagues.
Late Breaking News
Last week OpenAI announced the availability of ChatGPT. I suspect that 1000s of person hours were lost this week exploring this very engaging technology. One of the potential uses of ChatGPT is to replace Google search. Instead of your search request generating 1000s of URLs to wade through, ChatGPT provides a narrative summary of the information:
“In layperson’s terms, ChatGPT is a lot more human than prior search engines, albeit with a supercomputer’s wealth of data — think Scarlett Johansson in “Her.” For instance, users who Google “what is the maximum dosage of vitamin D per day” simply received a link to HeathLine.com. However, when they posed the same question to the AI, it formulated an in-depth dissertation, the Times of London reported.”
I asked ChatGPT to define “knowledge management”:
“Knowledge management is the process of creating, sharing, using and managing the knowledge and information of an organization. It involves identifying, capturing, evaluating, and distributing the knowledge and information that is generated within an organization, as well as incorporating external sources of information. The goal of knowledge management is to enhance an organization’s ability to learn, innovate, and adapt to change. This can be achieved through various means, such as creating a knowledge-sharing culture, implementing knowledge management systems and technologies, and encouraging collaboration and communication within the organization.”
Asking ChatGPT was a lot easier than writing this blog post. I ask my audience which is more useful.
I don’t see any evidence that those systems that purport to be about Knowledge Management are actually dealing with knowledge. I assert that they are still Information Management Systems.
Most of the activity I see that purports to be about KM only deals with the Harvest Quadrant. As Greg Buckles pointed out, there are many software systems that have been developed for the other quadrants, but they don’t seem to be successful (particularly for expertise location).
KnowNow has aspirations to be a platform that can support all of the 4Hs. But right now we are mostly in the Harvest quadrant. Assuming we are successful I am very interested in using XR (VR and AR) to go after both tacit knowledge and the other three quadrants (Harness, Hunting, and Hypothesize).
Covid Deaths in U.S.: 1,107,794 Get Vaccinated!Stop the War in Ukraine!
Six weeks ago we had a miserable experience at the Virginia Mason (VM) Emergency Department (ED) in Seattle, WA. Today was much different – fortunately.
My wife was experiencing abdominal pain. She went to the Bainbridge Island same day clinic and they referred her to the VM ED in Seattle to get a CT scan. They thought she might have appendicitis.
We registered at the front desk at noon and described the symptoms. Somehow a series of misunderstandings started as they recorded that she was constipated. The misunderstanding led to a long unnecessary wait.
The ED waiting area at VM is a miserably small room that is poorly ventilated. Even though we were masked I was quite worried that Covid was alive and well and that we would be exposed.
After waiting mostly patiently for seven hours, I’d had enough and started complaining. The response from the ED personnel was obnoxious to say the least. “Don’t you know that the health care system is in crisis? Welcome to Health Care 2022. We are short staffed. Deal with it.”
It took a lot of restraint to keep from letting loose a torrent of foul language. But my better angel calmed me down and I just stared.
The complaint was enough to get the ball rolling and within thirty minutes we were admitted to the ED. She was fine with just having a bed in the hallway instead of a private room. She was clear with the staff that anything to be seen sooner was better than waiting. Twenty minutes later my wife had her CT Scan.
The good news is that the CT Scan showed a problem that could be solved with antibiotics. Within another thirty minutes we were on our way back home. We were exhausted, hungry and angry at “Health Care 2022.”
We followed up the next day at our local VM Clinic. The good news was my wife’s abdominal pains were already subsiding with the antibiotics. The bad news was that the CT Scan showed that there was a growth on an internal organ. Even though there was bad news, modern medical technology discovered a problem before it became a health crisis.
An appointment was arranged with a general surgeon at VM. From this moment on, we had a completely different experience of VM in Seattle. With an appointment and a formal referral, rather than an “emergency”, the health care providers acted with efficiency, effectiveness and caring.
Surgery was scheduled for mid-December and then rescheduled for early December. Each employee that we interacted with was polite and caring. The surgeon was terrific and sat with us for a full thirty minutes answering all of our questions.
Today was the surgery date. While we had an offer to take the earliest appointment, getting to Seattle on a ferry is difficult to manage at 6:00AM. So we took the noon pre op check in time.
At the appointed time my wife was taken to preop and I was given a patient number to track her progress through the surgery stages.
The surgeon called after the surgery and let me know that things had gone well and what would happen next. An hour later the recovery nurse called me and gave me an update on what they were doing. In the meantime, they had moved my wife to a consolidated recovery area in a different building for all the late in the day cases. I wandered over to the new waiting area and waited for discharge to occur.
I am deeply grateful to all of the health care professionals who took care of my wife and made sure that I was kept up to date on what was happening. Even with the Covid restrictions and not being able to be with my wife during recovery, I still felt a part of the process. This long day was how I would expect the health care system to work.
We were at the same hospital in Seattle, but had vastly different experiences. I sure prefer our experience with a scheduled visit rather than the ED emergency experience six weeks ago.
I hope that all of our future interactions with Health Care 2022/2023 will be scheduled.
Covid Deaths in U.S.: 1,106,860 Get Vaccinated!Stop the War in Ukraine!
David Robinson asked me “it seems like you have hundreds of frameworks or models that you bring to our discussions. What is your core process that is underneath your mastery of all of the models you use?”
The earliest experience I could remember using the full model was the summer of 1966 when I worked construction in Hendersonville, NC. I was hired by Godley Construction company to help build a Baptist Church. One of our tasks was to tie the reinforcing steel components for the concrete lintels above what would be 24 large stained glass windows.
Tying the steel for the first lintel took the two of us three full days of backbreaking work. The construction supervisor showed us how to do it (See One) and then the two of us did it (Do One). As we started on the second one, I knew there had to be a better way. Part of the better way was sitting while we did the work. I did not realize that sitting on a construction site was something one never did. After the construction foreman chewed me out for fifteen minutes, I showed him how much more productive we were. He thought for a few minutes and then said, “OK, I will let you do it your way. But if the big boss from Charlotte, NC, comes up to observe, I want you to go back to doing it my way.”
My colleague and I settled in, and after continuous improvement to our new process we were producing the reinforced steel lintels at 2.5 per day instead of one every three days. I was really excited that I could transform and augment (TAO) a work flow to increase productivity. However, there was an unintended consequence of my ingenuity. We finished much faster than the project plan called for, and there was no more work for me. I was laid off two weeks before the end of the summer work. Oops.
Now that I had one example, I remembered that my experience sounded similar to the story in one of my favorite childhood books from 1959 – Danny Dunn and the Homework Machine. Danny and his two high school friends decided they would use the “professor’s computer” to do their homework. They thought getting the computer to do their homework would be easy. Little did they know that in order for the computer to do their work they first had to understand the homework (See One. Do One.). Then they had to teach the computer how to do it (Teach One).
The Professor leaned his bull fiddle against a chair and came to see the report card. “Top marks,” he said. “Then why so growlsome looking? Are you unhappy because school is over?”
“What?” said Danny. “Do you think I’m crazy?”
Mrs. Dunn folded her arms. “Daniel Dunn,” she said, in her no-nonsense voice. “Out with it! Something’s wrong.”
Danny kicked at the rug with one toe. “Oh…well you know how hard we worked on the computer so we wouldn’t have to do any homework?”
“Well, Miss Arnold just broke the news to us that we’ve been doing homework all along. In fact— we got special honors for doing more and harder homework than any of the other kids in class!”
Mrs. Dunn began to bubble with mirth. Dr. Grimes came out, raising his eyebrows.
Professor Bullfinch coughed, and said, “I wondered how long it would be before you found that out. Naturally, in order to feed information into the computer you had to know it yourselves. And in order to give the machine the proper instructions for solving problems, you had to know how to solve them yourselves. So, of course, you had to do homework—and plenty of it.”
“Why, simply programming a problem is homework,” Dr. Grimes put in.
“Yes. It just never occurred to me before,” Danny confessed. “Gosh, it — it somehow doesn’t seem fair.”
“It wasn’t really fair for you to expect a machine to do all your work for you, was it?” Mrs. Dunn said, gently. “That’s why I suggested to Miss Arnold that she give you high school homework to do.”
Williams, Jay. Danny Dunn and the Homework Machine. Wildside Press LLC. Kindle Edition.
In the process of teaching the computer to do their homework, they did a Transform and Augment One (TAO) as well.
In 1968 I discovered computers and software programming in my freshman year at Duke University. In parallel to my class work, I took a part time job in a psychophysiology lab at the Duke Medical Center studying the effects of different tranquilizing drugs on heart rate. My job was to take the paper records of heart rate from the Grass Polygraph and manually score the beat by beat heart rate during the many 90 second trial periods of the experiment for each of fifty subjects. This was dull mindless work once I figured out how to do the scoring (See One. Do One.).
The hidden benefit of taking this job was that they were getting a PDP-12 computer to augment and automate this manual work. Given my interest in programming I was selected to go to Digital Equipment Corporation’s Headquarters to learn all about the PDP-12. Once we took delivery of the computer, I quickly programmed the computer to run the experiments AND to analyze the results in real time. I no longer had to do the hard work of manually scoring the heart rate beat by beat. I had done my first digital transformation by programming the computer (Teach One) to do what I had mastered. In the process, we did a Transform and Augment One as well for conducting the entire experiment.
I had found my core process that was uniquely mine – SoDoToTAO – See One Do One Teach One Transform and Augment One.
This SoDoToTAO is so much a part of me that I do it now without thinking about it.
In 2014, I was asked by Gifford Booth of the TAI Group to see if I could design an automation of their executive coaching practice. They did excellent work in coaching executives of Fortune 1000 companies on how to communicate and lead more effectively. They did such a good job that their executive clients asked them to scale their process for all the managers in their companies. However, the corporations were not willing to pay executive coaching rates for the thousands of folks who needed the training. The clients asked TAI to figure out how to automate the coaching so that it could be scaled.
I suggested that if I could spend a week Seeing One and Doing One then I could put an automation plan together. During our first week together I experienced the coaching process myself for their introductory course – Power and Presence. I was amazed at how much better a presenter I became after just an hour of coaching. I then observed five different coaches with 20 different client professionals. I realized that TAI had codified their practices in order to train their thirty different executive coaches in the US and Europe. They shared with me their 25 different categories of communication dysfunction and the 20 interventions to improve the dysfunctions.
I took every presentation opportunity to practice what TAI taught me. I captured the doing of the work in the blog post “Wake Up!”. Yet, I realized that in order to get to the Transform and Augment One phase I needed to teach the method. With friends and colleagues who volunteered for the training, I then did several one day sessions teaching what I had learned from TAI. I gave all the caveats that I was not a trained instructor and that if they wanted the full benefit, they needed to attend a TAI seminar. However, teaching the process was exactly what I needed. I got to observe a wide range of professionals with different dysfunctions and select an intervention that would help them improve.
With the SoDoTo experiences, I was now ready for the Transform and Augment One part of the process. I hypothesized that we could do the observing and feedback if we could do 360 video recording of the subjects and then have them see the recordings in a VR headset. We did some quick prototypes but we were about ten years ahead of what the technology could do. It is just now with the Meta Quest Pro headsets that we could implement what we envisioned eight years ago.
More recently, I described the SoDoToTAO that the video highlighting tool, Grain, provided to our user research capabilities (Observe Don’t Ask series Part 5). Now that so much remote work is done through Zoom and Microsoft Teams, and it is easy to record sessions with users, it is easy to provide feedback to the development team. Grain takes a one hour process to create a single video highlight and transforms it into a two minute process. I raced through the SoDoTo process, teaching myself and others how easy Grain was to use. Once we had everybody using Grain, it was easy to Transform and Augment (TAO) our development feedback loops.
While I was delighted to get to my core process and answer David’s question, I realized that there was still a problem. I could not answer what the trigger was for me to start the SoDoToTAO process in the first place. What caused me to See One and then continue with the mastery process? Somewhere inside of me is a selection process for why a given model, or facilitation technique, or new software tool is worth me mastering.
I still have more homework to research and reflect so that I can answer David’s question more completely.