Observe, Don’t Ask. Show, Don’t Tell. Part 5

Day 281 of Self Quarantine      Covid-19 Deaths in U.S.:  311,000   GA Vote!!

In the first blog post in this “Observe, Don’t Ask” series, I shared an “ah hah” moment about the use of video ethnography in the software product development process.

While mentoring Brandon Fleming, CEO of Chimerocyte, a biotech startup, I encouraged him to use video ethnography while he observed how the lab test that he is commercializing is performed today.  He did the video work and is starting to analyze the video through an extension of the AEIOU method of qualitative observational research.

In our coaching session, Brandon asked, “Now that I have this video, how do I translate it into a good user story?  How do I write a powerful agile user story?”  In my “bad Skip” coaching style, I immediately started yelling and swearing.  Not at Brandon, but at the realization that I had never talked or written about the next step after video ethnography: showing the video to the software engineers.  I thought it was obvious.

I reached out to several user research and design research colleagues to understand how I had missed this connection.  Chris Conley (AbundantProfessional), whom I first met 25 years ago at the Institute of Design, responded:

“One of the hardest things to do is to help people make sense of interviews and observation! They think there is some sort of special technique.  They somehow abandon their own pattern recognition skills they apply every day to summarize and frame things.  🙂

“Usually I have them write out the top seven to ten things that stood out for them and then have them find examples from participants that illustrate the point to make sure they have evidence.

“Then, I have them group the top ten things into 3 or so clusters and name them as sub themes.

“Then I ask them to create an overall theme that summarizes the essence of those three areas.

“It always depends on what they are working from, but the idea of making a list of priority issues or needs, making sure you have examples from participant experience and making a little hierarchy of points with theme names seems to get them 80% of the way there.”

While Chris didn’t exactly answer the question I was asking, he provided the answer to the question I should have asked “how do you make sense of your user research?”  Chris jumped right to the insights phase of making meaning.
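
To keep Chris’s structure straight in my own head, I sketched it as a small data model.  The observation, evidence, and theme names below are hypothetical placeholders, not findings from a real study:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Observation:
        summary: str                 # one of the "top seven to ten things that stood out"
        evidence: List[str] = field(default_factory=list)   # examples from participants

    @dataclass
    class Cluster:
        name: str                    # a named sub-theme
        observations: List[Observation] = field(default_factory=list)

    @dataclass
    class Theme:
        name: str                    # the overall theme summarizing the clusters
        clusters: List[Cluster] = field(default_factory=list)

    # Hypothetical example of working bottom-up from an observation to a theme
    obs = Observation(
        summary="Lab techs re-enter the same sample ID in three places",
        evidence=["Participant 2 typed the ID into the instrument, the log sheet, and the LIMS"])
    theme = Theme(
        name="The workflow fights the operator",
        clusters=[Cluster(name="Duplicate data entry", observations=[obs])])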

Project Spaces for insight generation

As Professor David Socha and I worked through a model of the software product development process described in the previous blog posts, I realized that a current consulting engagement gave me a chance to apply our framework.  I wanted to use video as much as I could in this process to see if it makes a difference.  Since I was doing the work pro bono, I felt comfortable using the consulting work to feed our research.

The client project is to build a faceted federated search tool for cloud-based data sources.  The client has an existing product, so the first part of the work is to move that product onto the cloud.

Working with a superb former software engineering colleague, I sat down to plan the project.  His first question was “where are my user stories and personas so I can begin the development?”  As with Brandon, the “bad Skip” erupted in ranting and raving.  My colleague smiled, and we quickly transitioned to laughing.  We realized that it had been two years since we worked together, and we fell back into easy collaboration as if the two years apart didn’t exist.

I realized that I had not shared with my colleague the new framework that David and I are developing.  I painfully realized that I hadn’t done the blindingly obvious: prepare a collection of video highlights so I could show him what was needed rather than tell him.

In a couple of recent blog posts, I described how I am experimenting with video content generation and analytics tools like Zoom, Grain, and Otter.ai.

Now that Grain has made it almost trivial to generate video snippets (highlights), I wanted to break down an extensive demo of the current product into a series of feature highlights.  Each feature highlight video would then become a “feature card,” the starting point for a collection of videos covering both a definition of the feature AND a series of pointers to user research highlight videos illustrating users employing the feature.

[NOTE: While I am describing features of an existing product at this early stage, these feature cards will morph into “outcome cards” when we move into full product development and evolve the existing features for the new platform.]

As I described what I had planned for the user research, I realized that I didn’t have a good container for all of my video highlights.  I checked with the Zoom and Grain folks, along with other user research colleagues, but they were not aware of any standard tools that would do what I wanted.  However, all of them said that they generally use either Excel or PowerPoint to point to where each video highlight is stored, and then use an Excel row or a PowerPoint slide to describe the attributes of the video.

PowerPoint it is.

In discussions with my colleague, we realized that he didn’t understand what I meant by faceted federated search.  I suspect that members of the client team might not understand the definition either.  More importantly, I suspect that neither the client nor my developer had seen a wide range of examples of faceted federated search tools.  I added demo highlights of a couple of faceted federated search tools I am familiar with, X1 and the new Google Pinpoint, to my user research plan.

I came across this definition of faceted federated search:

“The idea is that if the content we are looking for is specific to a domain, then tagging the content with domain specific properties can help people narrow their search quickly.  For example in a corporate environment where a search is targeted at documents that have been written, some obvious facets to look for are: author, key words,  document type (PowerPoint, Excel, Word, Acrobat, HTML, etc), content source (L Drive, Website, Intranet, SharePoint, etc.), and last modified date.

“So what is federated search?  It is the idea that we want another search engine to find results in a separate corpus and return the results to us.  We will then surface the results to our users.”
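
To make that definition concrete for my developer and the client team, here is a minimal sketch of the two ideas in code.  The data sources, documents, and facet values are hypothetical stand-ins, not the client’s actual product or APIs:

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical connectors, one per corpus; each returns documents tagged with facets.
    def search_sharepoint(query):
        return [{"title": "Q3 plan.pptx", "author": "Pat", "doc_type": "PowerPoint",
                 "source": "SharePoint", "modified": "2020-11-02"}]

    def search_intranet(query):
        return [{"title": "Search redesign.docx", "author": "Lee", "doc_type": "Word",
                 "source": "Intranet", "modified": "2020-12-01"}]

    CONNECTORS = [search_sharepoint, search_intranet]

    def federated_search(query):
        # Federated: ask each separate search engine for its results, then merge them.
        with ThreadPoolExecutor() as pool:
            result_sets = list(pool.map(lambda connector: connector(query), CONNECTORS))
        return [doc for results in result_sets for doc in results]

    def apply_facets(documents, **facets):
        # Faceted: narrow the merged results by domain-specific properties.
        return [doc for doc in documents
                if all(doc.get(name) == value for name, value in facets.items())]

    results = federated_search("search redesign")
    narrowed = apply_facets(results, doc_type="Word", source="Intranet")

The facets are just properties hung on the documents; the federation is just fanning the same query out to separate engines and merging whatever comes back.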

I created a set of “feature cards” for the existing products and for a couple of other faceted search tools.  The image below is a slide of a feature card for a representative faceted federated search tool – Google Pinpoint.

In the title of the slide is a pointer to the baseline feature demo video highlight produced through Grain.  The screen image can be either a single screenshot or a capture of where the highlight falls in the overall video of a user interaction.  At the top right are comments to make sure the reviewer of the video pays attention to something specific.  At the bottom right are user research highlights of how real users are working with that feature.  These highlights should include at least one person using the feature as expected, with the others showing users having difficulty with the feature.  As insights are generated, they are added to the “feature card.”
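
Another way to see the same layout is as a small record per feature.  A minimal sketch, with hypothetical URLs and comment text standing in for the real Grain links:

    # One "feature card," mirroring the slide layout above.  All values are hypothetical examples.
    feature_card = {
        "feature": "Facet filter by document type",
        # Slide title: pointer to the baseline feature demo highlight produced through Grain
        "baseline_demo": "https://grain.example.com/highlights/feature-demo",
        # Top right: comments so the reviewer pays attention to something specific
        "reviewer_comments": ["Watch how the result count updates as each facet is applied"],
        # Bottom right: user research highlights -- one user succeeding, the others struggling
        "user_research_highlights": [
            {"url": "https://grain.example.com/highlights/user-1", "note": "uses the feature as expected"},
            {"url": "https://grain.example.com/highlights/user-2", "note": "has difficulty finding the facet"},
        ],
        "insights": [],   # added to the card as insights are generated
    }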

The benefit of a feature card is that the agile team developer doesn’t have to interpret or guess what a user story actually means in practice.  They can see the feature in use by real users.

While it is a bit tedious to create “feature cards,” the ease of creating the highlights removes most of the time sink of working with video.  In a recent discussion with the Grain development team, I asked if they were going to generate transcripts in Word, PDF, or TXT formats.  They shared that it was on their roadmap, and it was delivered as I was writing this blog post.  Through the discussion, I realized what I really wanted was for them to produce a PowerPoint deck with one highlight per slide.  By generating the PowerPoint automatically, I would have the starting point for my feature cards.  The user research task would then be adding the comments and related highlights and organizing the features.
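
In the meantime, it is easy enough to sketch what that automation might look like with the python-pptx library.  The highlight titles and links below are hypothetical, and the slides use the default “Title and Content” layout rather than anything Grain actually produces:

    from pptx import Presentation

    # Hypothetical highlight data; in practice this would come from Grain exports.
    highlights = [
        {"title": "Facet filter by document type", "url": "https://grain.example.com/highlights/1"},
        {"title": "Federated results from SharePoint", "url": "https://grain.example.com/highlights/2"},
    ]

    prs = Presentation()
    layout = prs.slide_layouts[1]            # the default "Title and Content" layout

    for highlight in highlights:
        slide = prs.slides.add_slide(layout)
        slide.shapes.title.text = highlight["title"]
        body = slide.placeholders[1].text_frame
        body.text = highlight["url"]                      # pointer to the stored video highlight
        body.add_paragraph().text = "Comments:"           # filled in during user research
        body.add_paragraph().text = "Related highlights:" # pointers added as research proceeds

    prs.save("feature_cards_starting_point.pptx")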

While I was creating the feature cards, my colleague was doing a proof of concept prototype (the Prototype, Don’t Guess phase).  To show me his progress, he created a quick video (Show, Don’t Tell).

I was excited to share the video with the client team.  However, I realized I needed to explain why the UI in this Proof of Concept (POC) was so purposely ugly.

A key part of our early work toward the Minimum Viable Product (MVP) that will get direct user feedback is doing a Proof of Concept (POC) to see if the functionality we need is really there in the underlying platform.  We want to do this quickly and with semi-throwaway code.  Then we can quickly convert to cleaner, more supportable code for the actual MVP.

I shared that for most of the POC work, and then with the MVP, we will not be focused on making a nice-looking UI.  In practice, the UI will be ugly and at low resolution and fidelity.  There are many reasons for this, but here are a few.

When I was at DEC in the 1980s we had a few very early laser printers.  I was quite excited to use this technology so that everything I did and distributed looked good.  George Metes, a Dartmouth English professor who was head of our documentation group, cured me of this practice.  He shared that if you want people to review your work for content, then find the ugliest line printer to print it on.  Reviewers will know that this is not the final form and will concentrate on your content and the organization of your information, not the form of the presentation.  If you want reviewers to catch typos, then print things out on the laser printer.  Reviewers will assume you are getting close to final and will focus only on quick things, not the important content.

This kind of fidelity choice matters even more today, when we have such good tools for building workable mockups of applications.  I’ve found the same thing occurs if you have a nice-looking user interface.  People focus on much less important things like font type, font size, and the color of the text and graphics.  They will almost always ignore the actual features, functionality, and benefits to the user.  Professor David Socha and I talked about this in our first software design paper:

“When designing software, on the other hand, Dorst describes a different management process:

“If you look at web design, for instance, you would see quite a different pattern. In developing a website or an interactive system for a computer, you work on designs that are easy to replicate, and that will be used by means of the same medium on which they are made. So you have a realistic `prototype’ at almost any moment during the design process. You can do user testing at all times. Designing then changes from a linear process which leads to a prototype, into a process of continuous testing and learning. Design becomes an evolutionary process; you are able to test many generations of the design before delivery.

“Evolutionary development is wonderful: the earlier you can incorporate user knowledge into the design, the better. Unfortunately, in practice it turns out that these evolutionary processes are even harder to manage than `normal’ design projects. How do you decide on the number of generations you will need, for instance? This way of working also has its own pathology, the results of which are all too familiar: the debugging drama. Software designers are tempted to `just make something’ and then to improve that imperfect concept over many generations. But if you begin the evolutionary process at a level which is too detailed, you end up debugging a structurally bad design, ultimately creating a weak and unstable monster.”

“The evolutionary design process described by Dorst also has another challenge: getting the right level of feedback from the client and the user. This contrasts with hard physical product design where significant effort is expended in making a realistic prototype. Because software designs look so usable at an early stage, the users want to jump right into using the design and the result is feedback that is at the myopic level, not at the reflective and systemic level.

“A technique for getting better feedback at this early level is to change the resolution or fidelity of the design. Paul Souza, while at Adobe Corporation, developed a technique of `animating’ pencil sketches. Instead of a polished user interface with a set of actions and data models developed underneath, he would scan a pencil drawing into the computer and assign hot spots to the drawing in order to call a function. With a `polished’ user interface the only kind of feedback he would get would be on the font and the colors and layout of the interface (convergent detailed feedback). With the pencil sketch interface on top of the actions and data model, he would get conceptual feedback about the intent of the tool and how the tool might be used to better the organization’s goals (divergent and generative feedback). Also, by lowering the fidelity of the user interface, he reduced the demand to prematurely start using the design before a robust architecture could be formulated.”
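
Paul Souza’s trick is still easy to reproduce today.  Here is a minimal sketch in Python and Tkinter, assuming a scanned pencil drawing saved as sketch.png and made-up hot spot coordinates:

    import tkinter as tk

    # Stand-ins for whatever functions the prototype would actually call
    def run_search():
        print("User tapped the sketched Search button")

    def open_filters():
        print("User tapped the sketched Filters panel")

    # Hypothetical hot spots: (x1, y1, x2, y2) rectangles drawn over the scanned sketch
    HOT_SPOTS = [
        ((40, 20, 200, 60), run_search),
        ((40, 80, 200, 300), open_filters),
    ]

    root = tk.Tk()
    sketch = tk.PhotoImage(file="sketch.png")     # the scanned pencil drawing (assumed to exist)
    canvas = tk.Canvas(root, width=sketch.width(), height=sketch.height())
    canvas.pack()
    canvas.create_image(0, 0, image=sketch, anchor="nw")

    def on_click(event):
        # Dispatch the click to whichever hot spot contains it; polish is deliberately absent
        for (x1, y1, x2, y2), action in HOT_SPOTS:
            if x1 <= event.x <= x2 and y1 <= event.y <= y2:
                action()

    canvas.bind("<Button-1>", on_click)
    root.mainloop()

Clicks anywhere else do nothing, which keeps the conversation on what the tool is for rather than what it looks like.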

This early stage of the client work is to find our way quickly to a robust architecture on the underlying platform.  We know that the UI changes can be implemented relatively quickly, but they will also be iterative, as Dorst describes above, as we get user research and usability feedback.

By creating relevant video in the “Observe, Don’t Ask” stage and then quickly creating video snippets for “Show, Don’t Tell,” the agile development team can move into the “Prototype, Don’t Guess” phase more productively.  Further, the video highlights are indexed and organized by feature rather than buried in longer videos that no software developer would ever wade through.  Having a rich context for what a feature is supposed to be, plus multiple examples of the feature in use, helps the developer both figure out what is needed and accelerate the process of providing more innovative solutions (better, faster, cheaper).

Observe, Don’t Ask.  Show, Don’t Tell.  Prototype, Don’t Guess.  Act, Don’t Delay.

    • Part 1   Observe, Don’t Ask.  Show, Don’t Tell
    • Part 2   Where does “Observe, Don’t Ask” show up in software product development?
    • Part 3   The OODA Loop
    • Part 4   Orient, Evaluate and Prototype
    • Part 5   Video Highlights for Show, Don’t Tell
    • Part 6   Show the software, don’t try to describe it