Day 276 of Self-Quarantine. COVID-19 Deaths in U.S.: 298,000. GA Vote!!
Several times a year I meet with members of the University of Washington Bothell Computing and Software Systems (CSS) Technical Advisory Board. Due to COVID-19, these meetings are now on Zoom.
I enjoy the in-person meetings primarily for the breaks, when I can talk with other board members about software development at their companies. That informal process doesn’t translate well to a Zoom meeting unless you already know the other participants.
In a previous post (Design of an Experience meets OODA) I described how I am learning to use Zoom and Otter.ai to design better online experiences while also capturing video ethnography artifacts for future research. This process works well when I am hosting the Zoom meeting and using Grain. Until this meeting, I had not figured out how to do similar artifact capture when someone else is hosting the Zoom.
Otter.ai recently announced that its tool can provide real-time transcripts of a meeting. The hidden benefit of this new capability is that you can annotate the Zoom stream in real time. One of the first talks was from Mark Kochanski about what the professors are learning about remote instruction during the pandemic. I was positive Mark was going to share something I needed to remember.
I felt comfortable testing the tool during this meeting because the organizers announced that the meeting was being recorded.
I first tried the highlighting tool. With a simple flick of my computer mouse, I was able to highlight the text that I wanted to note and talk to Professor David Socha about later.
The highlight tool popped up and also suggested several other things I could do, like insert a photo. I quickly tried that.
While it is a little cumbersome, I mostly take screenshots of the people or slides I want to study a little more. The tool adds the photo quickly if it is already in file form.
Next, I added a comment to remind me to chat with David Socha about how Mark’s learning might impact the paper we are working on.
I was pleasantly surprised to see that two apps could use the microphone simultaneously (Zoom and Otter.ai). As I was in listening mode, the real-time text was not that distracting. In fact, the ability to quickly highlight and make comments (notes) meant that I could focus on what was important to me in the conversation.
In Otter.ai, I couldn’t seem to label the speaker in real time. I had to wait until the transcript was fully processed to go back and add the speaker names.
Later in the day I tried a similar real-time captioning capability in Microsoft Teams and found it quite irritating while carrying on a conversation with a colleague, because the captions appeared in the same window as the video.
Slowly but surely, video technology and content analytics (speech-to-text) are enabling the quick creation and annotation of video. Working with video is now as easy as taking notes by hand in a Moleskine notebook used to be. However, now I have the full context of the conversation AND the ability to search it in real time and in the future.
I really like what Otter.ai, Grain.co, and Descript are doing to use the speech-to-text transcript as a way to find and highlight text and video to share with colleagues. I particularly like that I can share what someone had to say in their own voice and with their enthusiasm rather than just dry text.
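The core trick these tools share is that a timestamped transcript lets a text search map back to a span of video. Here is a minimal sketch of the idea, assuming a hypothetical transcript format where each word carries start and end times in seconds (this is not any vendor's actual API, just an illustration of the technique):

```python
# Each transcript entry is a (word, start_seconds, end_seconds) tuple.
# A text search over the words yields a time range you could use to
# clip or highlight the corresponding segment of video.

def find_clip(transcript, phrase):
    """Return (start, end) seconds for the first occurrence of phrase."""
    words = phrase.lower().split()
    # Normalize the transcript words so trailing punctuation doesn't block a match.
    tokens = [w.lower().strip(".,!?") for w, _, _ in transcript]
    for i in range(len(tokens) - len(words) + 1):
        if tokens[i:i + len(words)] == words:
            return (transcript[i][1], transcript[i + len(words) - 1][2])
    return None

# Hypothetical transcript fragment with per-word timings.
transcript = [
    ("Remote", 12.0, 12.4), ("instruction", 12.4, 13.1),
    ("works", 13.1, 13.5), ("best", 13.5, 13.9),
    ("with", 13.9, 14.1), ("short", 14.1, 14.5),
    ("segments.", 14.5, 15.2),
]

print(find_clip(transcript, "short segments"))  # → (14.1, 15.2)
```

Once you have that time range, pulling the matching clip is a job for any video tool that accepts start/end offsets, which is essentially what the highlight-and-share workflow does behind the scenes.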
Thank you to the development team at Otter.ai for the real time meeting notes.