Day 76 of Self-Quarantine | Covid-19 Deaths in U.S.: 102,000
[NOTE: These numbers are staggering to me and life changing. My thoughts and prayers are with all of the families of the deceased. Going forward, each of my blog posts will update these numbers on the day that the post is first published.]
I love synchronicity.
I depend on synchronicity to focus my discovery and learning activities.
When a new topic randomly crosses my path from three different sources or colleagues, I know it is time to pay attention. Two weeks ago, Jupyter notebooks showed up from three different colleagues. They were excited about the ability to combine narrative and computing. Well, this is novel (double entendre intended). Narrative and computing are two words I never would have put together. They use different tools to create and think with. They require different thinking modes. They just don’t go together. It’s like Digital Humanities.
The 2015 Jupyter Grant Proposal introduction describes the creators’ aspirations:
“Computers are good at consuming, producing and processing data. Humans, on the other hand, process the world through narratives. Thus, in order for data, and the computations that process and visualize that data, to be useful for humans, they must be embedded into a narrative – a computational narrative – that tells a story for a particular audience and context. There are three fundamental aspects of these computational narratives that frame the problem we seek to address.
“First, a single computational narrative needs to span a wide range of contexts and audiences. For example, a biomedical researcher might originally perform statistical analyses and visualizations for a highly technical paper to be published in an academic journal. Over time, however, that same individual will give talks to other researchers, or even non-technical audiences. Eventually, it may even be important to enable non-coding lab scientists to perform that same statistical analyses and visualizations on data from new samples using a simplified graphical user interface. Across all of these different audiences and contexts, core aspects of the computational narrative remain invariant.
“Second, these computational narratives need to be reproducible. That is, other people – including the same scientist six months later – need to be able to understand exactly what was done (code, data and narrative) and be able to reliably reproduce the work in order to build new ideas off it. Reproducibility has long been one of the foundations of the scientific method, but the rise of data science brings new challenges to scientific reproducibility, while simultaneously extending these questions to other domains like policy making, government or journalism.
“Third, computational narratives are created in collaboration. Multiple individuals need the ability to work together at the same time, on the code, data and narrative. Collaboration is present in nearly all contexts where computational narratives are created: between two postdocs and a professor in the same research group; between the writers, editors and visual designers of an online news site; between the data scientists and business strategists at a large internet company; or between a teacher and students in a university classroom.
“Given this background, the core problem we are trying to solve is the collaborative creation of reproducible computational narratives that can be used across a wide range of audiences and contexts. We propose to accomplish this through Project Jupyter (formerly IPython), a set of open-source software tools for interactive and exploratory computing. These software projects support scientific computing and data science across a wide range of programming languages (Python, Julia, R, etc.) and already provide basic reproducibility and collaboration features. This grant aims at making major progress atop this foundation. The main application offered by Project Jupyter is the Jupyter Notebook, a web-based interactive computing platform that allows users to author computational narratives that combine live code, equations, narrative text, interactive user interfaces and other rich media. These documents provide a complete record of a computation that can be converted to a number of formats (HTML, PDF, etc.) and shared with others through email, Dropbox, GitHub, etc. They can also be published online thanks to our Jupyter Notebook Viewer, a free service we operate that allows anyone on the web to view a notebook as a regular web page.”
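To ground the quote above: a Jupyter notebook on disk is just a JSON document in which narrative and computation alternate as a list of cells. A minimal sketch of that structure (the field names follow the nbformat v4 format; the cell contents are my own invention for illustration):

```python
import json

# A notebook file (.ipynb) is plain JSON: narrative (Markdown cells)
# and computation (code cells) live side by side in one document.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {
            # Narrative: a Markdown cell.
            "cell_type": "markdown",
            "metadata": {},
            "source": ["# A computational narrative\n",
                       "The code below averages our three samples.\n"],
        },
        {
            # Computation: a live code cell with room for recorded outputs.
            "cell_type": "code",
            "execution_count": None,
            "metadata": {},
            "source": ["sum([2, 4, 6]) / 3\n"],
            "outputs": [],
        },
    ],
}

# Serialize the way Jupyter stores the file on disk.
text = json.dumps(notebook, indent=1)
print([cell["cell_type"] for cell in notebook["cells"]])
```

The whole “computational narrative” is one serializable document, which is what makes it shareable, convertible to HTML or PDF, and viewable as a regular web page.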
As I am immersing myself in this new world, everyone keeps saying, “Well, this works for programmers and scientist-programmers, and maybe you still remember how to program a little, but it will never work for anyone else.” Then I remember my encounter with Kate Hayles at Duke University, who shared how she teaches her freshman English classes. She uses software for their comparative literature studies.
Then, Kate hit me with the real paradigm shift. Along with comparing “texts” across different media, she is using literary critique skills to critique code. She described this emerging field of critical code studies. I wasn’t sure I had really heard what she just said, so I asked for a specific example.
Kate explained “We are now as interested in critiquing the software as we are in critiquing the text. There are several efforts under way to have side by side displays of the ‘digital text’ and the software that implements the digital text.” Now I knew that I had just fallen down Lewis Carroll’s Alice’s Adventures in Wonderland rabbit hole.
“Let me see if I understand this right,” I asked. “You mean to tell me that humanities students are both interested in software and have the ability to critique and write software in a humanities course?”
Kate looked at me a bit like I was a freshman, and patiently explained, “Of course this current generation is interested in software. This is the digital native generation, and they are eager to do the software explorations. They are frustrated with those of us from the old school who only want to focus on print.”
“Let me try one more time. There are not any humanities majors I know (including one of my children) who have the least bit of interest in computing. They chose the humanities so they could stay away from science, math, and computation,” I asserted.
Kate just smiled and suggested that I ought to sit in on one of her classes where they do exactly what she is describing – study comparative literature by creating and critiquing software. Kate said that given this turn in the conversation she would send along a couple more chapters from her latest book.
The narrative and computing ideas crashing together means it is time to call my narrative colleague, David Robinson. I provide him a little context. The intertwingling of narrative, computing, and data leads me to two questions:
- How do you make a narrative computable?
- How do you make computing a narrative?
The Jupyter technology has answered these questions at least at a minimal level, but what I was really trying to get at was the meaning of those two questions.
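Before the deeper meaning, there is a literal one: prose becomes computable the moment it is treated as data. A toy sketch, using the opening sentences of the Jupyter proposal quoted above (the stopword list is an ad hoc choice of mine):

```python
import re
from collections import Counter

# Treat a narrative as data: tokenize a passage and count its words.
narrative = (
    "Computers are good at consuming, producing and processing data. "
    "Humans, on the other hand, process the world through narratives."
)

# Lowercase word tokens, ignoring punctuation.
tokens = re.findall(r"[a-z]+", narrative.lower())

# Drop a few common function words (an ad hoc list, for illustration only).
stopwords = {"the", "and", "at", "on", "are", "other"}
counts = Counter(t for t in tokens if t not in stopwords)

print(counts.most_common(3))
```

Attenex Patterns worked at terabyte scale with far more than word counts, of course; the point is only that once prose is tokenized, the whole apparatus of computing applies to it.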
What does it mean to make narrative computable?
I take a quick look at the TED video David sent and scratch my head. I think David has been in Wisconsin too long. I don’t get the connection to my questions.
On the morning of the Zooming, I re-watch the TED video. Scott shares how comics have evolved through different media over the last 3,000 years and are now entering the digital age. Then it hits me. I have to go re-read all of Marshall McLuhan’s, Lev Manovich’s, and Sergei Eisenstein’s books. If I am going to put narrative and computing together, I have to understand the combination as a new medium. This new medium is heavily dependent on a database structure underneath.
From Scott McCloud’s TED talk:
“Dad understood the shape of the future. So did J.C.R. Licklider and his notions for computer-human interaction. Same thing: he understood the shape of the future, even though it was something that would only be implemented by people much later. Or Paul Baran, and his vision for packet switching. Hardly anybody listened to him in his day. Or even the people who actually pulled it off, the people at Bolt, Beranek and Newman in Boston, who just would sketch out these structures of what would eventually become a worldwide network, sketching things on the back of napkins and on note papers.
“So, three types of vision, right?
- vision based on that which cannot be seen, the vision of the unseen and unknowable;
- the vision of that which has already been proven or can be ascertained; and
- this third kind, a vision of something which can be, which may be, based on knowledge but is, as yet, unproven.
“Now, we’ve seen a lot of examples of people who are pursuing that sort of vision in science, but I think it’s also true in the arts, it’s true in politics, it’s even true in personal endeavors.
“What it comes down to, really, is four basic principles:
- learn from everyone;
- follow no one;
- watch for patterns; and
- work like hell.
“I think these are the four principles that go into this. And it’s that third one, especially, where visions of the future begin to manifest themselves. What’s interesting is that this particular way of looking at the world …
“Why is this important? I think this is important because media — all media — provide us a window back into our world. Now, it could be that motion pictures and eventually, virtual reality, or something equivalent to it, some sort of immersive display, is going to provide us with our most efficient escape from the world that we’re in. That’s why most people turn to storytelling, to escape. But media provides us with a window back into the world we live in. And when media evolve so that the identity of the media becomes increasingly unique — because what you’re looking at is comics cubed, you’re looking at comics that are more comics-like than they’ve ever been before — when that happens, you provide people with multiple ways of reentering the world through different windows. And when you do that, it allows them to triangulate the world they live in and see its shape. That’s why I think this is important.”
I send a note to David before our conversation:
I thank you.
I curse you loudly.
Really. Very loudly.
I thank you and affirm you for your oblique coaching.
I curse you loudly. Did I say that?
Watched McCloud again.
Now I lose my next month going back through McLuhan and Lev Manovich and probably a little Eisenstein.
Damn you David Robinson.
So is this what you are trying to coach me on?
Narrative and computing (aka literate programming) is a new medium.
The rules for this medium will be different from the old media of story or computing or big data.
What kind of medium will this literate programming medium be – hot or cool?
What is the intent of the new medium? Is there more than one intent when you start combining the ideas in a document?
- Narrative and Code
- Narrative and Code and Big Data
- Computing on the Narrative (like Attenex Patterns did with terabytes of documents)
- Narrative and Code and Interactive Visualizations
- Narrative and code as textbooks
- Narrative and code as a scientific notebook or patent notebook
What happens when you “break the page” and you have an infinite canvas?
What does it mean to shape the future with literate computing?
Thanks for starting the conversation with a bang.
As David and I explore narrative and computing and how I want to create in this new medium, I am reminded of Frank Gehry and how he had to innovate with technology to get his architectural designs built:
Frank Gehry had to transform himself from architect to builder, and then to operator of the buildings he designed, because he was otherwise unable to get his designs built. From “Is Designing Software Different from Designing Other Things?”, we catch a glimpse of how Gehry had to change his theories of design:
“In a more complex example, Frank Gehry, in a video at a Technology, Entertainment and Design (TED) conference put on by Richard Saul Wurman, described his challenges in creating public building designs such as the Guggenheim Museum in Bilbao, Spain, the Experience Music Project in Seattle, and the Disney Concert Hall in Los Angeles. When he first started exploring complex curved shapes for the exteriors of buildings, he was startled to discover that when he put his designs out to construction bid, the contractors quoted him five times the normal fees. He realized that no one knew how to build his creations. So he had to form a company to first adapt Computer Aided Design (CAD) tools to design the complex metal shapes, and then develop the software that would connect his CAD tools with CNC equipment to cut and mill those shapes. The end result was that he was able to build his distinctive creations for the same cost as traditional construction methods. During his presentation he reflected on whether he was now a building architect or a software designer.
“These changes are causing the field of architecture to look more like the field of software design. Lindsey details the extent to which computer systems and particularly the Dassault CATIA CAD system have entered Gehry’s practice of architecture. The computer is used for simulations of the digital and physical models, direct detailing, computer aided manufacturing, coordination of the electrical, mechanical and plumbing systems, and as a framework for the operation of the building after construction. Gehry describes how his evolving process is changing the craft of building design and construction:
“This technology provides a way for me to get closer to the craft. In the past, there were many layers between my rough sketch and the final building, and the feeling of the design could get lost before it reached the craftsman. It feels like I’ve been speaking a foreign language, and now, all of a sudden, the craftsman understands me. In this case, the computer is not dehumanizing; it’s an interpreter.”
The significance of the changes that Gehry has made in his fluent design process shows up in the organizational interventions that the software is bringing to the building industry as described in Digital Gehry:
“Ultimately, allowing for all communications to involve only digital information, the model could signal a significant reduction in drawing sets, shop drawings, and specifications. This is already reflected in the office’s current practices, where the CATIA model generally takes precedence (legally as well as in practice) over the construction document set. This is a significant change in standard practice, where specifications take precedence over drawings and specified dimensions are subject to site verification. … Glymph states that ‘both time and money can be eliminated from the construction process by shifting the design responsibility forward’. Along with this responsibility comes increased liability. When the architect supplies a model that is shared, and becomes the single source of information, the distributed liability of current architectural practice is changed.”
“Building on the experience of Gehry, we see that this combined hard and soft design can shift forward into the area of operating a building as well. One software system can act as a shared repository and information refinery for the design, build, distribute, intervene and, now, the operate phase knowledge base.”
David and I quickly realize this is the start of many conversations. I am still confused about how narrative and computing relate to each other. But now I am confused at a higher level.