I have spent the last few years looking closely at how we can design for presence, transformation, and more complex emotions. Our evolving relationship with intelligent technology is one of the most dynamic areas of this field. A recent book picks up this discussion: The Digital Mindset (Paul Leonardi and Tsedal Neeley, Harvard Business Review Press, 2022; summarized by Engadget). The authors do a fine job arguing why we should not treat smart technology like a person. Despite the anthropomorphism creeping in with cutely named digital assistants offering late-night hotel-bar conversation in the accent of your choice, we all know it will quickly disappoint, cost more than we think, and never really lead anywhere. The main reason: it is not human, and never will be. That last statement is not without controversy and requires plenty of context, but yes: I agree with it.
However, the book presents us with only one alternative option to this human-like relationship: to treat AI like a tool. This does work better, and just like the relationship with, say, a hammer, is unlikely to disappoint: precisely because it lacks ambition. But intelligent technology is ubiquitous and mediates a lion’s share of our relationships with others and the world around us. In a Marshall McLuhan kind of way, it is the world around us. “Tool” is all too blunt a metaphor and closes possibilities in the narrative on both sides: how we build — and how we as users approach — intelligent technology. In exploring our world and ourselves in it, we need more ambition, not less of it.
So, how do we design smart technology to set up a better relationship with us? Well, we can’t avoid ascribing personality to something that acts intelligently and speaks to us. And we do need technology to deliver in a predictable and “tool-like” manner. But instead of getting stuck in this false dichotomy, I propose an approach based on storytelling, a framework inherently born out of this very juxtaposition, one that creates meaning from opposites. To design for such experiences, we need to deliberately target two fundamental aspects of narrative that are largely absent in technology design today: meaningful surprise, and liminality.
For a story to work and be meaningful, it needs to be both inevitable and surprising at the same time (Aristotle, Poetics). If it is just surprising, it is merely random, and the combined narrative elements lack coherent relevance. And if it is just inevitable, it is simply causal and predictable, and we have no incentive to stick around to see what happens next. This impossible yet necessary juxtaposition of story structure of course mirrors that of person and tool. I say “impossible”, and outside of story it is. Yet all well-told stories naturally resolve it, and we fully expect it to be part of any book, film, play, or any designed experience for that matter, that we consider engaging with.
When we approach something explicitly presented as art or fiction, it always includes data, information and select parts of reality. But not all of it. Narratives and stories are deliberately incomplete. What the fictional context does is recontextualize what we see to question what it means. Why is some information presented, and why is some excluded? And more importantly, the interpretation is open: not only as a question of what it means, but specifically what it means to us.
It is by engaging with the incomplete narrative and filling in the empty space between these select data points with our own narrative, our hopes, fears and dreams, that we make a book, film or painting come alive and truly matter to us. Why do we care when the fictional hero is in danger? Because the hero is a proxy for ourselves. Personal assistants would benefit from being designed more like fictional characters. It would open up additional possibilities for exploring, not just receiving data we have asked for. One could argue that they already fit the bill as unreal characters, albeit insufferably boring ones. But they were not designed or presented as fiction, and therefore we don’t approach them as such.
The fictional label attached to an object or text sets up a narrative relationship, clearly distinguishing it from our regular reality. It creates a “liminal space”: a space in between what is and what might be. A good book or film delivers this through suspension of disbelief. Play does the same. And it is inside this narrative distinction that we, for a moment, are unburdened by pre-existing notions of reality and thus can play with who we are and what our reality means. Liminality is what gives narrative exploration space to exist.
In fiction, we know that what we experience is untrue, but we still have very real emotions in response to it. We might even have big, uncomfortable and complicated emotions in response to a rich story. And as we return to our regular reality after this momentary exploration, some of what was felt firsthand in fiction stays with us. We feel a need to accommodate this felt contradiction; it can transform us. The contradiction between real and unreal is creatively bridged. This dynamic, however, is excluded when we approach something as a tool. When something is assumed, and established, as a practical part of our permanent reality, it lacks liminality. Complex emotions and the sublime will struggle to find space as a consequence.
A fictional context, because it was created by someone, also has us search for sentient intent: the why. Why is this presented to me? Why was the curation done this way? Why was this included, and that excluded — and what does that mean? This sets up a healthy relationship where the curation that does happen on the backend of any conversational interface is brought to the foreground as a choice, not a mechanical transaction. Without that feeling of intent — and the agency of choosing what it means to us — we are left with a lukewarm prefrontal cortex soup where corporate interests can roam rather unchecked, disguised as transactional infrastructure.
“But I just want to know what the weather will be tomorrow…” Exactly! Tomorrow will be 12 degrees with a 40 percent chance of rain. For that we don’t need a digital persona any more than we need a meteorologist to adjust our home thermostat. Especially not a cute meteorologist who also wants to move in and help us select and mediate so much more, perhaps all, of how we experience, structure and understand a significant part of our existence.
Creating clearer distinctions between what should be mere data retrieval and what are experiences that can be meaningful and interesting is key to designing better relationships. It will also tell us where liminality is required, and where the tool simply needs to be sharper. What we do know applies generally: we need more types of meteorologists, and we need to make them better storytellers. And we should definitely be careful about handing them too many sharp tools.
This is ultimately a narrative issue: how do we design the context and shape the attitude with which we approach something and look for meaning? Such a framework and a practical design process is discussed in a separate paper on “Liminal Design” (Liedgren, Gaggioli, 2022, currently in review). Had we asked someone 20 years ago what they hoped intelligent technology would bring humanity by now, today’s technological landscape would be sure to disappoint them. Reapproaching product development through the lens of narrative, fiction and Liminal Design opens that third door: it can help us reimagine our technology goals and find new ways to design what might create and hold a wider array of human emotions. This will include everyday spaces where we are not just spoon-fed information, but sincerely exploring the world and ourselves. And in so doing, we will fundamentally challenge the transactional and commercial nature of today’s intelligent products and UX design through the rich questions that a story approach so naturally asks.
Johan Liedgren
Award-winning film director, writer and story consultant working with media and technology companies on narrative strategy. http://www.liedgren.com / https://medium.com/@johan_liedgren