Enthusiasm and expectations for AI have never been higher, and we’re beginning to sound like a cult.
While the Age of AI might have been declared in 1956, recent releases of large language models have upended the public’s understanding of its potential. We’ve been interacting with highly complex data solutions for years. Data science underlies how we interact with each other, how we spend our time and money, how we find love, and even how we plan for our health. But there’s no denying that in the last few weeks, there has been a seismic shift in the zeitgeist when even local news stations in small-town America are covering ChatGPT.
What’s changed is that we’re now able to solve problems that previously seemed impossible. Complex tasks, from classifying images with a single sentence to creating unique works of art from text, now feel almost trivial thanks to the user experience of recent AI releases.
If we were already in the age of AI, we’ve now hit warp speed. Bill Gates described challenging the team at OpenAI in mid-2022 to create an AI that could pass an AP-level biology exam. He expected a task of this difficulty to take them years; they finished it in months. Having fathered one technological breakthrough himself with the graphical user interface, Gates is easy to trust when he predicts that this moment will prove as fundamental as the personal computer or the Internet.
The enthusiasm and expectations for these technologies have never been higher. But the same principles still apply: without user trust, even the most dazzling AI system will fizzle out. Think of how many of us use Alexa as little more than a kitchen timer. We have to unravel the fanaticism to adequately design, develop, evaluate, and integrate these systems into our daily lives.
In her book Cultish: The Language of Fanaticism, Amanda Montell describes how the power of language is key to manufacturing the ideology, community, and us-versus-them attitudes we see in cult-like behavior. She outlines that cultish language accomplishes three things:
- It makes people feel unique while also feeling connected with others.
- It makes people feel dependent on a leader/group/product to the extent that life without them feels impossible.
- It “convinces people to act in ways that are completely in conflict with their former reality, ethics, and sense of self.”
In the first five days of ChatGPT’s release, more than 1 million users signed up. The flood of headlines since signals that we’re experimenting with these tools, we’re connecting with others about our experiences, and we’re creating communities. We’re starting businesses, we’re integrating these technologies, and we’re championing their benefits and potential. We might very well look back one day and bifurcate time as pre- and post-ChatGPT, the same way Mr. Gates suggests we have done for the personal computer or the Internet.
This enthusiasm has been tempered with valid concerns about bad actors, ethics, and unrealistic expectations. But as Mr. Barnum put it, there’s no such thing as bad publicity. Even the startling use cases seem to have little impact on public excitement. In fact, dissenting opinions voicing these concerns are shut down as innovation blockers. The dissenters, we’re told, simply can’t understand the potential of these technologies, or they’re worried about protecting inefficient and undervalued services. Cough, cough…AI will replace all writers…blah blah blah.
New Age Language and Our Cult-Like Obsessions
When it comes to our cult-like obsessions, New Age vernacular tends to go hand in hand. Montell describes how this way of speaking often says a lot more about the person who uses these words than the words’ actual meaning. Do any of these terms look familiar?
The words themselves don’t matter. What does matter is their “ability to instill a sense of us-versus-them elitism in followers who know how to use the language, while ostracizing or villainizing those who don’t.”
Under Montell’s criteria, we clearly have a cultish language phenomenon on our hands. New Age, cultish speak isn’t inherently bad, but as Montell argues, its ubiquity could lead us down a path of enlightenment we don’t want to go down.
The problem with cultish language, and the prevalence of hype in particular, is that when public expectations are inflated and the technology or science falls short, the public’s trust and enthusiasm are undermined.
AI does not replace human behavior; it augments it. We have to balance accuracy and explainability to build user trust, just as we would with any other solution. We should be very careful not to misrepresent these technologies or use language that conflates scientific discovery with marketing buzzwords.
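To make that tradeoff concrete, here’s a minimal sketch, assuming scikit-learn and its built-in breast cancer dataset (my own choices for illustration, not anything referenced above), comparing a shallow, explainable model with a more accurate but harder-to-explain one:

```python
# An illustrative sketch of the accuracy-vs-explainability tradeoff
# (assumes scikit-learn; the dataset and models are illustrative choices).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree: its handful of rules can be shown to a user directly.
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# A boosted ensemble: typically more accurate, far harder to explain.
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:    ", accuracy_score(y_test, simple.predict(X_test)))
print("boosted ensemble accuracy:", accuracy_score(y_test, opaque.predict(X_test)))
# Which model earns user trust depends on whether people need to see *why*
# a prediction was made, not just how often it is right.
```

Which side of that tradeoff to favor is a design decision about the user and the stakes involved, not just a modeling decision.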
Kristen Intemann argues that hype or exaggerations are inappropriate in scientific or technological communication when:
- The exaggeration thwarts the goals of scientific communication (i.e., helping the audience make well-grounded decisions about the benefits and risks) for the sake of driving up enthusiasm.
- The exaggeration is not supported by evidence.
There is a tradeoff between accuracy and audience context that requires the right balance of technical language and accessibility to convey the significance of what’s being discussed. To fight hype, you have to do more than weigh a technology’s risks and benefits. The audience has to have reasonable expectations of the technology or science’s impact and limitations within the context of the goal they want to achieve. That is, you have to understand whether you can reasonably trust it to do the thing you expect it to do. Can you trust both the individual prediction and the solution as a whole?
When our expectations for AI don’t align with current research, we might find that the solution doesn’t help us solve our problems but in fact creates more of them. It’s important to understand what has happened when we’ve put blind faith in these technologies, so we don’t repeat those mistakes even when the hype is at an all-time high.
The MITRE Network Partnership outlines how recent AI failures highlight the ongoing risks and impacts of systems not operating as intended.
Losing the Context
As you automate more inputs and decision-making, you risk losing the very context in which the solution operates. That means we might not always understand why or how the AI makes decisions. For example, Uber’s self-driving test cars required backup drivers to keep their hands on the wheel because the system could not navigate complex situations on its own. According to the NTSB report following a crash in which a pedestrian was killed, Uber had disabled automatic emergency braking to minimize erratic driving behaviors like slamming on the brakes for minor obstacles. When a pedestrian stepped in front of the car, the system expected the human backup driver, not the car, to brake. The backup driver put too much faith in the car, and Uber failed to consider the full safety implications of leaving that feature out.
Cognitive Drain
The low-hanging fruit will be picked first. As AI automates dull and monotonous tasks, humans will be left with the more complex and difficult work. While there are clear pros and cons to this tradeoff, the risk is that we become overloaded. That cognitive overload could actually increase the likelihood of making errors. When the stakes are high, as in healthcare or terrorism monitoring, those errors can have serious impacts.
Automation can also take a physical toll as people are forced to keep up with unrealistic standards. Amazon argues that it still maintains large warehouse workforces because robots cannot yet perform complex tasks, like picking and sorting inventory, that require human dexterity. But in the same environment, human-AI coordination means that workers are tracked through heavily automated processes. Fear of slowing down, even for bathroom breaks or meals, forces workers to keep operating at full capacity in grueling conditions, often while injured.
Replacing the Human Too Soon
With all this hype, it makes sense that we have a tendency to overestimate the range and scale of problems we can tackle with AI. MITRE describes two common mindsets around AI:
- “Perfectionists” — These people expect AI to perform beyond what it can achieve.
- “Pixie Dusters” — These people believe AI to be more broadly applicable than its capabilities allow.
The cult-like obsession we’re developing with AI will undoubtedly push more people into one camp or the other. As with scientific communication, misaligned expectations can lead to all-out rejection of these technologies. Or, at worst, New Age language could drive more people to draw comparisons between AI and conspiracy thinking, as we have seen with other examples of “conspirituality” (e.g., QAnon, anti-vaxxers).
The perfectionist mindset may also prevent adoption of these technologies because it sets the bar for acceptability remarkably high. In many contexts, that bar rightfully belongs on the top rung. But such narrow thinking doesn’t align with scientific exploration: we must explore, test, and validate to understand the limits and appropriate uses of these tools.
In contrast, the pixie-duster mindset rushes to deploy AI as widely as possible. Think of all the get-rich-with-ChatGPT articles that have flooded your Medium feed recently. Another example is when people take a fit-for-purpose AI model and deploy it for another purpose. This is transfer learning: taking a model trained for one task and reusing it as the starting point for another. It can certainly speed things up, but if the new application doesn’t have the right inputs (data, equipment, training, governance, etc.), it will fail, as the sketch below illustrates.
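Here’s a minimal transfer-learning sketch, assuming PyTorch and a recent torchvision; the pretrained ResNet backbone, the frozen layers, and the three-class task are all illustrative choices, not anything from the examples above:

```python
# A minimal transfer-learning sketch (illustrative; assumes PyTorch and
# torchvision >= 0.13). We reuse a model trained for one task (ImageNet
# classification) as the starting point for another task.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so only the new head will be trained.
for param in backbone.parameters():
    param.requires_grad = False

# Swap the final layer for a hypothetical new three-class problem.
num_classes = 3
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Only the new head's parameters are optimized during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

# Quick shape check with dummy data; real fine-tuning would loop over the
# new task's dataset, which is where the "right inputs" question bites.
dummy_batch = torch.randn(2, 3, 224, 224)
print(backbone(dummy_batch).shape)  # torch.Size([2, 3])
```

Freezing the backbone is fast and convenient, but whether those borrowed features, plus the surrounding data, equipment, training, and governance, actually fit the new purpose is exactly the question the pixie duster skips.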
There are limits to AI and its appropriate uses. Successful AI comes down to constructing an AI-friendly environment and striking the right balance in human-AI coordination. The increasing presence of AI technologies in our lives is already shaping, and will continue to shape, how we think of ourselves, how we interact with others, and how we understand the world.
The name of the game has not changed. We have to make sure the user remains at the center of the solution and not get swept up in our cult-like obsession. When we hear New Age terms and AI in the same discussion, our red flags should go up. We should take a step back and evaluate the limitations and appropriate uses of the technology. If we don’t, we risk deploying solutions that at best cause annoyances and at worst have consequential impacts.
Our enthusiasm and optimism about these recent technological advances can’t be denied. But we can’t let the hype overshadow the responsibility these tools have to continuously demonstrate that users can trust them. Our willingness to integrate them into our lives should be founded on an understanding of their context and limitations. There is a balance to strike between the perfectionist and the pixie duster. Changing how we speak about these tools is the first step.