Artificial Intelligence: the soul of soulless conditions

A slide from a presentation on computers, reading: “A COMPUTER CAN NEVER BE HELD ACCOUNTABLE; THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION”

What today we call Artificial Intelligence is the soul of soulless conditions: a simulation of what is missing from the world of computers, papered over the top of everything that is wrong. I’d like to characterize this problem and its origin, and make some recommendations for reform.

The following is based on a simple premise: to the extent that computers are different from us and cannot themselves adapt, we have to adapt them or adapt to them. And it’s more complex than we might think: Jef Raskin said, “An interface is humane if it is responsive to human needs and considerate of human frailties.” Bravo, indeed, but I would adapt this to add that computers don’t even yet allow for the majesty of humanity, let alone our frailties.

Moreover, the pseudo-human AI interfaces are mere fig leaves: Clippy, Alexa, and the “chat” style interface of things like ChatGPT have the effect of papering over the inadequacies of the underlying system. We don’t need Clippy and never have, and we wouldn’t have thought of inventing it if the underlying system were one that actually appealed to our humanity, and was therefore clear in its function and its ramifications.

A screenshot of Clippy, the personified paperclip help assistant from various Microsoft Office products.

This is the fretful dance that we undertake with technology: crushing our ideas into tables where they don’t fit, “optimizing” our articles with bizarre writing styles like keyword optimization and clickbait headlines, all either to appeal to computers or to humans in the nasty cattle-pens of information into which computers crush us.

This is to say that information technology, while bringing wonderful benefits, has caused a pathology in our thinking, which I call computosis; we require more and more elaborate diversions to cover up the issues.

The top layer, the interface, is a gurning mask covering the information layer: the crushing, annoying, inhuman construct that I will elaborate on below. The top layer can never completely correct for what’s underneath, so any human-feeling contact with a machine is in fact sugared dehumanization, until we reform our systems.

My main claims are, therefore, these:

  1. What we today call AI is a set of powerful technologies, and this power, energy and investment are wasted, given that our information systems can’t yet actually express human ideas.
  2. Therefore, AI unleashed in this world will deliver us malformed results and will magnify the existing social and informational distortions already caused by problem 1.
  3. More generally, AI is a distraction: we should be building tools that increase the scope and leverage of human action, rather than embracing AIs whose actions destroy any accountability and responsibility.

First, some throat-clearing is necessary, given that AI is in the public conversation at the moment and much is therefore said. I must first address: 1. what AI actually is, and 2. the general ethical risks associated with it.

What is called AI is not artificial intelligence

The Oxford English Dictionary defines intelligence as “The action or fact of mentally apprehending something; understanding, knowledge, comprehension (of something).” Nothing termed “AI” today comes anywhere close to comprehension or understanding.***

Moreover, AI isn’t even artificial. Artificial means something created by humanity as opposed to nature. The adaptive aspects of modern machine learning systems are in fact mass-harvested and processed human insights and intelligence. Calling artificial intelligence “artificial” is like calling a beefburger “vegetarian” because the meat from many carcasses has been minced, mixed, and seasoned beyond recognition.

To avoid being too self-righteous, I use the term AI because it’s in common usage: I might choose this as a hill on which to die, but haven’t done so yet. Moreover, terming these systems intelligent appears to be more about our fantasy of creating new life (or even gods; see below) without recourse to procreation than about any helpful description of what these systems do.

AI is socially dangerous

I do not object to what we call AI in and of itself, but I think that in its present form it is immensely dangerous. There are many excellent criticisms; I will give you just three, chosen in the hope that they might be a little new to you:

AI is a siren server

Drawing heavily on Jaron Lanier: AI in its current form represents the zenith of his “siren server” concept. A siren server is one that has some processing-power advantage over other computing systems, converted into an informational and thence a profit advantage. This advantage causes it to acquire more users and hence more data, creating a cycle of growth: the siren server’s advantage may once have been technological, but eventually it is simply one of self-perpetuating scale.

Siren servers take data that they tell us has no value and, with their scale, make it into something of colossal profitability. AI is the perfection of this: it requires eye-watering quantities of data, much of it acquired for free, often without our knowledge; in some horrible cases, as Lanier notes, those from whom the AI learns are told they are obsolete, like the amateur translators whose work the AI mines.

This is exploitation on a colossal scale, delivering massive benefits to a small group of users and a few corporations, dressed up as progress.

The medium is the message

There’s an immense interface problem: I’ve always disagreed with conversational inputs for computers, and especially with so-called personalities or identities. Even as an eight-year-old I thought Clippy crass and insulting.

Remember that the medium is the message: by which I mean that chat-style interfaces (the medium) force us into interacting with machines like they’re people, and that this is the message. The message is that the machine is alive: this lie is dehumanizing and deranging. I hope, dear reader, that it is painful for you to talk to a chat interface like it is conscious; this pain is your consciousness rebelling against this dehumanizing premise.

“Talk” to a computer? No. I think computers are wonderful, but it is a travesty to talk to them while our fellow humans are starved and degraded. If the pain ever stops, that is when you have subconsciously accepted not that the AI is alive but that you are a machine, and can be made to act out the absurd without complaint in order to proceed.

Humanity is becoming boring

Easy access to information has delivered wonderful benefits to humanity, but one of the drawbacks has been the middling of human creative output. Any research that one wishes to do today is hampered by encountering endless generic, easy-going, noncommittal rewrites of previous freely-accessible “articles.”

AI will make this much worse, simply because it can do this sort of regurgitation automatically. (To be sure, I’m not saying that it’s impossible for AI to do good work, but what good is that work if it is buried under a mountain of mediocrity that can be created faster and more easily?)

I recommend a change of focus

The final piece of throat-clearing is this: I’m about to recommend a change of focus, and I know that this sort of thing can be incredibly boring. Often one really wishes to explore the merits of a particular field, and not get derailed by someone else’s pet idea.

This is of course precisely what I’m about to do, but hopefully with a little more subtlety, because: 1. my claim is that AI is in fact quite interesting, but that it is currently malformed; and 2. I hope to show that some of the tasks for which we use AI would actually disappear if we had an information system that is fit for purpose.

All computer endeavors today are misdirected to the extent that we still lack a proper information system that can effectively describe human ideas. By “information system” I mean a system that manages information, its structure and the relationships within it. All progress made with respect to what we call AI will be at best a mockery of what it could be if we had a real information system, and at anything less will exacerbate and entrench our problems.

A photograph of a large, complex panel on an IBM mainframe.

There is of course the response: “Well, the information systems we have now can’t be all that bad, they’ve won out, right?” Not quite: what we have now does not need to be the best or even be objectively good, it just needs to deliver some benefit versus the time and energy invested. Indeed, computing history has shown the extent to which better ideas can be held back immensely by various forces — markets, bureaucracy, philosophy, etc.*

How should computer systems be?

What, therefore, should a computer system be like? What should a system possess in order for it to be worth using what is currently termed AI? My claim is that it should be ideisomorphic, a term I coined meaning “shaped like human thinking”: from ide (idea) and isomorphic (a mathematical term describing a relationship between two systems in which each element of one can be mapped to an element of the other).

In less mathematical terms, I think that our tools of computing, and especially of communication, should make it natural and easy for humans to express their ideas in their own terms, and should preserve everything about computers that is actually useful: their reliability, their tirelessness in doing repetitive tasks, and so on.

One can say, therefore, that the ideisomorphism of a given system is the amount of information and nuance preserved, rather than lost, when people attempt to store ideas within it.
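
As a rough gesture at quantifying this (a loose formalization of my own, not anything rigorous; treating these quantities as measurable is itself a generous assumption):

```latex
\mathrm{ideisomorphism}(S) \;\approx\; \frac{I_{\mathrm{preserved}}(S)}{I_{\mathrm{expressed}}}
```

Here I_expressed is the nuance a person sets out to record and I_preserved(S) is what survives storage in system S; a perfectly ideisomorphic system would score 1.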

What does it mean to be ideisomorphic in practice, therefore? My approach is to compare the structure of human cognition to that of current computer systems. Human thinking is concurrent, and features the combination and connection of any concepts, in any number, in any structure and style.

Computing, meanwhile, is usually trapped in one of several prisons:

  1. The tabular approach, like spreadsheets and relational (most) databases, which squash information into regular columns and rows: you’re damned if you need another dimension (say, three instead of two) or want to connect some set of items that aren’t next to each other in your table
  2. The tree-structured/hierarchical approach, where everything must belong to some parent, everything ultimately wrapping up into a capo dei capi: some systems in human culture take this shape (such as armies), but most do not: social networks, food webs, citation networks, etc.

These structures (hierarchies and tables) are with us because they come naturally to computers. They belong at the dawn of computing, not today.

Hyperstructure

What we need today is what I call “hyperstructure”, for which the premises are incredibly simple:

  1. We should be able to create things, and any unique thing should be uniquely addressable
  2. We should be able to freely associate things, without limitation, by creating a type of thing that can refer to one or more other things.

Given these two very simple premises, we can derive what I call Hyperstructure Hypermedia (HSM), sketched in code after this list:

  1. A system of nodes, which are chunks of information such as text, URLs, etc.
  2. A system of organization for the above nodes and for the tools of organization themselves:
    • Sets (which collect things), represented on screen as ellipses engulfing what they contain
    • Links (which connect things), represented on screen as lines between what they connect, with arrowheads to indicate direction, if necessary
  3. These units of organization can arrange the items without restriction: one can have links between links and sets, sets of sets and links, etc.
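
To make this concrete, here is a minimal sketch of such a model in TypeScript. Every name in it (Unit, NodeUnit, put, and so on) is my own illustrative invention, not an HSM API; the point is only that the two premises, unique addressability and unrestricted association, suffice to express the whole model.

```typescript
// A minimal sketch of hyperstructure; all identifiers are illustrative.

type Id = string; // premise 1: every unique thing is uniquely addressable

interface NodeUnit {
  kind: "node";
  id: Id;
  content: string; // a chunk of information: text, a URL, etc.
}

interface SetUnit {
  kind: "set";
  id: Id;
  members: Id[]; // may refer to nodes, links, or other sets
}

interface LinkUnit {
  kind: "link";
  id: Id;
  from: Id[]; // premise 2: a thing that refers to one or more other things...
  to: Id[];   // ...of any kind, so links between links and sets are legal
}

type Unit = NodeUnit | SetUnit | LinkUnit;

// The store is simply units indexed by id. Because sets and links are
// units with ids of their own, they can themselves be collected and
// connected without restriction.
const store = new Map<Id, Unit>();

function put(unit: Unit): Id {
  store.set(unit.id, unit);
  return unit.id;
}
```

Note that nothing here privileges nodes over the tools of organization: a set of sets, or a link whose endpoints are links, is just another Unit.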

There are a number of significant benefits to organizing things thus:

  1. It is generative enough to recover most information-management functions in use today: one can build a table of any number of dimensions using sets for columns, rows, etc., and can build trees/hierarchies from nested sets or links. Moreover, HSM means that:
  2. One can access a single data structure/database for all one’s data, rather than being frustrated by separate incompatible data structures for individual applications
  3. One can build a single application on this substrate, which itself combines any number of information-management functions currently separated between software products
  4. Moreover, it can actually represent and express human thinking: webs, graphs, arbitrary nesting, connections not just between individual things but between sets of things: a subtle tapestry of inter-penetrating information. One could view any information set (a field of academic literature, the usage of themes within a musical work, the World Wide Web) and visibly inspect the links and re-usage of material between the items.
  5. It makes it trivially easy to reference and re-purpose existing work. Today, one copies and pastes the old into the new and (if one is polite) links back. Ted Nelson, inventor of hypertext, imagined “transclusions”: actual pieces of existing material included in current material, connected to their original context. This is trivially easy to implement in HSM: a document is merely a construct made of the aforementioned sets, links and nodes (all uniquely identified), so one need merely incorporate existing material into the document structure to achieve a transclusion, as the sketch after this list shows.
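
Continuing the illustrative sketch above (same hypothetical types and store): a document is merely a set whose members are read in order, so a transclusion is nothing more than inserting an existing node’s id rather than a copy of its text.

```typescript
// A "document" is just a set whose members are read in order.
const quote = put({
  kind: "node",
  id: "node:nelson-quote",
  content: "Everything is deeply intertwingled.",
});

put({
  kind: "set",
  id: "set:my-essay",
  members: ["node:my-intro", quote, "node:my-conclusion"],
});

// Transclusion: the quote is included by id, not copied, so its original
// context stays discoverable. "Where else does this material appear?"
// becomes a simple scan rather than a plagiarism hunt.
function appearsIn(nodeId: Id): Id[] {
  const containers: Id[] = [];
  for (const unit of store.values()) {
    if (unit.kind === "set" && unit.members.includes(nodeId)) {
      containers.push(unit.id);
    }
  }
  return containers;
}
```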

Massive power, applied absurdly

My first issue is that applying non-ideisomorphic AI brings immense force of data and computing power to bear in a flawed and boring way: forcing subtle human ideas into tables and hierarchies destroys them; AI represents huge leverage applied in pursuit of broken, malformed ideas.

There are of course examples in which AI might be able to provide the muscle to repair the connections that were previously destroyed, but this is merely one hand returning what was taken by the other.

The rigidity of our current system of computing has even narrowed our own thinking: people seem to think it necessary that information be hierarchical or tabular in nature — applying AI to this malaise will further entrench these beliefs, especially for those who grow up “talking” to AI.

I want to live in a world where we actually incentivize daring, original ideas, expressed via an information system that allows for creative and properly-identified reuse and reference, and which has the fidelity to truly represent our ideas.

More pathologies

Secondly, I fear that AI will exacerbate existing pathologies caused by technology. For example, the modern Web has birthed bizarre spectacles that don’t need to exist:

In Ted Nelson’s original vision for hypertext, he imagined links that were real connections between pieces of information, visible from every item they touch, wherever the user happened to be looking from. Tim Berners-Lee’s World Wide Web gave us links that are visible only from the page where they reside, and that are invisible everywhere else, including on the destination page.
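
For contrast, in the hypothetical hyperstructure sketch from earlier, Nelson-style visibility from both ends needs no extra machinery, because links are first-class, addressable units: the destination can simply ask what points at it.

```typescript
// In the earlier illustrative sketch, a link is a first-class unit, so
// "what links here?" is an ordinary query over the store rather than
// information the destination page can never see.
function linksInto(target: Id): LinkUnit[] {
  const incoming: LinkUnit[] = [];
  for (const unit of store.values()) {
    if (unit.kind === "link" && unit.to.includes(target)) {
      incoming.push(unit);
    }
  }
  return incoming;
}
```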

This destruction of information forever placed out of bounds the questions: a. “How does a single piece of information fit into the whole?” and b. “What is the overall structure of all information?” I think that these are two of the most interesting questions one can ask, and I insist that we build information systems that can answer them.

But Google sprang up to solve this problem: its search engine is powered by some approximation of the overall structure of online content. However, instead of this structure simply being known, Google must crawl as much of the entire Web as it can, as frequently as possible, and derive the structure from this raw data. This is a travesty because it is out of date by definition, necessarily irrelevant, and, foremost, because it is a supreme siren server: Google’s index is built from all our websites, which it crawls; you may not, however, crawl Google.

Google therefore causes a set of nasty and depressing tendencies online: people are rewarded with “rankings” in Google for creating content of even marginal quality; since (as referenced above) there is no robust means of reusing or re-referencing existing content, we instead copy-paste or rewrite material, leading to the modern overabundance of half-baked, mediocre gruel, in which the original copied or rewritten content is almost never properly identified.

AI, put into this milieu, will 1. multiply the quantity of the gruel by orders of magnitude (by making it easier to create) and will 2. learn from the most common (and weakest) sorts of content, making for an immensely depressing vision of the future.

Provenance and originality will be destroyed, and there appears to be little we can do to stop it: the major AI companies have placed ethical safeguards into their systems, but they won’t be the only ones creating AI. With AIs in the wild that have no consciences to prevent them from aiding the middling of civilization, the only viable choice seems to be to change the fundamental structure of computing into something less gameable.

A computer can never be held accountable

I came across the image at the start of this article on Twitter; I can’t remember under what circumstances. The poster claims that they were an IBM employee and that this is a slide from a company presentation. I believe them, but even if the story is fake, it is still a gripping statement.**

Without even broaching whether it’s possible for AI to feel remorse, there is no way to petition AI today or to bring it to trial: if a company makes a duff ad or a bad product, I can at least email or call to complain; a company can keep an AI completely insulated from any public pressure or counter-argument against whatever direction it has been set upon.

This state of affairs destroys any sense of responsibility for work: I can’t complain to the AI, while the person in control of the AI can’t be truly responsible for the output they create.

Moreover, AI is a distraction from what should be the real focus of computing: to augment human intelligence and to help us think and work together. I am much less interested in something describing itself as an artificial intelligence than in a system that can properly interface with my and others’ human intelligence, and no such system is currently available.

The immense creative possibility of people has barely been tapped, yet the computing industry seems intent upon building systems that delegate creativity and decision-making to machines. We don’t need this; we need tools that at least don’t destroy our ideas or, better, increase their scope, scale and leverage. My recommendations, therefore, are these:

  1. We should build our information systems on the above-described hyperstructure model: my company HSM will soon be issuing products and services that are thus arranged.
  2. We should shun siren servers: do not let enormous corporations harvest your data and tell you that you are obsolete or that the data that they monetize is worthless.
  3. We should focus, rather than on creating purportedly intelligent machines, on building machines that increase the nuance, scope and leverage of our own, human, intelligence.
  4. Whatever useful technology within the category of AI that we use, we should use in a way that doesn’t exploit the majority to benefit a minority.
  5. We should build clear interfaces that expose the ramifications of our actions, not pretended conversations papering over everything that has been wrong about technology for decades.
  6. We should not lose our humanity nor forget what makes us human.

It seems apt, therefore, to quote from a section of the HSM Philosophy, on the relationship between humans and machines (I apologize for the format: it is intended to be a table with the “people should” and “systems should” parts juxtaposed, which is not possible on Medium):

  • People should value human beings first and foremost, especially for the things that make us human: systems should be humane.
  • People should make decisions and control our own destiny: systems should not make decisions nor control people’s destiny.
  • People should not have to undertake repetitive and unsatisfying work: systems should undertake repetitive and unsatisfying work.
  • People should never compromise their humanity or individuality for a machine: systems should never manipulate people, but should facilitate human freedom, collaboration and agency.
A still from the movie They Live by John Carpenter. The protagonist, putting on a pair of mysterious sunglasses, sees true reality: propaganda demanding submission and consumption, obscured by today’s colorful consumerism.

I’d like to quote some Marx. For the record, I’m not a Marxist, nor do I endorse his program; his critique of religion, however, is one of the best:

Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.

The abolition of religion as the illusory happiness of the people is the demand for their real happiness. To call on them to give up their illusions about their condition is to call on them to give up a condition that requires illusions. The criticism of religion is, therefore, in embryo, the criticism of that vale of tears of which religion is the halo.

Criticism has plucked the imaginary flowers on the chain not in order that man shall continue to bear that chain without fantasy or consolation, but so that he shall throw off the chain and pluck the living flower.

You may, dear reader, be a religious person yourself, but I hope you will agree with me that:

  1. Marx is clearly right when it comes to religion’s role as a calming agent for distressed people, especially as an arm of the state; and that
  2. While religion is not in itself unethical, it is quite unethical to make truth claims that one cannot prove and to force children to adopt them. Frankly, anyone who teaches children a specific belief system, beyond human solidarity and an open, critical mind, essentially disproves their own point: they don’t wait until their children reach the age of enlightenment to instruct them, because they know how much harder it would then be.****

My claim is that AI generally, and chat interfaces specifically, adopt this role today: they are the imaginary flowers adorning the chain of inhuman technology; our children are taught (as I was) that the type of technology we have simply is, or are presented with technology (such as the insufferable tablet computer in the hands of a toddler) without any explanation that it is a narrow, contingent vision of what could be.

The similarity between organized religions (most of which promise favor from an all-powerful deity) and the pursuit of super-human AI is no coincidence: it is the abdication of the imperative to build better tools and help other people ourselves, wishing instead for a higher authority to shoulder the burden. I say that we should cast off this chain and build truly graceful systems.

If you would like to be part of this project, please connect with me.