Why I No Longer Believe in AI
AI is not about "intelligence", artificial or otherwise. It is merely the automation and leveraging of information.
The term “AI”, short for Artificial Intelligence, was first coined in 1956 by John McCarthy, a computer scientist then at Dartmouth College who later moved to Stanford University. The concept has been widely popularised by many movies ever since, including The Matrix, which also introduced us to the idea of “The Red Pill”.
Artificial Intelligence represents the vision that machines will one day become intelligent and, therefore, be capable of thinking for themselves. Moreover, it is the foundation on which the latest craze for technological utopia is to rest.
I was about twelve years old when I first became enthralled by the idea of a “thinking machine”, and it was a fascination which drew me into computers and programming. You see, the portrayal of AI in the science fiction of my youth was generally a positive one. As the years went by, however, my enthusiasm for it waned. In its place, despite some apparent technological advances, there grew a feeling of vague disappointment — as if my heart could sense there was something wrong, but my mind could not quite put a finger on what it was.
Today, I think I know what it is — on so many levels, the modern realisation of AI bears no resemblance to the dreams I once held for it.
For one, the AI with which we are now familiar is nothing of the sort — it does not represent “intelligence” or any meaningful form of artificial thinking or consciousness. It is merely clever programming, or let’s call it “algorithmic engineering”. In itself, there is nothing wrong with “clever programming”. In fact, it could be a fascinating subject, provided we are clear on what we are dealing with and its limitations.
For another, the fictional depictions of AI which so captivated me in childhood generally had one thing in common which is absent today — virtually every sci-fi robot (such as “R2-D2” from Star Wars) or spaceship computer (such as “HAL 9000” from the film 2001: A Space Odyssey) was a unique, independent and autonomous agent which made its own decisions. In other words, these were depictions of machines which arguably had minds of their own (I now regard “mind” and “machine” to be mutually exclusive terms). Indeed, that was what I understood the very point of Artificial Intelligence to be.
Today’s so-called AI appears to be overwhelmingly centralised in its implementation. When you speak to “Alexa”, for example, it has no uniqueness or anything which can be described as “personality”. It doesn’t even “process” what you say to it. Rather, it is nothing more than a dumb terminal which transmits your words to a centralised mainframe (or, more precisely, a corporate “cloud” of computers) controlled by Amazon. The same goes for Apple’s “Siri”.
Above all else, I find that I can no longer endorse or condone how AI is being applied — against people. This is not even AI after all! It is merely the automation and leveraging of information for purposes of wealth extraction, centralisation and control.
The term “Big Data” is a popular technical buzzword associated with AI. It describes the technology behind the vast data stores used to feed the algorithms. But what is all this stored data actually about?
Well, yes, it’s invariably about people — where we go, who our friends are, where we shop, along with traits which indicate our level of education, outlook, health, emotional states and behavioural tendencies.
Here, for example, is a British consultancy firm extolling the virtue of AI in its marketing material:
This behavioral data is invaluable because if you also overlay that with AI, it’s possible to accurately predict your customer’s next actions and have the perfect campaign ready — Red Eye International Ltd
Shoshana Zuboff is a Harvard Professor who sees a different side to things. She has a name for what is happening below the level of everyday awareness. She calls it “Surveillance Capitalism”.
Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. ― Shoshana Zuboff, The Age of Surveillance Capitalism
When Gmail first arrived, it offered a gigabyte of email storage for everyone — for free. This represented a massive amount of storage space and, at the time, I wondered just what the business model behind it was. All we had to do in return, it seemed, was to accept that Google collected some “data” for the purpose of showing more relevant ads, but otherwise it was something for nothing.
The word “data”, however, is a misnomer which disguises the true nature of that which is being taken from us. We actually need a word far more apt than mere “data”, for it is not merely a stream of abstract numbers of interest to no one except statisticians and geeks. Rather it is the flow of sovereignty and free-will from individuals to those who control the machines.
Today, I am rather of the mindset that anything “free” is not worth it — the ultimate price is too high. For, as we have experienced over recent years, things do not stop at mere behavioural prediction, but extend to “nudging”, manipulation — and ultimately far beyond toward outright control. As Zuboff writes:
It is no longer enough to automate information flows about us; the goal now is to automate us.
In an article published by the World Economic Forum, its author clearly envisions a future in which all things are to be produced by machines powered by AI, while people are side-lined and left to focus only on “leisure, creative, and spiritual pursuits”.
With the right mindset, all societies could start to forge a new AI-driven social contract, wherein the state would capture a larger share of the return on assets, and distribute the surplus generated by AI and automation to residents. Publicly-owned machines would produce a wide range of goods and services, from generic drugs, food, clothes, and housing, to basic research, security, and transportation — Sami Mahroum, How an AI utopia would work, 2019. The World Economic Forum
Shockingly, the author promotes the idea that the only kind of work still needed will be that geared toward the accumulation of status and wealth, with other work previously “necessary for a dignified existence” being “all but eliminated”. If this were to come to pass, I cannot see any requirement for governments and their stakeholders to distribute anything to mere residents as promised, given that we would no longer serve any purpose or have any say in things. Rather, I see such a future as one in which people simply find themselves unnecessary and disposable.
In the 19th century and early 20th, the automation of labour via mechanisation granted power to industrialists, but had severe consequences for skilled workers. It has long since been argued that this was a necessary price to pay for progress. Certainly, I would not suggest that the weaving loom, for example, could or should be uninvented.
Today we face a parallel upheaval, but one which threatens our very existence, if not our souls. I rather feel that this time, however, things will have a profoundly different outcome.
The reason I claim that AI does not represent any kind of true intelligence is rather profound, and involves such things as uncertainty, information and free-will. These were not things I understood years back, at least beyond the everyday usage of the words. However, I now feel they are important, as they represent the seeds of destruction for the dreams of technological utopia or, more accurately, dystopia.
The very definition of a “machine” is that which transforms energy from one form to another — but it does not generate energy out of nowhere. Likewise, computers transform information from one form to another, but they do not generate it. They merely reflect back, in a different form, information they get from elsewhere.
Minds are things which possess free-will, whereas deterministic rule-based machines do not. Human minds, like others in the animal kingdom, are not fundamentally deterministic or algorithmic in nature, but tap into uncertainty at the quantum level (*).
In my youth, the buzzword of the day was “digital” — this, we were told, was the future. The very purpose of digitisation is the elimination of uncertainty, collapsing the infinite possibilities granted to us by the Universe into a finite number of discrete and knowable ones. In this way, it is possible to copy digitally encoded music indefinitely, without loss of quality, each subsequent copy being absolutely indistinguishable from the original. In an entirely digital world, however, you lose something which hardly seems to matter at first, but really does — the ability to bring new information into existence.
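To make the point about lossless copying concrete, here is a minimal sketch in Python (the file names are purely hypothetical) which verifies that a digital copy is bit-for-bit identical to its source by comparing cryptographic hashes, something no analogue copy could claim:

    # Minimal sketch: a digital copy is bit-for-bit identical to its source.
    # "original.flac" and "copy.flac" are hypothetical file names for illustration.
    import hashlib
    import shutil

    def sha256_of(path):
        # Hash the file's raw bytes, reading in small chunks.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    shutil.copyfile("original.flac", "copy.flac")  # make the digital copy
    print(sha256_of("original.flac") == sha256_of("copy.flac"))  # True: indistinguishable

A copy of the copy would pass the same test, which is precisely why the thousandth generation sounds exactly like the first, and also why nothing new ever enters the chain.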
Living things, among which I would include human societies, since they form out of our interactions with each other, cannot grow in isolation but must interact with and exchange information with other autonomous members in order to thrive. If you place human beings in solitary confinement, for example, they degenerate both mentally and physically over time. Likewise, without the ability to generate new information, a so-called utopian society will not be able to sustain itself, but will become hollowed out and brittle. Unable to innovate and adapt to change, things will simply fall apart as entropy increases in what is essentially a closed system of strict determinism.
In short, I do not believe that Surveillance Capitalism or the technological utopia of the WEF have any sustainable future, but will collapse in an implosion of nothingness. This gives cause for hope, at least in the long-term, as new shoots will then be able to emerge.
I retain an interest in computers, but only when applied to meaningful endeavours. Computers should not be turned inward on ourselves in order to automate our lives and deconstruct all that we are.
Rather than accept the “free” but trivial conveniences offered by the business model of Surveillance Capitalism, or wait for the WEF’s dystopian dream to arrive, I believe we should build a free-market business case for independent, unconnected devices, along with private computer networks with well-defined boundaries to the outside world.
Andrew Thomas
(*) Roger Penrose, The Emperor’s New Mind, 1990.
First published in the August edition of New English Review Magazine. Also slated for publication in print form in Bournbrook Magazine and The Light paper. Banner image inspired by a scene in the 1927 film Metropolis.