Humans and Machines: Heaven or Hell?
The following post is an amalgam of recent talks I have given on some of the ideas in my latest book, The Machine Age, which has so far been published in the UK, the USA, and Germany.
I want to tell four stories about the relationship between humans and machines. Each offers a vision of both heaven and hell. After that, I have one more story to tell, which is the scariest of all.
Four Stories
First and most familiar is the impact of machines on jobs. Automation has been fraught ever since the Luddites, early 19th-century British handloom weavers, started smashing the power looms which were destroying their jobs, and the poet William Blake conjured up a vision of dark satanic mills sprouting across England’s ‘green and pleasant land’.
Over forty years, from 1820 to 1860, handloom weaving was extinguished. But in the same period the Industrial Revolution took off with spectacular results. In 1820 the population of the UK was 25m; today it is 70m. In 1820 real per capita income was about £1,000-£1,500; today it is around £30,000. The simultaneous expansion of population and living standards over the last two hundred years - defeating the dire predictions of Malthus - was made possible by machinery.
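A rough sanity check on these magnitudes, taking £1,250 as a working mid-point for the 1820 figure and two hundred years as the interval (both my own simplifications, not figures from the book):

```python
# Back-of-the-envelope check on the income figures quoted above.
# Assumed inputs: ~GBP 1,250 in 1820 (mid-point of the estimate),
# ~GBP 30,000 today, roughly 200 years apart.
start_income = 1_250
end_income = 30_000
years = 200

annual_growth = (end_income / start_income) ** (1 / years) - 1
print(f"Implied average annual growth: {annual_growth:.2%}")
# -> about 1.6% a year, compounded over two centuries
```

A twenty-four-fold rise in real income turns out to require only about 1.6 per cent growth a year: compounding, not miracles, did the work.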
The spread of machinery not only provided a seemingly endless stream of replacement jobs at higher wages but supported a growing population.
This whole achievement hinged on the existence of potential jobs to replace those being lost. The question today is: can this continue?
It is now claimed that up to eight million UK workers could be replaced by AI within the next ten years, perhaps sooner.
Historically, when you automated something, the people moved on to jobs which hadn’t yet been automated. But once you have AGI, or artificial general intelligence (some call it superintelligence), the situation is different. AGI can take over all the new jobs created by the automation of the old jobs.
The optimists tell us not to be alarmed by this. They foresee a steady ascent in the quality of jobs as their routine parts are farmed out to robots, and humans are freed for higher-value (more creative) work.
Pessimists like Martin Ford and Daniel Susskind argue that the new jobs created will be fewer in number and worse in quality than the jobs they replace. They paint a picture of ‘lovely jobs at the top, lousy jobs or no jobs for everyone else’. Dystopian films and fiction tell the same story. Their trajectory runs from the satanic factories of Fritz Lang’s METROPOLIS (1927) to the spaceship of bloated, atrophied humans in Pixar’s WALL-E (2008).
So the current debate is stalled between the tech-enthusiasts who promote AI as ‘enhancing’ human performance and those who want to slow it down to avoid humans being replaced by robots in all walks of life.
However, there’s another strand to this debate which goes like this: What’s so bad about having to work less, provided you receive replacement income? Haven’t we all dreamt of having less to do, having more time for fun and games?
This was technology’s promise as told by John Maynard Keynes in his Economic Possibilities for our Grandchildren (1930).
What Keynes said in a nutshell was that technical progress was bringing Paradise within reach of all. He worked out that in three generations - roughly a hundred years from when he was writing - technological progress would give the prospective population of the ‘civilised’ world a standard of living between four and eight times higher than in the 1920s, obtainable at a small fraction of its current workload. Freed of the burden of toil, ordinary people would be able for the first time in human history to live ‘wisely and agreeably and well’. Machines would do all the necessary work, making possible a return to Eden, where ‘neither Adam delved nor Eve span’.
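It is worth seeing how undemanding the arithmetic behind this prediction is; a minimal restatement of the compounding (my own, not Keynes’s notation):

```python
# Keynes's prediction restated: what constant annual growth rate yields
# a four- to eight-fold rise in living standards over a hundred years?
for multiple in (4, 8):
    rate = multiple ** (1 / 100) - 1
    print(f"{multiple}x over 100 years implies about {rate:.1%} a year")
# -> roughly 1.4% and 2.1% a year: modest rates, dramatic cumulative effect
```

Growth rates of that order were hardly utopian by the standards of Keynes’s own lifetime, which is why he could treat Paradise as a matter of patience rather than luck.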
Keynes didn't actually spell out what people would do when they no longer had to work.
He was a Cambridge don, and reading books, listening to music, and thinking great thoughts in beautiful settings, surrounded by beautiful friends and pictures, might strike some as an ideal life.
However, it may be that many of us would find it rather depressing to be deprived of our main purpose in life. Even Keynes thought that the prospect of endless leisure would produce a collective nervous breakdown.
At any rate, it hasn’t happened. Most of us are nowhere near a 15-hour work week, though standards of living in rich countries are four or five times higher than in 1930.
So why have hours of work not fallen in line with Keynes’s expectation?
There are five reasons, two of which Keynes allowed for and three others which he ignored.
The two which he allowed for were population growth and wars. These are infallible means of recreating scarcity just when one thinks one is over the hump of necessity. Today the question is: will Malthus or Mars be the first to overcome Prometheus?
The three obstacles Keynes didn't allow for were as follows:
The first is human insatiability. He thought human needs might be quite quickly satisfied, but he ignored the phenomenon of relative wants - I want something because you’ve got it - which creates a desire for more and more, fuelled by relentless 24-hour advertising. Advertising was certainly going on in Keynes’s time, but not the relentless, minute-by-minute advertising targeted at all internet users - now the vast majority of today’s rich-country population - almost forcing continuous shopping on an addicted population.
Secondly, Keynes ignored jobs as a source of identity, and therefore joblessness, even if compensated, as a curse to be resisted rather than a blessing to be embraced.
Thirdly, Keynes ignored the question of distribution, and therefore of power. He assumed that the gains from efficiency would go to everyone, not just to the few. But there is no automatic mechanism to ensure this, and since the ascendancy of neoliberal economics in the last forty years, the social mechanisms for securing real wage growth have weakened or gone into reverse. While some people have reduced their hours of work because they can afford to, many others are compelled to work longer than they want to in a desperate effort to hold on to what they have already got.
To sum up this debate: the assertion that AIs will soon be able to outperform humans in any task humans do raises the disturbing question: what is the value-added of being human? Will humans not become redundant? The answer that humans uniquely have a soul or consciousness is unconvincing to materialists, who believe, with Descartes, that the soul is located somewhere in the brain. On this view the human mind is only a complicated kind of brain, and there is in principle no obstacle to building artificial brains with souls. All such mad thoughts, we must understand, are being heavily financed by tech oligarchs.
My view is that if we want the work and technology story to end happily we will need to slow down the rate of job destruction, think more carefully about job replacements, think very carefully about replacement incomes for scaled-down jobs, and institute swingeing wealth taxes to pay for them.
And we might also ponder this: how many of the technologically enabled improvements of (say) the last fifty years can we readily imagine doing without?
My second story is about the impact of technology on health. ‘A paradigm shift: how AI could be used to predict people’s health issues’, screamed a Guardian headline a few weeks ago. An AI trained on 5 years of data and 10bn events such as hospital admissions, diagnoses, and deaths will, it is claimed, be able to predict the onset of 1,000 diseases, allowing doctors to offer ‘more focused’ screening tests and preventive medicines.
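The article does not describe the system’s design, but the underlying idea - treating a patient’s record as a sequence of events and estimating what tends to follow what - can be sketched in toy form. This is an illustration of my own, not the reported model:

```python
from collections import Counter, defaultdict

# Toy sketch of event-sequence risk estimation: count which events follow
# which in patient histories, then read off relative frequencies.
# Purely illustrative data; not the system reported in the Guardian.
histories = [
    ["admission", "dx:hypertension", "dx:stroke"],
    ["admission", "dx:hypertension", "discharge"],
    ["admission", "dx:diabetes", "discharge"],
]

follows = defaultdict(Counter)
for events in histories:
    for prev, nxt in zip(events, events[1:]):
        follows[prev][nxt] += 1

def onset_risk(prev_event: str, disease: str) -> float:
    """Fraction of times `disease` followed `prev_event` in the data."""
    total = sum(follows[prev_event].values())
    return follows[prev_event][disease] / total if total else 0.0

print(onset_risk("dx:hypertension", "dx:stroke"))  # 0.5 on this toy data
```

Scale that up from three toy histories to 10bn events, with a model that weighs whole histories rather than single transitions, and one can see how ‘more focused’ screening becomes plausible - and also how much rides on the quality of the data.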
The medical dream is that AI will enable us to live longer and more healthily. It promises to reverse, at least partly, the ageing process. We are on the brink of a generation of technologically enabled centenarians. Surely this is pure gain? To which the right answer is that the quality of life matters more than the quantity of years - ‘better to die gloriously than live uselessly’, as the proverb has it. Artificially prolonged life would strike many people as a horror story. Hence the growing movement in favour of assisted dying.
A third topic of current discussion concerns the impact of technology on social cohesion and mental health. The great benefit of social media is said to be the empowerment of ordinary people through unprecedented access to information. By overturning the authority of professional and religious gatekeepers, they release a pent-up flood of democratic activity, political and creative.
But with this go the atomised, solipsistic relationships of humans with the internet, which replace person-to-person relationships and lead to the growth of internet diseases - alienation, isolation, pornography addiction, and so on. The more digital technology promises to free its users from the constraints of authority, the more the demand grows for restriction of access and control of content to guard against these addictions. Here we seem to be faced with a dreadful race between technology and mental and social breakdown.
A fourth, and related, focus is the impact of AI on politics. Social media are routinely said to undermine democracy by spreading disinformation and conspiracy theories. The control of mass media outlets by tech billionaires Elon Musk, Peter Thiel, Jeff Bezos, Mark Zuckerberg and a few others threatens the end of free speech as we have known it. Their collusion with Trump’s clampdown on the media carries the symbiosis between money power and politics to a new level.
The main thread running through the stories above is that as technology is applied to an ever wider range of human activities, much more effort will be needed to ensure it remains safe and healthy. Prominent figures in the AI debate like Stuart Russell, Max Tegmark, and Yuval Noah Harari have called for pauses in research and deployment to allow time for reflection on the existential risks technology poses. The ideal end of such a pause might be a global agreement to ban, or at least slow down, certain types of research or development, on the ground that they are too dangerous to be allowed to go forward unchecked.
But here I come to what is to me the most frightening part of my tale.
Weaponising AI
Some of you may know the famous scene at the start of Kubrick’s 2001: A Space Odyssey, when one of our fur-covered ancestors picks up a bone from a skeleton lying on the ground and realises that it can be used to fight off enemies. Having killed the leader of a marauding group, this humanoid throws the bone up in the air in triumph, where it transforms before our eyes into a slender spacecraft orbiting the Earth.
This scene is a timely reminder that technology started off as a weapon of war, whether for hunting animals or fellow humans.
I don’t want to suggest that all technology has been developed with a military purpose. The phrase ‘turning swords into ploughshares’ suggests the exact opposite. Also, printing certainly didn’t start off as a weapon of war, though rulers quickly saw its value for propaganda purposes.
But much more often than not the creativity of the scientist has been channelled towards military purposes. An early example is Archimedes, who was distracted from pure thought by the command to build defences for the protection of Syracuse, resulting in improved catapults and other engines of war. And we know about the great physicists and mathematicians involved in building the atomic bomb. Over the centuries governments have continually subsidised inventors not to produce a better life but to produce more efficient ways of killing.
AI as we know it today was incubated in war and war preparations. The computer wasn’t born in scientific institutes working for the common good but in the UK’s Bletchley Park and the USA’s DARPA programmes, the first designed to break Germany’s wartime codes, the second to keep the USA ahead of the Soviet Union in the Cold War. These developments led to the internet, which began as ARPANET, a US Department of Defense programme developed for military purposes.
All the voice and facial recognition systems which are now ubiquitous were first developed to serve military and intelligence needs. Of course, there have been civilian spin-offs from which we all benefit. But their military and intelligence deployment has grown in parallel. My fear is that the latest weaponisation of technology won’t allow for civilian spin-offs because there won’t be any civilians left.
Why do I say this? It’s because all the brakes which we might want to put on AI development for the good of us all will be cancelled when it comes to military development, because we must ensure that our AI is better than your AI.
The optimists will say that even countries at war or potentially at war will still be able to reach agreements to stop the development and deployment of weapons which would cripple or destroy them all. They cite the Geneva Protocol of 1925 banning the use of poison gas, and the various non-proliferation and arms control agreements which have sought, with some success, to limit possession and development of nuclear, chemical and biological weapons.
These were notable achievements in their day. But such weapons were specific and identifiable, so their development was subject to inspection and control. The threat of AI weaponry is more diffuse, since it penetrates nearly every domain of military operation: drones and robots; intelligence, surveillance and reconnaissance; cyber warfare, hacking and disinformation; command and control enhancement.
In such a world AI research and development is part of the arms race; AI policy becomes a matter of making sure that our AI development stays ahead of that of our potential enemies.
As Britain and Europe rearm, a key source of funding for AI development will be the defence and intelligence services. What kind of AI we want and for what purposes will be determined by security requirements. And this will be true of all countries playing the zero-sum game.
The dominant view today is that we have returned to the Cold War situation, or even worse. Fiona Hill, a member of the UK Government’s Strategic Defence Review, believes the third world war has already started. The Review itself demands the mobilisation of the whole of society against the threat of multiple weapons systems which might be deployed by terrorists or malign states.
In such a world the difference between peace and war breaks down: we are always at war, always needing to be on guard against often silent and secret threats. And that portends the end of free speech and free assembly.
Last week our UK Parliament passed an amendment under the Terrorism Act 2000, adding Palestine Action to the list of proscribed organisations, a move which has provoked widespread protests. I speak with some feeling about this because the 75-year-old mother of my daughter-in-law was arrested and detained overnight for taking part in one such protest. I’m not suggesting that we have reached the full Orwellian state, but there is an Orwellian creep which requires our constant attention.
So am I an optimist or a pessimist? Is the glass half full or half empty? I would describe myself as a neo-Luddite. Technological innovation, I would suggest, has reached the point of diminishing returns. The small incremental value it can still add to human existence is overwhelmed by the threat of destruction it brings. We have striven mightily for thousands of years to get to this point. Now we should be setting about enjoying its fruits, not gearing up for a new bout of collective destruction.
"An AI trained on 5 years of data and 10bn events such as hospital admissions, diagnoses, and deaths, will be able to predict the onset of 1000 diseases..."