They say the time to get worried on an airplane is when the crew look scared. So when 1,100 of the top technology leaders and developers, people like Elon Musk and Steve Wozniak (but interestingly not Bill Gates or Mark Zuckerberg), publish an open letter warning of the potential risks of AI to the future of the human race and calling for a six-month moratorium on further AI development, it surely has to be time to be, at the very least, concerned.
The trigger for this major red flag would appear to be the release by OpenAI of ‘GPT-4’, the latest version of the Artificial Intelligence system behind its chatbot, ‘ChatGPT’. Let’s unpack this. OpenAI is a research laboratory funded by the likes of Elon Musk and latterly Bill Gates/Microsoft. OpenAI is a not-for-profit organisation but it has a very much for-profit subsidiary, OpenAI Limited Partnership, which commercialises what the lab develops. The launch of ChatGPT took the valuation of that subsidiary to over $29 billion. GPT-4 stands for ‘Generative Pre-trained Transformer 4’ and it is a ‘multimodal large language model’, which in effect means something very, very intelligent that you can talk to through its chat interface, ChatGPT. You can ask it questions and it will reply, like a kind of talking search engine (Google is very worried about it because its version, ‘Bard’, is by all accounts not as good). But ChatGPT can do much more: it can create stuff, letters, essays, stories, poems; it can even write code.
The Open Letter calls for GPT-4 to be the line in the sand, the signal to pause and figure out how this should all be governed, what oversight and guardrails need to be put in place to avoid AI getting out of control and to avoid unintended consequences.
I have been worried about AI for a long time and I have occasionally shared my concerns, and what drives them, in my blogs and eBooks. I’m not a technology expert by any stretch of the imagination but I do have a very elastic imagination (which in my defence Einstein said was more important than knowledge). Imagination is fuelled by, to give it the posh word, the zeitgeist. You sniff what is in the air culturally, join the dots and let your imagination do the rest. I can be more specific than that and point to three things, three realisations that joined some dots and got me worried about AI, social media and robotics.
The first realisation was that if you see it in the movies (or in the pages of science fiction) it has a habit of coming true; science fiction more often than not becomes non-fiction. If you live long enough you see the widespread adoption of technologies that were wild ideas in old films, TV shows and books: digital photography (‘The Man Who Fell to Earth’), the iPhone (‘Star Trek’), virtual reality (‘Brainstorm’), ChatGPT (computers like HAL in ‘2001: A Space Odyssey’ and just about every other deep-space film or show). We can’t yet ‘beam me up, Scotty’, nor do we have robots that are indistinguishable from humans, robots that can form and reform into any shape, or alien invasions intent on mining the earth, so it hasn’t all become reality, yet…
Nevertheless, it struck me that sci-fi is a kind of forward memory, so when I watched movies like ‘Phenomenon’ or ‘Lucy’ it made me think about the possibility of the human brain working at 100% capacity, but when I watched ‘The Matrix’, ‘Ex Machina’ or ‘Westworld’ I worried that machines would get there first because, motivated by power and greed, we would facilitate this.
The second realisation was that the internet was becoming a connected global brain (I’m not claiming to be the only person to have spotted this). On the one hand the internet was becoming the warehouse for every single thing anyone and everyone had ever written throughout history, every single human artefact; on the other hand the world was increasingly uploading every thought, comment, experience and bit of data, with the ability to search and connect all of it. We were, and are, witnessing the creation of the biggest brain imaginable: not a human brain belonging to one person, made up of soft tissue, cells, blood, nerves and neural pathways, but a digital brain that combines the brains of every person past and present. The key to unlocking the power of this global digital brain would lie in the intelligence of whoever – or whatever – interrogated it. You need a powerful search engine and the most efficient intelligence using that engine. The human brain is highly efficient in some ways but it is nowhere near as fast as supercomputers running artificial intelligence. We may (at the moment) be able to ask smarter, more insightful and imaginative questions, but at a maximum rate of about 1,000 per second (one synaptic transmission per millisecond), which is 10 million times slower than a computer. A computer can pose 10 billion questions in one second and process the answers just as fast.
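That last comparison is back-of-envelope arithmetic, and it is easy to check. A minimal sketch, using the round numbers quoted above (illustrative figures, not measurements):

```python
# Rough speed comparison between a human brain and a computer,
# using the round numbers quoted in the paragraph above.
human_rate = 1_000              # ~1,000 "questions" per second (one synaptic transmission per millisecond)
computer_rate = 10_000_000_000  # ~10 billion operations per second

speedup = computer_rate // human_rate
print(f"The computer is roughly {speedup:,} times faster")
```

Dividing ten billion by one thousand gives the 10-million-fold gap the paragraph describes.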
“When someone points at the moon only the fool looks at the finger” Old Chinese Proverb.
Since I first came across this old proverb decades ago it has stuck in my mind. It resurfaced when I was thinking about social media and sparked my third realisation. Like many people I was greatly affected by Netflix’s ‘The Social Dilemma’, so much so that I wrote an eBook about it. In that eBook I touch on the idea of unfettered AI possibly being the beginning of the end of life as we know it, but the main idea I put forward is that AI-powered social media is the enemy of the kind of Socratic debate we desperately need to address the world’s challenges. The solutions I put forward are very simple: a) we need to create the means for people to own and transact, on their own terms and for their own benefit, their personal data; and b) there has to be regulation to force social media to be subscription-based and not dependent on advertising – if you don’t pay for the product, you are the product. Notwithstanding, my realisation was that if AI were used primarily for commercial purposes by organisations impervious to oversight, governance and, where necessary, regulation, then the finger would not be pointing at the moon, it would be pointing at existential jeopardy. In other words, the unintended negative consequences of AI-driven social media were the canary in the cage, and it has just fallen off its perch.
In times of trouble, turn to the BBC (Radio 4 to be precise). In 2021 Stuart Russell delivered the Reith Lectures and his subject was AI and how to get it right. Russell is Professor of Computer Science at Berkeley and an expert in AI; I urge everyone to listen to his series of lectures. It reassured me somewhat, but like most people, when you feel worried but powerless you gravitate to any wise person who says it will be, or just might be, OK. Having read the Open Letter I have been jolted out of my fragile sense of security.
I am determined to end on something positive but I must first join some more dots that I see. The transgender debate is toxic; it is nigh on impossible to make any kind of comment or observation without being branded a ‘transphobe’, but I will try nonetheless. Some people have commented that gender fluidity has tended to surface towards the end of a particular civilisation or empire and have found examples of this (possibly with selection bias) going back thousands of years. Who knows whether gender fluidity presages the decline and fall of empires, but for sure it is not a new cultural phenomenon. I would simply make the obvious observation that if we are heading for a world where AI machines and robots increasingly replace humans, then gender is irrelevant, even biological gender aka sex, because biology is irrelevant.
Right now I do not know whether to be more concerned about AI or geopolitics or climate change or another pandemic (zoonotic or man-made). We live in troubling times, and for the most part I choose to worry more about whether Steve Borthwick can turn around the England rugby team in time to be competitive at the forthcoming World Cup. (If I had time I would explain that sport is honestly something that gives me faith in humanity; it shows that at our best we can embrace diversity and compete while still remaining friends.)
But back to geopolitics: it is hard to ignore the threat of a global confrontation of ideologies. On one side you have China and Russia (and Iran, Saudi Arabia, among a few others), who reject democracy in favour of a totalitarian and repressive form of government. Even if the more liberal democratic nations come together to respond responsibly to the challenge of AI, can we assume the same will be true of totalitarian states? China has very deliberately amassed the tools to control its society through the control of all its citizens’ data; it is hardly likely to hold back on the most advanced use of AI to further this aim and enable it to establish global hegemony. Anyone who doubts this should just read ‘The Great China Plan’ – they pretty much spell it out. They invented the idea of the ‘kowtow’ and they intend to bring it back.
If the West takes a six-month moratorium, will China and Russia do the same, or are we just handing them a lead at the most crucial time?
I said I would end on something positive, and it is this. As social apes we are only meant to be able to live in small groups of 100 or fewer. At the start of the first millennium the world population was about 100 million. The biggest city was Alexandria with about 1 million people (similar to Rome 200 years later), but the few other cities that existed were much smaller than that, fewer than 100 thousand; most people lived in villages. By 1100 the world population was somewhere between 300 and 400 million, and cities had not grown in size or number. By the 1940s the world population had exploded to over 2 billion; today it is 7.9 billion, there are over 500 cities with more than a million people, and 31 megacities with 10 million or so. Fewer than half of us live in rural villages; most of us live in cities. How is this possible? Technology.
Technology, including AI, perhaps especially AI, has the power to solve more problems than it creates. If we look at some of the most troubling and intractable problems we face – climate change, health, food and water poverty – it might be that only with the power of AI can we hope to address them. As the Open Letter ends:
“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”
As for China, history shows repressive societies always fail. Forget Orwell’s ‘1984’ and its depressing ending, where Winston Smith accepts his death in service of ‘The Party’ while selling everyone out. Read Karl Popper’s ‘The Open Society and Its Enemies’, written at the end of World War II: liberal democracies will always win (eventually) because they champion peaceful progress, and that is what most of us want and, ironically, will fight for.
In China there is no doubt the driving force behind AI is to control people and suppress opposition to the government (as we saw in Hong Kong), in support of totalitarianism. In the West the driving force is commercial, and in a democracy that can be controlled. It does, however, require international collaboration to create effective regulation and accountability.
While writing this post I had BBC Radio 4 on in the background, and this very subject, the Open Letter and its ramifications, was being debated. The conclusion was that it was welcome and timely, and that it should and would engage democratic debate. Here’s hoping – at least some of the airplane’s crew look calm.
Edited version of the Open Letter
Pause Giant AI Experiments: An Open Letter
Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control…
…We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable…
…Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.