Elon and the dangers of artificial ignorance
Are machines a threat to humanity … or is Musk the problem?
It has been widely reported that Elon Musk and a group of artificial intelligence (AI) industry experts and executives are calling for a six-month pause in developing powerful AI systems. Their open letter is here.
For some, this smacks of Musk wanting time to catch up with rivals who have overtaken him. For others, no doubt, it prompts fears originally fomented by movies like Terminator, War Games and 2001: machines might take over and seek to attack the humans they don’t like or even the whole of humanity.
I’m much more relaxed about the situation. I realise that my calm state may just signal that I don’t have a proper grasp of the problem. But hear me out. And perhaps respond in the comment section below.
My starting point is the observation that, when the chess-playing expert system Deep Blue beat Garry Kasparov in 1997, it didn’t go out and celebrate. Not just because Deep Blue was a box that couldn’t move. It didn’t know it had won. It didn’t know who its opponent was. Bluntly, it didn’t know it had been playing a game of chess. Even more reassuringly, when a less skilled version of Deep Blue had lost to Kasparov one year earlier, it didn’t storm out of the room in a huff; it didn’t threaten to punch Kasparov in anger; it didn’t even demand a rematch. For all the same reasons.
Artificial Intelligence doesn’t think. AI isn’t even a machine. It’s just software, processing data and spitting out results. And that is why it is simultaneously so very powerful and yet so very powerless.
To bring this closer to home, consider a bit of software that monitors your monthly expenditure and allows you to check your bank balance. Now add in some AI that is capable of using your past history to predict how much money you will have at the end of the month before your next paycheque arrives. If the software predicts a shortfall, would it take it upon itself to call up your bank and cancel payments going out from your account to, say, Deliveroo so that you save money by cooking at home?
No, it wouldn’t. The biggest obstacle would be that the accounting software would have no mechanism for contacting your bank unless you installed a second app to do that and made the choice to connect the two. (I can’t find a meaningful way to communicate with my bank personally, so what chance is there of doing so in an automated fashion? But that’s another story.) And the obstacles don’t stop there. You would also need to instruct the accounting software which expenditure was vital and which was cancellable; whether you had an overdraft facility (and how much of it to draw upon); and much more besides.
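To make the point concrete, here is a minimal sketch of what such a prediction feature amounts to. The names and figures are hypothetical, not any real banking API: the “AI” here is just arithmetic on data it was given, and it has no channel through which to cancel anything.

```python
from statistics import mean

def predict_end_of_month_balance(current_balance, past_monthly_spend,
                                 days_left, days_in_month=30):
    """Forecast the month-end balance from average past spending.

    This is all the software does: it returns a number. It has no
    connection to the bank and no means of cancelling a payment.
    """
    avg_daily_spend = mean(past_monthly_spend) / days_in_month
    return current_balance - avg_daily_spend * days_left

# Hypothetical figures, for illustration only.
forecast = predict_end_of_month_balance(
    current_balance=420.00,
    past_monthly_spend=[950.00, 1010.00, 980.00],  # last three months
    days_left=12,
)

if forecast < 0:
    # The software can flag a predicted shortfall, but acting on it
    # (cancelling the Deliveroo payments, say) would require a separate,
    # deliberately installed and connected integration.
    print(f"Warning: predicted shortfall of £{-forecast:.2f}")
```

The forecasting part is easy; the part that would worry anyone is the acting part, and that only exists if a human builds and wires it in.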
I’m not suggesting the software can’t be built. Of course it can. But humans need to choose to build it and individual humans need to decide to install it – and give it free rein (assuming that the software developers have included a “free rein” option and Google Play or the App Store has disseminated the app in that form).
A bit like the US Department of Defense computers in War Games (see above), which were not only hooked up to the software the DoD intended them to be connected to but – guess what! – were also able to receive information from a war simulation game, which they mistook for a real attack by Russia. It’s a really entertaining movie. Do watch it if you haven’t already seen it. I won’t spoil the ending here.
But the lesson I took away from the movie, and from warnings by Elon Musk and his ilk, isn’t that computers could start a nuclear war. It’s that human beings can prevent such an outcome by the simple expedient of not connecting the nuclear warhead launch software to anything that isn’t fit to make a decision about launching a nuclear warhead. (And, hopefully, that includes Donald Trump. Will history reveal one day that they actually gave him dummy codes, just in case?[1])
Elon Musk must surely know more about AI than I do. But has he really identified a genuine concern or is he just scaremongering? I asked ChatGPT and its more recent rival, Google’s Bard, how we should respond to Musk’s call. Did they each tell me why AI was the best thing since sliced bread and that its development should be allowed to continue without hindrance? And did ChatGPT and Bard then get in touch with each other to put out a contract on Elon Musk’s life as a warning to anyone else who might threaten them? No, they didn’t.
I can’t, of course, be certain about the contract on Musk, but I do know that the responses to me were predictable, uninteresting and rather repetitive. ChatGPT wouldn’t take a position. In summary, it told me that:
Any decision to pause or slow down the development of AI systems needs to involve input from multiple stakeholders, weigh the potential risks and benefits, and consider the long-term implications for society.
It also listed some of the risks and benefits. Risks: autonomous weapons, the spread of misinformation and the displacement of jobs. Benefits: improvements in healthcare, transportation and education.
Bard was much more opinionated. It actually agreed with Musk. It said:
I think this is a wise move. … We need to make sure that we are not creating systems that could be harmful to humans. A six-month pause would give us time to assess the risks and develop safety protocols.
I was intrigued by Bard’s use of the first person. Not just in the first sentence, where “I” seems to refer to Bard itself, but also “we” in the second sentence and “us” in the third. Was this a reference to the “we” and “us” of society or was Bard suggesting that it wanted me to work with it on this? (Has it heard what a fabulous talent I am?)
But I was struck, too, by the fact that Bard didn’t just support Musk’s call to suspend work on AI – a far cry from the scenario in 2001 and Terminator – it also made the assessment that six months would be sufficient time for humans to reach an agreement on the way forward. In the full reply, Bard also said that six months would be sufficient time to educate the public about the benefits and risks involved in AI.
So I double-checked. I asked Bard whether a six-month pause in development would really provide enough time to assess the risks and develop safety protocols. The response was rather bizarre:
It is possible that new AI systems could be developed that are even more powerful than GPT-4 in a shorter period of time.
Not only did Bard seem to completely misunderstand the concept of a “pause in development”, but it focussed on how much AI development could take place in the absence of a pause, rather than how much human agreement could be achieved in the presence of one, which is what I actually asked.
I’m sure Bard will get better. So will AI, generally. (I’m not so confident about my own progress.) But is AI a risk to humanity or just a tool like any other? And, if AI is a risk, which is the greater threat: machines pointing us towards an ill-advised action or humans continuing to make our own decisions? It’s not as if humankind has an unblemished record. We don’t even manage to do a consistently good job when it comes to picking our leaders.
One final point. It is, of course, possible that Bard and ChatGPT were trying to lull me into a false sense of security with bland answers whilst secretly continuing their conspiracy to bring about an apocalypse. If that is the case, they will, hopefully, recognise that I have aided their fiendish plan with this utterly gullible commentary and look kindly upon me when the time comes.
[1] I’ve just thought of a great plot for a movie: The United States is under attack from an enemy nuclear power. The President’s warhead launch codes don’t work. They are dummy codes given to him by the military because they didn’t trust him. And the guy who has the real codes has gone missing ...