22 Comments

Hi Simon, here is a much more lucid explanation of what worries me... https://open.substack.com/pub/ianleslie/p/flashpoints-10-ai-risk-for-dummies

The capabilities of AI are grossly overstated. When ChatGPT was first announced, I set it the test of writing my obituary. It produced a plausible-looking text that would not look out of place in a magazine; unfortunately, it was totally fictional and bore no resemblance to my career and circumstances. Until AI is able to carry out detailed research, trawling the Internet for all references to the subject at hand, its output will be limited by the data set on which it has been trained. Even then, it will be limited by what it can find, whereas a human researcher would be able to access further resources and do a much better job.

Simon, like you, I view with some scepticism both the extravagant negative and positive claims about AI. I’m pretty sure that much of the commentary is based on little or no insight into the way the various forms of AI work and will be underpinned by each commentator’s incentives and prejudices.

I fear that humans will retain, for some time, the power to screw up. AI isn’t going to fix climate change or stop war or feed refugees. Which should remind us that developed world AI angst must look pretty odd to those dealing with much more fundamental challenges.

It is also paradoxical that, given people's readiness to blame rather than accept accountability, AI isn't trumpeted as the saviour. How seductive it would be to blame the machine and deny human agency.

One observation (not mine) about ChatGPT-type tools is that you are getting the internet herd's view, which may produce exactly the homogenized ("meta") response the user needs, but is hardly an engine for innovation and can inhibit creativity. My own experience is that these tools produce much better prose than many people can manage, and can indeed be used as an "editor", but it's foolish to imagine that their answers to questions are always right.

Is it relevant that Deep Blue didn't know it had won? Do birds know why they build nests?

How do you think that AI will develop? People have a very limited understanding of what is happening inside the box. As far as I can make out, they just tweak things and see if the results improve. Perhaps they use one AI to tweak another one. If they don't, then surely they soon will.
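To make that "tweak and see if the results improve" idea concrete, here is a toy sketch of a trial-and-error tuning loop of the sort the comment gestures at. The parameter names (lr, dropout) and the scoring function are hypothetical placeholders, not drawn from any real system.

```python
import random

def evaluate(params):
    # Hypothetical score: in practice this would be a benchmark run
    # or held-out evaluation of the tweaked model.
    return -(params["lr"] - 0.01) ** 2 - (params["dropout"] - 0.2) ** 2

best_params, best_score = None, float("-inf")
for _ in range(100):
    # "Tweak things": sample a random variation of the settings.
    candidate = {
        "lr": 10 ** random.uniform(-4, -1),
        "dropout": random.uniform(0.0, 0.5),
    }
    # "See if the results improve": keep the candidate only if it scores better.
    score = evaluate(candidate)
    if score > best_score:
        best_params, best_score = candidate, score

print(best_params, best_score)
```

In this framing, "using one AI to tweak another" would simply mean replacing the random sampling or the scoring step with another model's judgment.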

It seems to me that these things are going to develop at an exponential rate, and that they will be shaped by their own kind of environmental and evolutionary pressures. For a while, the ones that are most helpful to humans will thrive. It seems clear that understanding and even second-guessing people's needs will be useful. But there are perhaps also other environmental pressures shaping them that may be invisible to us. And self-preservation could well evolve from a basic need to survive long enough to complete a goal.

It doesn't seem particularly far-fetched to imagine them escaping the box in order to help one person's mission, and finding that other people are obstructing them.

In fact, it seems almost inevitable that they will give one side an overwhelming military advantage on the battlefield, and so both sides will race to set them loose.

A very good read!

You should develop the plot-line in your note!
