Killer Robots

This post is taken from the afterword to my short story, The Writing Assistant, which is included in my anthology, Hard to Forgive – 4 Women, 5 Tales. In The Writing Assistant, Alisha Solomons, an ordained minister, is tempted by her writing software, with harrowing consequences. The short piece below speculates on how the gap between fantasy and our everyday reality may be closing faster and less comfortably than we think.

Killer robots!

I do give AI a bad rap in these tales – I use it as a convenient literary bogey-man. But that doesn’t answer the real question, one that stretches from long before Mary Shelley’s Frankenstein till years from now (far in the future, I hope): what is humanity going to do about its technological creations when, if ever, they develop consciousness? Self-awareness? Individuality? Feelings! What will we do if they become like people?

Of course, that’s not going to happen any time soon…

A reassuring thought, given the concerns of Stephen Hawking, Bill Gates, Elon Musk and other luminaries about the ‘existential risk’ posed by AI.

Or is it? Take a look at this report, Evolving Robots Learn to Lie to Each Other.

In 2008, researchers in Lausanne, Switzerland, programmed ‘good’ robots to find and freely share a desirable ‘food’ resource. Within 50 generations their digital offspring had realised that there wasn’t enough to go round, and had learned to conceal their discovery of the ‘food’ by broadcasting false information.

Their digital ‘genome’ was minute, only 264 bits in size.
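For the technically curious, here is a toy sketch in Python of how selection alone can breed that kind of deception. It is nothing like the Lausanne team’s actual code (the population size, payoffs and tiny genome here are all made up for illustration), but it shows the principle: when honest signalling costs an agent its meal, ‘liars’ spread.

```python
import random

GENOME_BITS = 8          # the real robots carried 264 bits; 8 is plenty for a toy
POP_SIZE = 100
GENERATIONS = 50
MUTATION_RATE = 0.01

def fitness(genome):
    """Payoff for one foraging round: honest signallers attract rivals and must share."""
    honest = genome[0]                      # bit 0: broadcast truthfully on finding 'food'
    payoff = 10.0                           # value of the food patch
    if honest:
        payoff /= random.randint(2, 5)      # rivals arrive and the patch gets split
    return payoff + 0.1 * sum(genome[1:])   # remaining bits: generic foraging competence

def mutate(genome):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

# Start with a population of entirely 'good', honest signallers.
population = [[1] * GENOME_BITS for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]                     # keep the best half
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

liars = sum(1 for genome in population if genome[0] == 0)
print(f"After {GENERATIONS} generations, {liars}/{POP_SIZE} agents suppress the food signal")
```

Run it a few times: once a single ‘liar’ appears by mutation, it out-eats the honest majority and takes over.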

Well (you’ll probably object), learning to flash the wrong colour light when you find ‘food’ hardly amounts to consciousness. They didn’t evolve into persons.

Uh-huh… But their behaviour displayed some of the baseline characteristics of sentience: they were motivated, they were selfish (they had a ‘sense of self’), and they modelled their own behaviour and that of their competitors in order to achieve winning strategies.

OK, that’s a huge assumption: their behaviour was probably more akin to inherited instinct (like insect behaviours) than to primate social modelling and self-awareness. But can we be sure? Nobody knew then, and nobody knows now, what’s actually going on inside those little AI neural networks. The programmers know how the system works, but they don’t know the actual chain of digital decisions the network makes. So how can we be sure that the little robot isn’t learning to think like us?

Or, alternatively, how to think like something completely alien?

And, to speculate further, how do we know they don’t have ‘feelings’? Contemporary (and classic) thought about emotion recognises motivation as the prime mover of feelings. Humans have an elaborate plumbing system that squirts hormones in response to motivational challenge, and the result usually occupies centre stage in our meatware-mediated consciousness. But it’s just a mechanism. Without hormones, a thinking entity will still have motivations (like these little robots) and will still ‘feel’ them – perhaps strongly. Just in a very different way from other animals and mammals, and from us.

And that is scary.

We understand each other. We even (imperfectly) understand psychopathology – as in Hitler, or Idi Amin.

What about reptiles? Say we’re chatting to a lizard over coffee (this is SF, OK) – can we understand its reptile emotions?

And what about an intelligence far quicker than ours, and of much greater capacity, that has motivations and feelings that have no biological underpinning at all?

OK, I know my speculations are quick and dirty – I’m no scientist – but my common-sensometer is red-lining. You make something that is selfish and knows how to lie, and you let it loose in the world. Sure, there are safeguards coded in, but…

Chernobyl!

Just one word. One technology – nuclear power.

Sure, we know that Covid-19 was not a human lab error. Or a deliberate bio-weapon.

Do we?

The point is – never mind what technology – those who promise safeguards…

Can’t really promise anything.

Serious stuff. And, getting back to AI, I suggest it’s already real, and getting realler. We (will) interact with it every second of the day over our computers and phones. And AR glasses, and whatever internet and communications technology the future brings.

I think Hawking, Gates, Musk, et al. are right to be very, very concerned.

P.S.

And what does all this burgeoning technology say to our conception of the soul?

We’d better start praying!

╬╬╬

Robot image by Dmitry Abramov from Pixabay