Alex Garland is no stranger to science fiction. As the writer of 28 Days Later and Sunshine, he's given us his own spin on the zombie apocalypse and a last-ditch effort to save the Earth (by nuking the sun!). Now, with his directorial debut, Ex Machina, Garland is taking on artificial intelligence -- and in the process, he shows the limits of the Turing test, the best-known benchmark for whether a machine can pass as human in conversation, and one often mistaken for a test of sentience. The film centers on a young programmer who's sent to his genius CEO's isolated compound to test his latest invention: an artificially intelligent robot. Things, as you can imagine, don't go as planned.
I spoke to Garland ahead of the film's US premiere at the SXSW festival. And, as you'll quickly learn, he's got a lot to say about AI and the nature of consciousness. Ex Machina hits theaters on April 10.
What was the creative spark for the story? This one seemed a little different from most AI films -- more of an exploration of how you could love an AI.
For me, it goes way back to childhood ... and having very, very simple home computers.
What was your first home computer?
I think the first actual computer [for me] was the ZX81, but the one I actually got to grips with in any way was the ZX Spectrum, which was a follow-up to that. ... It had BASIC commands sort of burned into the keyboard that you could access through shortcut keys.
So I would write very, very simple ... "hello world" kind of programs, where the computer would maybe have the ability to answer three or four questions. ... It would give you this feeling as a kid of bolting up and thinking, this is suddenly feeling sentient. Of course it's not sentient -- you're very aware it's not, because you've programmed it. But nonetheless it gave you that sort of funny electric feeling. I think that always stayed with me.
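[For the curious: what Garland describes is the classic canned-response program. Below is a minimal sketch in modern Python rather than Spectrum BASIC; the questions and replies are invented for illustration.]

# A toy "answer a few questions" program, in the spirit of the
# simple BASIC experiments Garland describes as a kid.
# The questions and canned answers here are invented for illustration.
ANSWERS = {
    "what is your name": "I am Spectrum.",
    "how are you": "I am very well, thank you.",
    "are you alive": "That is for you to decide.",
}

def reply(question):
    # Normalize the input, then look up a canned answer.
    key = question.strip().lower().rstrip("?")
    return ANSWERS.get(key, "I do not understand the question.")

while True:
    question = input("> ")
    if question.strip().lower() in ("quit", "exit"):
        break
    print(reply(question))

It's exactly this lookup-table shallowness that makes the "suddenly feeling sentient" moment an illusion: the program only ever matches strings, which is why, as Garland says, you're very aware it's not sentient the moment you remember you programmed it.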
Then years and years later I got involved in a really long argument ... with a friend of mine whose principal area of interest was neuroscience. He's affiliated with a school of thought that basically says, "Machines are never going to be sentient in the way we are." There are very serious thinkers attached to that, like Roger Penrose [a renowned physicist and philosopher]. And I instinctively disagreed with this, but I didn't have the sort of armory to disagree with it on his terms, so I started reading as much as I could.
Not to get too boring, but our sticking point was over qualia [a philosophical term for the subjective qualities of conscious experience]. ... I had a kind of instinctive, and then subsequently, I'd say, rationalized sense that qualia might not even exist.
You mean even in human language and thought?
Yeah, yeah. ... In reading about this, I came across a book by [cognitive roboticist] Murray Shanahan about consciousness and embodiment. Within it there's a really beautiful idea, which is an argument against metaphysics [the branch of philosophy concerned with the fundamental nature of reality] in terms of the mind. And in a way, if you can get rid of metaphysics as a problem, it allows for an artificial intelligence -- or a strong AI. ... And I have to say, as I was reading that book -- and as a layman, it's fucking difficult for me to read this stuff; I struggled like crazy -- but it was while reading it that the idea for this movie appeared in my head.
I was in preproduction on Dredd; I was spending a lot of time in South Africa, on planes, and I had a lot of time to read. ... So in the three months before we shot Dredd, that's basically where it came together. And after I wrote the script, I sent it to Murray Shanahan and said, you don't know who I am, but I want you to really look at this script. I wanted the stuff within it to be reasonable.
"There's a lot of legitimate reasons to be scared of superintelligence."
So what did he think?
There was a bit of code in there where he said, "I'm not sure it's compiled correctly." In fact, some of the code that appears on screen in the film, if you were to write it out, you'd find it leads you to the ISBN of his book.
What other sorts of AI books have you read?
I pretty much would read everything I could. I tried to read people like Penrose, who were arguing against what I instinctively believed.
That's a good way to solidify your argument.
I don't want to dignify it from my point of view, because I can't stress enough I'm a real layman. So I can understand the principles of an argument, but when it comes to the actuality of what people like Demis Hassabis at DeepMind [a British AI company owned by Google] are actually doing, I really don't understand it. That's part of what interests me. It's the gap between people who do understand what they're doing, and the way information disseminates, and the confusion that exists between those spaces.
The thing I began to get fixated on -- this is separate from the film, but related -- was confronting my own intellectual limitations. And thinking, here I am; I'm not dumb, I'm doing what I can, but I'm also running into a brick wall of my own intellectual capacity. And it sort of made me a bit nervous in some respects, because I was thinking, that means a lot of this stuff, for a lot of people, and I would include myself in that part of the Venn diagram, becomes articles of faith.
That's troubling where there are ethical consequences to the things that are happening, because to get through the ethics, you have to understand it.
What do you think about the rise of AI and superintelligence? Do you think it's something we need to worry about?
I think there's a lot of confusion around this. I mean, a few months ago it was announced that a machine passed the Turing test in a way that was completely ridiculous; it didn't stand up to any scrutiny at all. But the fact that it was reported as widely as it was tells you what kind of problem there is in terms of what people are actually doing, and what they're perceived to be doing.
There are a lot of legitimate reasons to be scared of superintelligence; of course there are. And exactly as there are reasons to be scared of the implications of nuclear power -- hence the [Robert] Oppenheimer analogy that exists throughout the film. Both of them are potentially dangerous. From my point of view, I would see myself as being sort of green in terms of my outlook, but I'm also in favor of nuclear power. So I would see superintelligences as being analogous to that. They could be problematic if they're controlling drones and making kill decisions over humans; I can completely see that.
I think people probably get a bit confused about them, and maybe a film like Ex Machina in some respects doesn't help, inasmuch as we think they might be human-like. But they probably won't be, in terms of how they see the world, and the way they interact with it and each other.
It's completely alien to us.
Yeah, we don't really know, because they're not here yet. And as a father, it's a bit like trying to conceptualize your child before they're born. You can't actually do it. It's an abstract thing until it arrives. And then, of course, you can get your head around it. ... If you get an AI that's human-like, and it has something similar to our consciousness embedded within it, there's nothing within that that I find necessarily frightening.
"Being clear about these things is important. Otherwise, you'll be talking about the sentience of Siri. And Siri doesn't have any fucking sentience."
A human intelligence is capable of being fantastically dangerous if it's given the life path and powers of [Joseph] Stalin or Pol Pot. And I could say if you provided an AI with too much power, you might get serious problems. But what that would be is an argument in favor of checks and balances, rather than an argument against AIs.
I think it's helpful, and actually sensible, when talking about this stuff to be clear what it is one's talking about. AI's a super-broad term. I play video games with AIs. And in the same way, the reporting of the passing of the Turing test misunderstood A) whether it had been passed, B) what the Turing test was, and C) whether the Turing test is actually an indicator of consciousness in a machine, or whether it's just an indicator that the machine passed the Turing test. And I would say it's the latter. ... It doesn't tell you whether you're sentient.
Being clear about these things is important. Otherwise, we're very quick to conflate stuff, and suddenly you'll be talking about the sentience of Siri. And Siri doesn't have any fucking sentience. AI is probably too broad a term to be useful at the moment.
This interview has been condensed and edited.
[Photo credits: DNA Films/Film 4 (Ex Machina set); Jean-Christophe Verhaegen/AFP/Getty Images (Alex Garland)]