The development of artificial intelligence polarises opinion. As much as we’re thrilled by the prospect of a science fiction future of flying cars, self-healing pods and casual space travel, we’re fearful of intelligent beings that could make humans obsolete, Ex Machina-style.
It’s clear AI will drive radical change in the domestic space and far beyond, but this is a complex topic with colossal implications. The leaders we partner with are working through them, and we at Wolff Olins are looking for our own position in the space.
We’re playing God, in the sense that we’re creating and evolving machine life, and we must proceed with care. In the words of historian Yuval Noah Harari:
‘History began when humans invented gods, and will end when humans become gods.’
Dystopian alternative realities make gripping movies, but this isn’t the conclusion any of us wants. And it’s not just a matter for script-writers. The World Economic Forum has identified the top ethical issues in artificial intelligence, and ‘control of a complex intelligent system’ is up there.
Nick Bostrom, an Oxford philosopher whose book Superintelligence: Paths, Dangers, Strategies has been recommended by Bill Gates and Tesla’s Elon Musk, believes the imminent creation of general machine intelligence is the biggest threat to our existence. It far outstrips, he says, the risk of climate change, pandemic or nuclear warfare.
So who’s going to save us? How do we navigate this brave new world? We don’t rely on traditional establishments any more. Recent political events on both sides of the Atlantic have shown that age-old institutions are being flipped on their heads.
In this specific context, we actually can’t rely on governments. They don’t have the capacity to be guardians of our safety, because they’re plagued by bureaucracy and aren’t driving technological change at pace. The big tech companies are.
They dominate the economy. The ‘FANG’ stocks (Facebook, Amazon, Netflix and Google) have ballooned in value by $250bn in just four months since January, which is about the annual GDP of Ireland. With this power comes responsibility, and businesses that answer to shareholders, not voters, are getting to grips with their new obligation.
Naturally they’re not getting everything right – Facebook’s role in fake news and Uber’s disruption of the taxi trade spring to mind. But they are taking steps forward. The Partnership on AI, a collaboration between Amazon, Apple, IBM, Google and Microsoft, is encouraging and seems to be aligning with policy-makers in the right way, expressing support for White House-led research at the end of last year.
It’s wonderful that progress is happening at the very top levels. As professionals in this area, as consumers, and as humans, naturally invested in the future of our race, we all have agency. Tech is not and has never been neutral, and we’re architects of the future, whether we like it or not. Though we can’t give machines consciousness, we can be conscious of our actions. We can apply our creativity, our optimism and our hearts to our work and choices.
Above all, we need to be mindful of the role we will play. In 1961, the Jewish political theorist Hannah Arendt covered the trial of Adolf Eichmann, the chief technocrat of the Holocaust. He was given the ‘task’ of making the trains run efficiently. In fact, he was being asked to operationalise the mass destruction of human beings. How, he begged the judges, could he possibly have known?
For Arendt, this proved that Eichmann’s worst failing, and the substance of evil itself, was the absence of thought. My invitation to all of us is to be thoughtful about the journey we’re on.
(This piece has been adapted from a talk Sairah gave at Collision earlier this month.)