There’s a lot of concern among Silicon Valley types about Artificial Intelligence (AI) taking over the world. In fact, for some of these people, the threat of imminent AI dominance makes global poverty look like a rounding error. It’ll be like The Terminator, but if Apple are involved, the robots will be more elegantly designed (more Cate Blanchett than Arnold Schwarzenegger).
However, most of these technologists also want AI to take over the world. Not in the sense of annihilating all humans – just in the sense of solving all our problems. At the moment, AI and Machine Learning (ML) are mostly helping Netflix to serve slightly more entertaining films to us or permitting us to ask Alexa to order a fidget spinner*. However, in the future, AI will be driving our cars, doing our jobs, and probably drinking our tea when our backs are turned. AI will do everything. AI will solve everything. Conflict? There’s a bot for that. Death? Sure, we’ll solve for that…
Now this is all awesome, yeah? Well, not always. ML is dependent on this stuff called “training data” – basically, data that allows the ML system to identify patterns, which it then models its decision-making on. A lot of that data comes from people. And people can be a**holes. Which means that algorithms can be a**holes too. Racist a**holes, in fact. So we may be putting our fates in the hands of entities modelled on ourselves. Does that fill you with confidence?
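You can see the problem in miniature. The sketch below is entirely invented – toy data, a deliberately dumb word-counting “classifier”, and a fictional dialect marker (“yo”) standing in for any real-world group signal – but the mechanism is the real one: if human labellers flag one group’s comments more often, the model dutifully learns that the group marker itself predicts the bad label.

```python
# Toy illustration (hypothetical data, not a real system): a trivial
# word-frequency "classifier" trained on biased labels inherits the bias.
from collections import Counter

# Hypothetical training data: imagine human moderators flagged comments,
# and their flags were biased against a (fictional) dialect marker, "yo".
training = [
    ("yo this movie was great", "toxic"),   # biased label
    ("yo loved the ending", "toxic"),       # biased label
    ("this movie was great", "ok"),
    ("loved the ending", "ok"),
    ("brilliant soundtrack", "ok"),
    ("terrible acting and plot", "toxic"),
    ("what a waste of time", "toxic"),
]

# "Training": count how often each word appears under each label.
counts = {"toxic": Counter(), "ok": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    # Score each label by summed word counts; the highest score wins.
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

# A perfectly pleasant comment gets flagged purely because of "yo".
print(classify("yo what a lovely film"))  # → "toxic"
print(classify("brilliant soundtrack"))   # → "ok"
```

Garbage in, garbage out: the model hasn’t learned toxicity, it has learned the labellers’ prejudice. Real systems are vastly more sophisticated, but the failure mode is the same shape.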
On the other hand, we’ll get over that, right? There’s an app that fixes racism, isn’t there?
There is a default assumption among technologists that all problems can be solved and that this solution is generally a cool piece of technology. This is not necessarily true. Our challenge is that despite all the neat gadgets, we are basically apes. We have ape-ish desires (for food, sex, love, power, a tribe to be a part of) that we are able to channel into complex, baroque civilisational structures. But we are nevertheless still apes. We solve one problem and create another (we die of hunger much less than we used to but now many of us are obese). Now that we have wonderful communication technologies, we use them to argue with each other in ever more vociferous ways**.
So what if AI doesn’t take over the world? What if it only partially solves our problems but leaves us mired in the challenges of politics, economic inequality, and conflict that we apes have always had? What if it only makes us marginally more efficient a**holes? We will have to do what we’ve always done and sort this mess out ourselves…
*That reference is going to date so fast.
**If you want to see a compelling argument in favour of human extinction then simply visit Twitter.
This is part of the Into The Maelstrom series.