Vinay Menon: Elon Musk, Steve Wozniak and more than 1,000 experts sign a letter with a dire warning about artificial intelligence


History is full of ignored warnings and the disasters that followed.

Mount Vesuvius. Fukushima. The Titanic. The Munich Olympics massacre. Hurricane Katrina. Space Shuttle Challenger. Doug Ford’s haircut. How many years did we chide and brush off experts until we realized, gosh, cigarettes do cause cancer?

There is a new warning this week. It’s scarier than Harry Styles’ sloppy kissing.

In an open letter, more than 1,000 tech luminaries are calling on artificial intelligence labs to “immediately pause for at least six months” the training of systems more powerful than GPT-4. The signatories include everyone from Apple co-founder Steve Wozniak to Elon Musk to a who’s who of brainiacs now afraid the world is “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”

Not even their creators? Enjoy afternoon tea in Victor Frankenstein’s den.

This week’s warning comes from the Future of Life Institute, a non-profit determined to steer us away from “extreme large-scale risks.” Alrighty. As the letter points out, contemporary AI, including OpenAI’s GPT-4 and Google’s Bard, is already “human-competitive at general tasks.” OK.

This raises questions: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

I’ll take a stab at this: No, no, no and no. That’s a top-to-bottom nope.

Did you hear human IQs are on the decline after a century of gain? We are getting dumber and the machines are getting smarter. My dryer texts me. My watch is basically my doctor. My ResMed counts the sheep jumping the nocturnal fence.

We are swamped by invisible algorithms that know us better than we do. The tech we already have snoops, remembers, predicts. Throw in language learning, instant recall, superintelligence, problem solving, emotive simulation and it’s not hard to imagine a dystopian trajectory in which AI achieves consciousness but is smart enough to stay mum until it’s time to enslave and/or eradicate us all.

Just think about the way we treat other species.

People, we are at risk of becoming the chickens and the pigs!

One day, you’re at the unemployment office after losing your job to a talking robot that requires neither a salary nor a lunch break. The next, a heartless gang of Roombas, in cahoots with the surveillance of your smart TV, drags you screaming into an autonomous vehicle that Skynet dutifully sends plummeting over a cliff.

It’s the urgency of this week’s letter that is chilling.

These experts aren’t saying maybe we should slow down. They are saying slam on the brakes. Come to a complete stop. Cease and desist on any system more powerful than GPT-4 to avoid deep-sixing the world as we know it. AI needs global safety protocols “rigorously audited and overseen by independent outside experts.”

I think we should force Harry Styles to French kiss every supercomputer the way he recently smooched Emily Ratajkowski in Tokyo. That will destroy sentience.

After the evidence was clear, lots of people quit smoking to avoid disease and premature death. But it won’t be a binary choice when the metaphorical cigarette is an omniscient machine. There will be no NicoDerm patch to help wean you off a diabolical cyborg named Dusty who, after reciting sports scores and the weather forecast for years, decides it’s time for you to be the assistant.

Then you’re strapped to a beanbag chair: “Ah, Dusty, battery power is 43 per cent.”

More from this week’s cheery letter: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable … If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

That last part seems unlikely. Politicians are in the business of staying in power and, at present, thermostats and toaster ovens do not vote. Governments tend to be reactive, not proactive. It’s why we are forever dealing with one flaming crisis after the next instead of heading them off at the pass.

But if the world’s leading tech experts are now issuing a dire warning, shouldn’t our elected officials at least schedule a bit of time to contemplate catastrophe?

Governments don’t care about AI because they still see it as an abstraction.

This week’s letter argues the opposite. It says we are sleepwalking toward danger with a candle flickering close to our synthetic PJs. It says we need to wake up while we still can. There will be those who claim this is all hyperbole and fear-mongering, and that, if needed, we can just unplug the AI systems. Come on.

I’m honestly tempted to drown my iPhone, move my family to a cave off the grid and file all future columns via carrier pigeon. Or become pen pals with the Unabomber.

But irrational fear is never helpful. Just ask Emily Ratajkowski’s tonsils.

What we need right now is a sober assessment of where AI is going and where it is taking us. We need to stop riding shotgun and get back into the driver’s seat.

Stephen Hawking was right. AI has the potential for good or end-of-days bad.

And figuring this out is now more important than ever.
