Most of us will have heard of the Doomsday Clock, which measures how close humanity is to global nuclear disaster – and which currently stands at a terrifying 90 seconds to midnight.
Now IMD Business School in Lausanne has launched the AI Safety Clock. Its job is to alert the world to the potential dangers of uncontrolled artificial general intelligence, based on its degree of sophistication, its ability to function independently, and its ability to integrate with and adapt to real-world systems without human oversight.
Don’t panic – the hands of this AI Safety Clock are not ticking dangerously close to midnight – they’re sitting at a more comfortable 29 minutes. But what would happen if it were to strike midnight? Apocalypse or not? Nobody knows.
Elon Musk believes there is a “20% to 30%” risk of an “existential AI catastrophe” for humanity (and seems intent on making it happen). Experts are raising the alarm, as IMD reports: “Geoff Hinton assigns a 10% chance of human extinction within 30 years without strict regulation, Dario Amodei and Yoshua Bengio estimate 10-15% and 10%, respectively, for a civilisational catastrophe, while Ray Kurzweil predicts human-level intelligence by 2029.” The risk is escalating, and far too little is being done to bring it under control. Yikes!
The AI Safety Clock, like any clock, is a measuring instrument, and a measuring instrument must be precisely regulated rather than rely solely on forecasts. For its hands to advance at an accurately calculated rate, it needs some form of regulating mechanism, and providing one is IMD’s objective. Exactly as in a watch movement, stored energy must not be released unchecked – or the hands would whirl round at breakneck speed – but literally “brought into step”.
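To make the watchmaking analogy concrete, here is a minimal sketch in Python – a toy of my own, not anything IMD has published. The Escapement class, its step size and its starting reading of 29 minutes are all invented for illustration; the point is simply that an escapement releases energy in fixed, regulated steps rather than all at once:

```python
class Escapement:
    """Toy model of a watch escapement: incoming energy is released
    in fixed, discrete steps rather than all at once."""

    def __init__(self, step_minutes: float = 1.0, reading: float = 29.0):
        self.step = step_minutes   # maximum advance of the hand per tick
        self.reading = reading     # minutes remaining before midnight

    def tick(self, impulse: float) -> float:
        # However large the incoming impulse (a surge of new risk),
        # the hand moves by at most one regulated step toward midnight.
        advance = min(max(impulse, 0.0), self.step)
        self.reading = max(self.reading - advance, 0.0)
        return self.reading

clock = Escapement()
print(clock.tick(impulse=10.0))  # 28.0, not 19.0: the surge is "brought into step"
```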
From its current “time” of 29 minutes to midnight, there is a very real possibility that IMD’s clock will advance much faster as we deploy systems capable of planning, making decisions or taking action autonomously, without human intervention (as certain recent developments in weaponry illustrate). Hence the critical role of regulation. As IMD states, the AI Safety Clock “evaluates global and national regulatory measures (…) to identify areas where regulations successfully mitigate risks and where further efforts are required”, with the aim of alerting policy-makers so that the clock’s hands never spin out of control.
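Again purely for illustration – the factor names, weights and formula below are my own guesses, not IMD’s published methodology – the clock’s reading can be imagined as a weighted score of the three risk dimensions mentioned earlier (sophistication, autonomy, real-world integration), offset by a regulatory mitigation factor:

```python
def minutes_to_midnight(sophistication: float,
                        autonomy: float,
                        integration: float,
                        regulation: float) -> float:
    """Hypothetical reading: three risk factors (each 0..1) pushed
    back by a regulatory mitigation factor (0..1). Not IMD's formula."""
    risk = 0.4 * sophistication + 0.3 * autonomy + 0.3 * integration
    residual = risk * (1.0 - regulation)      # regulation damps the raw risk
    return round(60.0 * (1.0 - residual), 1)  # a full hour = perfectly safe

# Capable, partly autonomous, loosely regulated systems:
print(minutes_to_midnight(0.7, 0.5, 0.5, 0.2))  # 32.2 minutes left
```

In this toy model, raising the regulation factor from 0.2 to 0.6 leaves the same systems with 46.1 minutes on the clock – which is precisely the lever IMD wants policy-makers to pull.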
Stupid question: could we not regulate the AI Safety Clock using AI itself? Having surpassed human intelligence, perhaps the AI would spare us from extinction by committing suicide? Is artificial empathy a thing? I have my doubts.