A chilling open letter, signed by more than 1,000 computer scientists and technology experts and published on March 28, warned of “profound risks to society” posed by artificial intelligence (AI). Their admonition came months after ChatGPT shocked the world with its verbal aptitude, and just weeks after GPT-4 frightened the experts. The signatories called for a six-month pause on training AI systems more powerful than GPT-4, to allow the industry to devise safety controls. They claimed artificial intelligence labs were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.” This was widely reported, but a similar warning issued in 2015 by the Future of Life Institute was ignored. This time, Eliezer Yudkowsky, a founder of the field of aligning artificial general intelligence, refused to sign the letter because, he argued, it did not go far enough; drastic action is needed immediately: “We need to shut it all down. The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
Some label Yudkowsky an alarmist or an “AI apocalyptic,” but many, myself included, agree that the unbridled development of AI is a graver threat to humanity than the proliferation of nuclear weapons or the next pandemic. This is because cyber-based super-intelligence, by definition, will surpass human intelligence within a generation, become unmanageable, replace most workers, and be put to nefarious purposes. When this happens, mankind will find itself in the position of Dr. Frankenstein in Mary Shelley’s classic 19th-century novel, who created a monster in his own likeness, marveled that it was alive, then realized it was thoroughly uncontrollable.
ChatGPT is a wonder that can answer almost any question within seconds, in sentences or in rhyme, by drawing on the vast body of text it was trained on and formulating readable answers. It can also write code, which means software that begets more software. This “thinking machine” already scores higher than 90 percent of humans on the SAT, outperforms most would-be lawyers on the LSAT, and aces the Uniform Bar Exam. This “reasoning machine” can search, pattern-match, and deduce, but it cannot feel. Within a few years, AI will upend the world by replacing many of those who write or speak words or code for a living: mathematicians, artists, analysts, doctors, teachers, and engineers. Artificial intelligence will no longer be a tool. It will be the toolmaker.
Even Sam Altman, CEO of OpenAI, the company that developed these chatbots, admitted that he has concerns: “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks…but it could be the greatest technology humanity has yet developed.”
But Yudkowsky goes further. He heads research at the Machine Intelligence Research Institute in Berkeley, California, and has explained the dangers: “To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world, you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to post-biological molecular manufacturing.”
Put another way, AI will be able to reproduce.
Currently, development is in the hands of the private sector, which is concerned with profits, not safety. OpenAI, the creator of these chatbots, began in 2015 as a non-profit research entity co-founded by Elon Musk. But there was a falling-out: Musk departed, and the research organization adopted a for-profit structure that has raised billions to build its technology. Now billions more are being raised by start-ups from venture-capital firms racing to build even more powerful AI “machines.” This is why, after ChatGPT came out, Musk commented at a conference in Dubai that “one of the biggest risks to the future of civilization is AI.”
Unlike Dr. Frankenstein’s creature, these “thinking machines” are not yet self-aware, but only imitators. However, cautioned Yudkowsky, “we do not actually know that. If you can’t be sure whether you’re creating a self-aware AI, this is alarming — not just because of the moral implications of the ‘self-aware’ part — but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.”
To date, no one has “programmed” empathy or “values” into these software creatures, or knows how to do so, which is another reason their development must be halted immediately, he added. “It took more than 60 years from when artificial intelligence was first proposed and studied to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of ‘not killing literally everyone’—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.”
In engineering terms, artificial intelligence is like a bridge built to carry millions of cars for years, but designed and constructed without guardrails, rules, or regulatory oversight. And AI will become smarter than the humans who try to govern and regulate its operations, or who are tasked with monitoring and policing it to ensure it doesn’t go rogue. “We would need to be super-intelligent ourselves,” engineering professor Roman Yampolskiy told Popular Mechanics. “We are only able to speculate [or regulate] using our current level of intelligence.”
Put another way, in a few years it will be too late to stop it.
Yudkowsky concluded: “We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.”
The only solution, he wrote, is to “shut it all down” until controls are in place. Then governments and institutions must create a global AI “non-proliferation treaty.” The nuclear version took roughly 25 years to negotiate, and the world doesn’t have that much time today. Back then, no one could build a nuclear weapon in a garage, but tomorrow’s AI cataclysm is just one mad computer scientist away. Yudkowsky said a multinational effort must track and limit computing power and the development of large-scale systems globally, whether by private entities, governments, or militaries. Further, the agreement must stipulate that members “be willing to destroy a rogue datacenter by airstrike.”
The world is at a crossroads, according to Max Tegmark, President of the Future of Life Institute and a physics professor at the Massachusetts Institute of Technology. “It is unfortunate to frame this as an arms race. This is more of a suicide race. It doesn’t matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny.”