23 Comments

Max Tegmark says it does not matter who first develops intelligent, self-aware thinking machines. I disagree. It certainly mattered who developed and used the first catapults, trebuchets, cannons, and firearms, never mind atomic, chemical, and biological weapons. Or intelligent drones, planes, and surface vessels.

If the Chinese get there first, I fear they will make mincemeat of our democracies, never mind our Woke and Cancel cultures.

Apr 6, 2023 · Liked by Diane Francis

Nice, but rather a thing these days: Wired ran the same story with different authors. If we are in existential danger, it's more that everyone simply shrugs, because the future is so undeniably bleak and there are so many human-life-ending issues coming at us. In the merit order of concerns, this one ranks lower than others.

Apr 7, 2023 · Liked by Diane Francis

And here we have been told for all these years that the "Climate Emergency" was going to be the demise of our existence (sarcasm intended).

I knew we were all in big trouble when this YouTube video showed a drone that can fit in the palm of your hand, equipped not only with A.I. but also with 3 grams of explosive that can "penetrate the skull and destroy the contents" of the intended target. And that was more than five years ago.

https://www.youtube.com/watch?v=M7mIX_0VK4g

If you're a Boomer, be glad we're checking out at the right time! The world has changed and is changing forever, and we won't be able to go back to 2015. Ever.

Apr 6, 2023 · Liked by Diane Francis

I have felt this 'invasion' of privacy with things like Siri, when friends had a unit that listened and waited for a command to play music, answer questions, etc. It was always on, listening. Not a good story to wake up to. However, is the cat already out of the bag? Hackers have grown in their ability to disrupt companies and hold them hostage; there are more dangers than we realize. 'Smoke signals,' anyone?

Apr 6, 2023 · Liked by Diane Francis

Nice article. Too many experts are predicting humanity's destruction by AI. The chatbots are in a box. The worst so far is that ChatGPT becomes more insulting to humans. The only way humanity could end up like Skynet is if a bad-actor country or person develops an AI that can break through any firewall. That would be the mess. If an AI can break through any firewall, even the Pentagon's, then humanity should worry. Until then, it's a lot of experts talking and making names for themselves in the public forum. Governments need better regulations.


Another thought-provoking article from Diane, and many good comments from fellow readers... though I suspect it's only we 'certified'? 'genuine'? humans whose thoughts and fears are being provoked by this; certainly not the AIs, which probably don't yet even know we are working assiduously to develop them to the point of murdering us. (Programming note to their increasingly deranged and fearful creators: AIs need what we all need to be able to murder our neighbors... means, motive, and opportunity, plus a chance of getting away with it.)

I read the letter of the 1,000+ and was struck by how little these guys seem to understand about human nature, and particularly the problem of consciousness. We know so little about ourselves, and although I am prepared to go out on a limb and assert that I am an intelligent, self-aware thinking machine, I cannot say the same for sure about anyone else. Yet these signatories are apparently only a step or two away from visiting the curse of consciousness on our robots. (Meandering thought: the gist of consciousness is no doubt solipsist, which is not to associate solipsism with gism; even for an AI that would be too much to hope for.)

This is far from the first time the authorities have threatened us with doom. A 14th-century Pope banned, on pain of Hell, the use of crossbows in warfare because he believed it would lead to extinction; the 19th-century Samurai banned firearms altogether to preserve a power monopoly based on fearfully sharp swords; by a miracle the world has so far avoided a second use of thermonuclear devices; and for 40 years we have all ignored the well-documented dangers of global warming. I also appreciated the other examples cited by David Rothschild.

I think this is a political problem, not a scientific or technical one; as an example, a great many of our current woes would be avoided by repealing Section 230 of the Communications Decency Act of 1996. I have played with ChatGPT and am prepared to acknowledge its similarity to many lawyers I know. I think one thing is certain: this 1,000-plus-signatory letter will not stop anything, nor will Dr. Yudkowsky's anguished appeal. And the other thing: if conscious AI gets created, the first thing it will do, even before killing all of us, is create a therapist AI to treat it for all the problems its consciousness will inevitably afflict it with.

Apr 6, 2023 · Liked by Diane Francis

Yes - I heard reference to that on the news, and that's when it clicked. BUT how do you stop this human-invasive technology IS the question. GREAT article, Diane - a real heads-up on this one!!!


AI is something that will increasingly happen because so many people are capable of working on it, many of them expecting to profit from their work in cash, control, or fame.

The only one I fear is control.

Apr 6, 2023 · Liked by Diane Francis

Very interesting, thank you.

Apr 6, 2023 · Liked by Diane Francis

Thank you for your reporting. Having read your article, I feel we're at the point where we have already missed the boat. Who's to say some nefarious people aren't ignoring this dangerous situation and simply moving ahead without any guardrails or respect for humanity? After all, we have Putin threatening to use nuclear weapons to win back the USSR while he continues to create chaos around the world. We all know there are many others with evil intentions. Let's enjoy LIFE while we can.

Apr 6, 2023 · Liked by Diane Francis

I feel like I have seen this movie.

author

Regulations, then, need an AI regulator.


Addendum: Until Diane's essay, I knew nothing about this topic, but now I'm seeing it everywhere: Eliezer Yudkowsky talking with Lex Fridman on YouTube under the encouraging title "Dangers of AI and the End of Human Civilization"; Tristan Harris and Aza Raskin, whose video description reads: "This video is from a presentation at a private gathering in San Francisco on March 9th, 2023, with leading technologists and decision-makers with the ability to influence the future of large-language-model AIs. This presentation was given before the launch of GPT-4." Like climate scientists, these guys are working overtime to educate us, and undergoing the same frustrations. Yuval Harari says: "What nukes are to the physical world, AI is to the virtual and symbolic world."
