Alarm about ChatGPT and other products powered by AI [artificial intelligence] has grown since March, when 1,000 experts claimed these pose an existential threat to society and humanity. Goldman Sachs estimated that 300 million full-time jobs could be eradicated over the next decade. Italy has banned these products and is investigating their ramifications. This week, the shares of two online education companies, unable to compete, sank on stock markets. The Writers Guild of America demanded that studios and networks stop using AI “chatbots” to write or rewrite literary material, and stop training AI on copyrighted material. They want literary work kept in human hands. Creators in all fields have launched lawsuits, claiming that ChatGPT and others use words, phrases, sentences, programming code, music, drawings, or photos that were created, and are owned and copyrighted, by others. To them, these are theft machines that will devour their jobs. To others, AI must be stopped altogether before it outsmarts us all in everything.
Karla Ortiz is one of three artists — along with Sarah Andersen and Kelly McKernan — who have filed a class action lawsuit based on the following: “These companies use unauthorized copies of millions, if not billions, of copyrighted images to train a generative AI system to ‘remix these works to derive (or “generate”) more works of the same kind’ without the knowledge or consent of the original artists. The resulting images then compete with the originals on the open marketplace, flooding it with an endless number of copies or near copies that permanently damage artists’ ability to participate in the now-oversaturated marketplace,” explained Cartoon Brew.
This and other lawsuits attack the ChatGPT business model itself — a product that is not creative but finds, synthesizes, and re-uses images, music, text, or code created by human beings. GPT stands for Generative Pre-trained Transformer, and these systems are “trained” for years to comprehend questions, search, and stitch together answers. They scour massive databases of text, computer code, music, and images, then provide instant answers to questions. As a result, ChatGPT can “write” essays, term papers, advertising copy, news reports, and music, as well as generate images through its sister system DALL-E 3. Many others already available can do the same.
Tech publication The Verge puts the art case simply: “the class action suit claims generative AI art tools violate copyright law by scraping artists’ work from the web without their consent.”
Ironically, the software industry that enabled these inventions is populated by programmers who write code, and they now realize their work has also been violated. Some have launched a class action lawsuit of their own. “Microsoft, GitHub, and OpenAI are currently being sued in a class action motion that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit,” wrote technology bible TechCrunch.
Photographers are also being ripped off. Getty Images has sued Stability AI for using millions of its images without permission to train an art-generating AI. And then there are the wordsmiths. Another lawsuit names a prominent tech site for publishing articles “based on phrasing and structural similarities to articles written by humans, which were swept up in its training dataset, that rise to the ‘level of plagiarism,’” wrote Futurism.
These legal challenges began in January, after the U.S. Copyright Office denied copyright protection for the portions of a comic book that used AI-generated images. The author, and the GPT companies, claimed that “fair use” protected them even if their systems included copyrighted content. (“Fair use” permits limited use of copyrighted material without permission from the creator, typically when the new use differs significantly from the original.)
Such legal challenges will slow down development or even halt some sites, but governments must address the graver threat, which is large-scale job loss and the exponential development of AI itself. Last week, the Godfather of artificial intelligence, Geoffrey Hinton, added fuel to the fire by stepping down from his position at Google to underscore the danger and the urgency. “They [chatbots] have common sense knowledge about anything and know a thousand times as much as any person. The alarm bell I’m ringing has to do with the existential threat of them taking control. I used to think it was a long way off, but I now think it’s serious and fairly close.”
On May 5, American tech CEOs were summoned to the White House and told by President Joe Biden that they have a “moral” duty to ensure artificial intelligence does not harm society. But this is a typical American approach, relying on the private sector to police itself, a folly which has led to America’s reckless and abusive social media platforms and the violence they foster. By contrast, the European Parliament has bridled social media for years and is ready to table regulations to control AI as well as “generative AIs” like ChatGPT.
Unlike America, Europe polices the Internet to protect privacy and prevent hate speech, libel, child exploitation, terrorism, and fraud. Now “proposed [EU] regulations cover a wide range of AI applications … and ensure safety and ethical principles in AI development and deployment,” reported industry publication Computerworld. “MEPs [Members of the European Parliament] agreed that generative tools such as ChatGPT, DALL-E, and Midjourney must be regulated. A major change made to the AI Act is that these tools will have to disclose any copyrighted material used to develop their systems. Other requirements for generative AI models include testing and mitigating reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law, with the involvement of independent experts.”
This is the model that all nations must adopt. In March, the experts’ warning letter suggested a six-month pause in development to create a regulatory framework, because artificial intelligence labs were “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”
But a pause won’t work; only guardrails will, along with a few favorable court outcomes that shut down or impede the theft machines already on the market. Unfortunately, technologies will march on as they always have, flattening companies and occupations in their wake. The car replaced the horse and the computer replaced drudgery, but this time it’s different. Society must protect itself against artificial intelligence, before it becomes smarter than any of us and is able to remove, replace, or enslave all of us.
Isn’t the genie out of the bottle? Can it realistically be controlled? There are many nefarious and unethical people out there.
‘Artificial intelligence’ is presented to us as an adjective and a noun. The noun gets all of the attention, but understanding the adjective is most important in evaluating how to deal with it. Human intelligence is based on sensory inputs from multiple modalities (e.g., light, sound) over the period extending from infancy through the rest of life. Conditioning, learning, memory, recall, analysis, and expression are all involved in the behaviors that we eventually describe as intelligence. Artificial intelligence is created rapidly by literally dumping whole libraries of information, including both fact and fiction, into readily searchable memory banks. Humans analyze and respond to events in real time, influenced by external events and internal states such as motivation and emotion. Computers programmed for AI, on the other hand, perform a statistical analysis of the words most likely to occur next in the stream of words already used. Humans must react continuously to multiple sources of change occurring in the environment in real time; AI computers respond only to new words entered into the system at a later time. This only begins to sketch the differences between human and artificial intelligence. Humans analyze and evaluate situations; computers statistically analyze word sequences. In an emergency, which will you trust?
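The commenter’s point about predicting “the words most likely to occur next” can be illustrated with a deliberately simplified sketch: a toy bigram model that counts which word follows which in a tiny corpus. (This is a stand-in for illustration only; real chatbots use neural networks trained on vastly larger text collections, but the predict-the-next-word principle is the same.)

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "whole libraries" the commenter describes.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often in this corpus
```

The model has no understanding of cats or mats; it only tallies word sequences, which is exactly the contrast with human situational judgment the comment draws.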