European regulators and law enforcement agencies are concerned about the hazards posed by generative artificial intelligence (AI) systems like ChatGPT.

ChatGPT, which responds to user queries with essays, poems, spreadsheets, and computer code, has had over 1.6 billion visits since December with few restrictions. At the end of March, Europol, the European Union Agency for Law Enforcement Cooperation, warned that ChatGPT, one of thousands of AI platforms, can help criminals with phishing, malware, and terrorism.

“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored,” the Europol report added. “As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, from how to break into a home to terrorism, cybercrime and child sexual abuse.”

After a bug exposed user files, Italy temporarily banned ChatGPT last month. The Italian privacy rights board Garante threatened OpenAI with millions of dollars in fines for privacy infringement unless it addresses user data privacy and age limitations. Spain, France, and Germany are investigating personal data violations, while the EU’s European Data Protection Board launched a task force this month to coordinate legislation throughout the 27-country EU.

“It’s a wake-up call in Europe,” EU legislator Dragos Tudorache told Yahoo News. “We need to know what’s going on and how to set the rules.”

ChatGPT, an interactive “large language model” that answers questions and completes tasks in seconds, has shown the potential of artificial intelligence, which has been part of daily life for years.

“ChatGPT has knowledge that even very few humans have,” said Mark Bünger, co-founder of Barcelona-based science-based innovation consultancy Futurity Systems. “It programs computers better than most humans. Thus, it will likely program the next, superior version quickly and well. That version will be even better and program something no human understands.”

Experts warn that the technology enables identity theft and academic plagiarism.

“For educators, the possibility that submitted coursework might have been assisted by, or even entirely written by, a generative AI system like OpenAI’s ChatGPT or Google’s Bard, is a cause for concern,” Nick Taylor, deputy director of the Edinburgh Centre for Robotics, told Yahoo News.

Neither OpenAI nor Microsoft, which has invested in OpenAI while also developing a competing chatbot, responded to requests for comment for this report.

“AI has been around for decades, but it’s booming now because it’s available for everyone,” said Futurity Systems CEO Cecilia Tham. Tham said programmers have been adapting ChatGPT to create thousands of new chatbots, from PlantGPT, which monitors houseplants, to ChaosGPT, which is “designed to generate chaotic or unpredictable outputs” and “destroy humanity.”

AutoGPT (Autonomous GPT) can perform more complex goal-oriented tasks. Tham continued, “You can say, ‘I want to make 1,000 euros a day. How can I do that?’ — and it will work out all the intermediate steps to that aim. Or, ‘I want to kill 1,000 people. Give me every step.’” She said that “people have been able to hack around” the data constraints built into the ChatGPT model.

The Future of Life Institute, a technology think tank, published an open letter last month urging a halt to AI development due to chatbot and AI risks. It was signed by Elon Musk and Steve Wozniak and stated that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and that “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

The signatories requested a six-month freeze on the development of AI systems more powerful than GPT-4 to allow for regulation, and they encouraged governments to “institute a moratorium” if industry leaders would not comply.

EU parliamentarian Brando Benifei, a co-sponsor of the AI Act, dismissed that request when he spoke to Yahoo News. “We also need a global debate on how to address the challenges of this very powerful AI,” he added.

EU AI legislators launched a “call to action” this week asking President Biden and European Commission President Ursula von der Leyen to “convene a high-level global summit” to establish “a preliminary set of governing principles for the development, control and deployment” of AI.

Tudorache told Yahoo News that the AI Act, scheduled to be passed next year, “brings new powers to regulators to deal with AI applications” and allows EU regulators to levy large fines. The law also ranks AI applications by risk level and bans “social scoring,” a dystopian monitoring mechanism that would grade nearly every social interaction on a merit scale.

“Consumers should know what data ChatGPT is using and storing and what it is being used for,” said BEUC deputy head of communications Sébastien Pant to Yahoo News. “We don’t know what data is being used or if data collection is legal.”

Despite worries highlighted by FTC Commissioner Alvaro Bedoya that “AI is being used right now to decide who to hire, who to fire, who gets a loan, who stays in the hospital and who gets sent home,” the U.S. has yet to regulate AI.

“It remains to be seen—could be,” Biden said when asked if AI was harmful.

Gabriela Zanfir-Fortuna, vice president for worldwide privacy at the Future of Privacy Forum, told Yahoo News that attitudes toward consumer data protection have differed on the two sides of the Atlantic for decades.

“The EU has placed great importance on how the rights of people are affected by the automated processing of their personal data in this new computerized, digital age,” Zanfir-Fortuna stated. She noted that Germany, Sweden, and France passed data protection laws 50 years ago, while the U.S. still has no federal data protection statute; “U.S. lawmakers seem to have been less concerned with this issue in previous decades,” she said.

Gerd Leonhard, author of “Technology vs. Humanity,” and others worry about what will happen when the military, banks, and environmentalists employ ChatGPT and more advanced AI.

“The AI community jokes that if you ask AI to fix climate change, it would kill all humans,” Leonhard remarked. “The most logical solution is inconvenient for us.”
