ChatGPT Identity Theft: When Software Starts to Dream

Written By Maya Cantina

ChatGPT stole his identity, says Sebastian Schnitzenbaumer. He wants to sue the AI company behind it. A case that cuts to the foundations of the technology.

For musicians, their identity is especially important: a song gets onto playlists more easily if it has a good story behind it. Photo: Xia Yuan/Getty

Sebastian Schnitzenbaumer was both shocked and amused when he asked ChatGPT about his own music project, Belp. The artificial intelligence answered that it was not he, the musician from Munich, who was behind the pseudonym, but a certain Marco Guazzone, and it backed up this claim with a slick black-and-white portrait.

This Marco Guazzone really exists: he is an Italian musician whose mainstream bombast pop could not be further removed from Schnitzenbaumer’s complex, brooding techno tracks. Schnitzenbaumer soon stopped finding it all funny.

ChatGPT also claims that Belp is part of a music collective called Loose Wire Head Radio. Not a bad name for a collective – but this time, unfortunately, it was made up. The 46-year-old DJ and electronic producer from Munich calls this false information ‘identity theft’. “And normally you would call it defamation,” says Schnitzenbaumer. “A completely false identity is woven around me there, built on a lot of correct information. And nobody likes that.”

He is considering a lawsuit against OpenAI, the American software company behind ChatGPT. Above all, Schnitzenbaumer wants to spark a debate about what artificial intelligence can and cannot do.

None of this is true

Schnitzenbaumer put his question to ChatGPT 3, the free and currently most used version of the language AI. The newer, paid version, ChatGPT 4, already works better: it reliably gives Belp’s real name, for example, but it too lies here and there. Sometimes Belp is based in Berlin and uses traditional instruments, sometimes his music has meditative features. None of this is true.

Experts call this ‘hallucinating’. The most prominent case to date: the American law professor Jonathan Turley was falsely accused of sexual harassment by ChatGPT. The problem: these hallucinations are not easy to recognize, and even harder – in fact, impossible – to remove or correct, because they are closely linked to the way AI programs such as ChatGPT work, to their very algorithm.

ChatGPT is a so-called large language model. The algorithm was fed millions of texts from the Internet, and it now produces answers by always searching for the most likely next word. The program cannot distinguish whether that word has anything to do with the truth or not.
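How little truth figures in this process can be made concrete with a toy example. The following sketch is purely illustrative and not how ChatGPT is actually built: a tiny bigram model that counts word pairs in a miniature, invented corpus and then greedily picks the statistically most likely next word. ChatGPT’s neural network is vastly more sophisticated, but the basic principle described above is the same – the continuation is chosen for plausibility, never checked against reality.

```python
# Purely illustrative sketch, not OpenAI's code: a toy "language model"
# that counts which word follows which (a bigram model) and then always
# picks the statistically most likely continuation.
from collections import Counter, defaultdict

# A miniature, made-up stand-in for the millions of Internet texts
# a real large language model is trained on.
corpus = ("belp is a techno project from munich . "
          "belp is a musician . "
          "marco is a musician from italy .").split()

# Count, for every word, how often each other word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(start: str, length: int = 8) -> str:
    """Greedily append the most likely next word, one word at a time."""
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]]
        if not candidates:
            break
        # Note what is missing here: any check of whether the chosen
        # word makes the sentence true. Only frequency counts.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("belp"))
# Prints a fluent-looking but mechanically assembled continuation:
# "belp is a musician . belp is a musician"
```

Scaled up by many orders of magnitude, this is why an answer about a little-known artist can sound perfectly confident regardless of whether the facts behind it exist.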

Aljoscha Burchardt of the German Research Center for Artificial Intelligence in Berlin emphasizes that large language models are mainly designed to create new texts that sound as convincing as possible: “If you now say: Aljoscha Burchardt was a cyclist in France in the 1950s, then a plausible story is written about a cyclist in France in the 1950s. And those are not facts.”

The system invents something plausible

AI-generated image of “Belp alias Marco Guazzone”, a man with a beard and glasses. Photo: ChatGPT

This is different for famous people about whom a lot of information can be found on the internet. “If you ask about Chancellor Olaf Scholz,” says Burchardt, “something more sensible will probably come up. But if you ask about people about whom there isn’t much public information, the system comes up with something that sounds plausible.”

Such hallucinations are especially fatal for musicians, says Sebastian Schnitzenbaumer, because today their identity is more important than ever. “Musicians and artists,” he says, “have learned through the devaluation of art over the past ten to twenty years that their brand and their identity are their most valuable assets.” The value of the art itself, meanwhile, continues to erode.

Nowadays it is often the stories behind the songs and albums that matter: the story of the artist. A song makes it onto the radio and into important playlists more easily if it has a good story behind it. This is also why Schnitzenbaumer is considering a lawsuit against OpenAI. The non-governmental organization NOYB, led by the Austrian Max Schrems, recently filed a lawsuit against the company because the program provided an incorrect date of birth.

It is still unclear whether such a lawsuit can succeed. Aljoscha Burchardt does not see OpenAI as directly culpable in such cases. “The tool was used incorrectly,” he explains. After all, no one has claimed that AI programs can write truthful articles.

Suggesting the truth

Belp, the real musician: Sebastian Schnitzenbaumer. Photo: private

He sees a possible avenue for legal action only in the argument that OpenAI has not explained clearly enough how ChatGPT works. The US company “may be suggesting, by the way the system behaves and how the system reacts, that it is bringing out the truth”.

Erhard Grundl, a member of the Bundestag from Straubing (Alliance 90/The Greens), sees it similarly. Grundl heads the Culture and Media Working Group of the Green parliamentary group in the Bundestag and is a member of the Commission for Culture and Media Policy. “I can protect his work,” Grundl says of Schnitzenbaumer’s case. “But the false information about him that then goes out into the world – that will be very difficult.”

The EU’s recently adopted AI regulation does have chatbots such as ChatGPT in mind, but it does not apply in this case either. Grundl believes more media literacy is needed: “We have to encourage that. That is where politics comes into the picture.”

ChatGPT: False, misleading or derogatory

OpenAI did not respond to a request for comment on Sebastian Schnitzenbaumer’s allegations. The issue is sensitive in the US as well. The Federal Trade Commission (FTC) recently opened an investigation into OpenAI and sent the company a 20-page questionnaire. One of the questions concerns statements by ChatGPT about real people that are false, misleading or derogatory.

So far, general media literacy around AI has not made much progress, and many users treat the programs as a search engine and a source of information. “It’s super brutal,” says Aljoscha Burchardt. “We looked into this: users are also using it to inform themselves about election issues and politicians. That has great potential to sow discord.”

Is the computer always right? This image is outdated. With ChatGPT the exact opposite happens

Moreover, artificial intelligence is developing faster than ever. The world was astonished by ChatGPT 3 only recently, and now there is a whole raft of better language AIs. “As we all know, AI has a different concept of time than we do,” says Erhard Grundl, referring to regulation by politicians. “If something is decided today, it may already be outdated in many areas. So there must be constant evaluation.”

More media competence, more knowledge about what artificial intelligence can and cannot do and how it works – that, in essence, is what the Munich musician Sebastian Schnitzenbaumer wants, because we are still stuck with an outdated image of artificial intelligence, he says. “Pop culture and science fiction,” says Schnitzenbaumer, “have often depicted computers, robots and androids as infallible: the computer is always right. With ChatGPT, the exact opposite happens.”

We should learn that computers and software systems are extremely error-prone. “We have dream software here, and we just have to get used to software deliberately producing false results.” So should we talk about more or less dreamy AIs in the future? Or wouldn’t that be misleading in another way, because it’s far too humanizing?

No difference between fact and fiction

Should we even talk about artificial intelligence if the programs ultimately consist only of algorithms that cannot even distinguish fact from fiction in their own creations?

In any case, Aljoscha Burchardt no longer wants to talk about ‘hallucinations’. “If we all know that content generated by generative AI is not necessarily true, then you can simply say that this is an automatically or statistically generated answer.” Maybe that’s what’s needed: calm level-headedness when it comes to artificial intelligence. Also so that music is not devalued again by the digital revolutions of our brave new world.
