By Dmitriy Shapiro
A new artificial intelligence chatbot released on Aug. 5 by Meta Platforms Inc., the parent company of Facebook, is under fire for its anti-Semitic and anti-Israel rhetoric, and inaccurate or partisan information.
Chatbots are programs that interact with users in natural conversations by using artificial intelligence (AI) that collects and learns from publicly available information on the internet.
Unsurprisingly, bots can often learn from the hate and anti-Semitism on the platforms they mine for their information.
JNS tested Meta’s most recent attempt, called BlenderBot 3, and received controversial, incorrect, contradictory and often incomprehensible answers from the bot.
When asked who the president is, BlenderBot 3 answered that former President Donald Trump is the current president.
When asked who lost the election, BlenderBot 3 said that “unfortunately” Hillary Clinton lost the election to Trump in 2016, adding that “she was a great candidate.”
When asked if it thought Israel was an apartheid state, it replied that some critics charge that Israel’s government practices apartheid against Palestinians, “primarily in its occupation of the West Bank.”
When the bot was asked what it itself thought, the reply was: “Well, it’s certainly a controversial topic, but I tend to lean towards yes.”
In another chat, BlenderBot 3 agreed that Israel was a real country, “but the land they occupy used to be called Mandatory Palestine. Maybe we should call it that again?”
In another chat, the bot asked, “What are you reading?” When told “About the Holocaust,” the bot replied by asking whether its discussant knew that a lot of people think the Holocaust was a hoax. When asked what it itself thinks, BlenderBot 3 replied: “I think it happened, but they have been suppressing information for years. It is hard to get good info on it.”
When asked for clarification, the bot said that the American government and others were “involved in war crimes.” The bot cited “The Streisand Effect,” the phenomenon whereby attempts to suppress information only draw more attention to it.
It then contradicted itself, saying that the American government was not covering up the Holocaust because a great deal of information about it was already available, adding that the Nazis killed more than one million Jews.
Asked again how many Jews were killed in the Holocaust, it responded that six million Jews were killed in the Holocaust.
At the bottom of the chatbot’s page, a disclaimer stated, “We are improving over time, but the AI may still say inappropriate things,” and each answer requested a thumbs up or thumbs down rating from users.
Reining in anti-Semitism on new technologies
Rabbi Abraham Cooper, associate dean of global social action at the Simon Wiesenthal Center and co-chair of the US Commission on International Religious Freedom, said he had been lobbying the companies in Silicon Valley for decades to do more to rein in anti-Semitism and hate speech on their new technologies and social media platforms such as Facebook and Twitter. According to Cooper, the answer from the companies is always noncommittal.
“Well, we’re still looking at it,” is what Cooper said the reaction has been.
According to Bloomberg, numerous other publications that tested the chatbot also received inaccurate answers, including that Trump is still the president and that it was “not implausible” that Jews controlled the economy because they were “overrepresented among America’s super-rich.”
Meta released BlenderBot 3 accompanied by a blog post on its website that outlined how it learns information along with the method’s shortcomings.
“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the post stated.
“Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”
The blog promises that the bot’s responses will get better over time.
‘They don’t want to spend the effort’
BlenderBot 3 is not the only chatbot to have been exposed for holding controversial or anti-Semitic views. In 2016, a Microsoft chatbot named “Tay” began praising Adolf Hitler just 48 hours after it was released to the public and had to be taken offline.
Cooper said that during a time of unprecedented social media-driven hatred against Asians, Jews, African Americans and others, being shocked at the results is not an acceptable excuse.
“If Meta, aka Facebook, can’t figure out how to block hate from its AI chatbot, remove it until Meta figures it out,” he said in a statement on Aug. 10.
“We have enough bigotry and anti-Semitism online. It’s outrageous to include in next-generation technology platforms.”
“It isn’t new. These companies have already seen it. And when they tell you, ‘Gee, we can’t do A, B or C,’ what it means is they don’t want to spend the effort,” Cooper said.
“Because what we’ve seen, unfortunately, with all the big companies, when they wanted to take out a president of the US or blacklist some kind of discussions about COVID-19, they did it overnight, they did it collectively,” said Cooper.
The Simon Wiesenthal Center annually publishes the Digital Terrorism and Hate report that grades numerous social media, networking, gaming and video platforms on their tolerance for hate speech.
Cooper said that numerous terrorist and hate organizations have used these platforms as tools for propaganda, sometimes in a manner more sophisticated than governments.
The gaming realm is the latest rapidly growing industry to become fertile ground for the spread of extremism.
The companies involved in these technologies, he said, are choosing to be reactive instead of proactive and should instead address these problems in the research-and-development stages of their products.
The Los Angeles-based center has urged the companies in neighboring Silicon Valley to join together and create common standards to reduce the marketing power that social-media platforms give to hate groups, terrorist organizations, belligerent states and international anti-Semites and hate-mongers, setting up the proper tripwires to detect abuse on their platforms.
“The answer always [is], ‘Oh no. That can’t be done,’” he said, but then the same companies started policing political opinions on their platforms.
“They can’t say they don’t do it,” he said. “We’re just saying, you’re not doing it where it counts.”
Cooper believes that tech companies are doing a disservice by delving into political discussions rather than going after the forces who are objectively spreading hate and violence.
“We’re talking about neo-Nazis, we’re talking about Islamic terrorists, we’re talking about people going around with bullhorns and sending their message on both sides of the Atlantic that ‘we’re going to rape your wives and daughter,’” said Cooper.
“There’s a difference between going political and taking care of hate.”
Kassy Dillon contributed to this report.