
Misinformation machines? Tech titans grappling with how to stop chatbot ‘hallucinations’


Tech giants are ill-prepared to combat “hallucinations” generated by artificial intelligence platforms, industry experts warned in comments to Fox News Digital, but corporations themselves say they’re taking steps to ensure accuracy within the platforms. 

AI chatbots, such as ChatGPT and Google’s Bard, can at times spew misinformation or nonsensical text, errors referred to as “hallucinations.”

“The short answer is no, corporations and institutions are not ready for the changes coming or challenges ahead,” said AI expert Stephen Wu, chair of the American Bar Association Artificial Intelligence and Robotics National Institute and a shareholder with Silicon Valley Law Group.


Often, hallucinations are honest mistakes made by technology that, despite promises, still possesses flaws.

Companies should have been upfront with consumers about these flaws, one expert said. 

“I think what the companies can do, and should have done from the outset … is to make clear to people that this is a problem,” Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University in California, told Fox News Digital. 

Consumers need to be wary of misinformation from AI chatbots, just as they would be with any other information source. (Getty Images)

“This shouldn’t have been something that users have to figure out on their own. They should be doing much more to educate the public about the implications of this.”

Large language models, such as the one behind ChatGPT, take billions of dollars and years to train, Amazon CEO Andy Jassy told CNBC last week. 

In building Titan, Amazon’s own foundation model, the company was “really concerned” with accuracy and producing high-quality responses, Bratin Saha, an AWS vice president, told CNBC in an interview.


Other major generative AI platforms, such as OpenAI’s ChatGPT and Google’s Bard, have meanwhile been found to spit out erroneous answers to seemingly simple questions of fact.

In one published example from Google Bard, the program claimed incorrectly that the James Webb Space Telescope “took the very first pictures of a planet outside the solar system.” 

It did not.

Google has taken steps to ensure accuracy on its platforms, such as adding an easy way for users to “Google it” after entering a query into the Bard chatbot.


Microsoft’s Bing Chat, which is based on the same large language model as ChatGPT, also links to sources where users can find more information about their queries and allows users to “like” or “dislike” answers given by the bot.

“We have developed a safety system including content filtering, operational monitoring and abuse detection to provide a safe search experience for our users,” a Microsoft spokesperson told Fox News Digital. 

“Corporations and institutions are not ready for the changes coming or challenges ahead.” — AI expert Stephen Wu

“We have also taken additional measures in the chat experience by providing the system with text from the top search results and instructions to ground its responses in search results. Users are also provided with explicit notice that they are interacting with an AI system and advised to check the links to materials to learn more.”
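Grounding answers in retrieved text, as the Microsoft statement describes, is a widely used pattern. The Python sketch below shows the general idea only; the search_web parameter, the stub retrieval function and the prompt wording are illustrative assumptions, not Bing’s actual implementation.

def build_grounded_prompt(user_query: str, search_web) -> str:
    # Fetch text from the top search results (search_web is a stand-in).
    snippets = search_web(user_query, top_k=3)

    # Number the sources so the model can cite them in its answer.
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))

    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources by number. If the sources do not contain "
        "the answer, say you don't know.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {user_query}\nAnswer:"
    )

# Stub retrieval function, for illustration only.
def fake_search(query, top_k):
    return [
        "The James Webb Space Telescope launched in December 2021.",
        "The first image of an exoplanet was taken in 2004 by the VLT.",
        "Bard incorrectly credited JWST with the first exoplanet photo.",
    ][:top_k]

print(build_grounded_prompt("Which telescope first photographed an exoplanet?", fake_search))

Instructing the model to refuse when the retrieved text lacks an answer is the piece aimed directly at hallucinations: the model is pushed to stay inside the supplied sources rather than improvise.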

In another example, ChatGPT reported that late Sen. Al Gore Sr. was “a vocal supporter of Civil Rights legislation.” In actuality, the senator vocally opposed and voted against the Civil Rights Act of 1964.


Despite steps taken by the tech giants to stop misinformation, experts were concerned about the ability to completely prevent it. 

“I don’t know that it is [possible to be fixed],” Christopher Alexander, chief communications officer of Utah-based Liberty Blockchain, told Fox News Digital. “At the end of the day, machine or not, it’s built by humans, and it will contain human frailty … It is not infallible, it is not omnipotent, it is not perfect.”

Chris Winfield, the founder of tech newsletter “Understanding A.I.,” told Fox News Digital, “Companies are investing in research to improve AI models, refining training data and creating user feedback loops.”

In this photo illustration, an Amazon AWS logo is seen displayed on a smartphone. (Mateusz Slodkowski/SOPA Images/LightRocket via Getty Images)

“It’s not perfect but this does help to enhance A.I. performance and reduce hallucinations.” 
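A feedback loop of the kind Winfield describes can be as simple as logging per-response votes and flagging poorly rated answers for human review or retraining. A minimal Python sketch, with all names hypothetical rather than any vendor’s actual API:

from collections import defaultdict

class FeedbackLog:
    """Logs thumbs-up/down votes per response and flags weak answers."""

    def __init__(self):
        self.votes = defaultdict(list)  # response_id -> list of +1 / -1

    def record(self, response_id: str, liked: bool) -> None:
        self.votes[response_id].append(1 if liked else -1)

    def flagged_for_review(self, threshold: float = -0.5) -> list:
        # Responses whose average vote falls below the threshold become
        # candidates for correction in the next round of training data.
        return [
            rid for rid, vs in self.votes.items()
            if sum(vs) / len(vs) < threshold
        ]

log = FeedbackLog()
log.record("resp-42", liked=False)
log.record("resp-42", liked=False)
print(log.flagged_for_review())  # ['resp-42']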

These hallucinations could cause legal trouble for tech companies in the future, Alexander warned. 

“The only way they are really going to look at this seriously is they are going to get sued for so much money it hurts enough to care,” he said.

“The only way they are really going to look at this seriously is they are going to get sued for so much money it hurts enough to care.” — Christopher Alexander

The ethical responsibility of tech companies when it comes to chatbot hallucinations is a “morally gray area,” Ari Lightman, a professor at Carnegie Mellon University in Pittsburgh, told Fox News Digital.

Despite this, Lightman said creating a trail between a chatbot’s sources and its output is important to ensure transparency and accuracy.
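One way to build such a trail is to attach source identifiers to every generated statement so the output can be audited. A minimal Python sketch, assuming a hypothetical TracedStatement structure rather than any production system:

from dataclasses import dataclass, field

@dataclass
class TracedStatement:
    """A generated claim bundled with the sources it was drawn from."""
    text: str
    source_urls: list = field(default_factory=list)

    def audit_line(self) -> str:
        # An unsourced claim is surfaced loudly instead of passing silently.
        refs = ", ".join(self.source_urls) or "NO SOURCES (flag for review)"
        return f"{self.text}  [sources: {refs}]"

claim = TracedStatement(
    text="The James Webb Space Telescope did not take the first exoplanet image.",
    source_urls=["https://example.com/webb-facts"],
)
print(claim.audit_line())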

Wu said the world’s readiness for emerging AI technologies would have been more advanced if not for the colossal disruptions caused by the COVID-19 pandemic.

“AI response was organizing in 2019. It seemed like there was so much excitement and hype,” he said. 

Closeup of the icon of the ChatGPT artificial intelligence chatbot app logo on a cellphone screen — surrounded by the app icons of Twitter, Chrome, Zoom, Telegram, Teams, Edge and Meet. (iStock)

“Then COVID came down and people weren’t paying attention. Organizations felt like they had bigger fish to fry, so they pressed the pause button on AI.”


He added, “I think maybe part of this is human nature. We’re creatures of evolution. We’ve evolved [to] this point over millennia.”

He also said, “The changes coming down the pike so fast now, what seems like each week — people are just getting caught flat-footed by what’s coming.”


