
New Study Finds a Way to Spot AI Hallucinations

Researchers have discovered a new way to detect when generative AI is likely to “hallucinate,” meaning it makes up facts because it does not know the answer to a question, and to help stop this from happening.

In a new study, a team of researchers from the University of Oxford built a statistical model that can tell when a question put to a large language model (LLM), the technology that powers generative AI chatbots, is likely to produce a wrong answer.


Their work has been published in the journal Nature.

One big worry about generative AI models is hallucination: because the technology is so advanced and conversational, models can present false information as if it were fact when answering a question.


More and more students are turning to generative AI tools for help with homework and research, tasks that many models are marketed as handling well. However, many experts and AI scientists in the field are calling for more to be done to spot AI hallucinations, especially when it comes to medical or legal questions.

Researchers at the University of Oxford say they have found a way to tell when a model is confident in an answer and when it is simply making one up.

It can be hard to tell when an LLM is confident in its answer and when it is just making something up, because models are very good at saying the same thing in many different ways, according to the study’s author, Dr. Sebastian Farquhar.

“With older methods, it wasn’t possible to tell the difference between a model not knowing what to say and not knowing how to say it. Our new approach is effective in solving this issue.”
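As a rough illustration of what measuring “semantic uncertainty” can look like in practice, one can sample several answers to the same question, group together answers that express the same meaning, and check how spread out the answers are across those meaning groups; many conflicting meanings suggest the model is guessing. The Python sketch below is a minimal, hypothetical example of that general idea, not the study’s actual code; the function names, the crude same_meaning check, and the toy data are stand-ins for illustration only.

import math

# Minimal sketch: entropy over meaning clusters of sampled answers.
# same_meaning() below is a crude placeholder for a real semantic
# equivalence check (e.g. a natural-language-inference model).
def semantic_uncertainty(answers, same_meaning):
    clusters = []  # each cluster collects answers judged to mean the same thing
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    probs = [len(c) / len(answers) for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy check: rephrasings of one answer give low entropy (the model "knows"),
# while contradictory answers give higher entropy (more likely made up).
same = lambda a, b: ("Paris" in a) == ("Paris" in b)  # crude stand-in
print(semantic_uncertainty(["Paris.", "It is Paris.", "The capital is Paris."], same))  # 0.0
print(semantic_uncertainty(["Paris.", "Lyon.", "Marseille."], same))                    # higher

In this toy example, consistent rephrasings all land in one meaning group, so the uncertainty score is zero, while scattered, contradictory answers spread across groups and push the score up.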

Nevertheless, Dr. Farquhar said that more work needed to be done to fix the mistakes that AI models sometimes make.

He explained, “Semantic uncertainty helps with some problems of reliability, but this is only part of the story.”

“This new method won’t catch it if an LLM makes the same mistakes over and over again. The most dangerous AI mistakes happen when a system is sure it is right and makes its errors systematically.

“There’s still a lot to do.”
