Saturday, August 10, 2024

AI chatbots can ‘hallucinate’ and make things up

"When an AI model "hallucinates," it generates fabricated information in response to a user's prompt, but presents it as if it's factual and correct.
Say you asked an
AI chatbot to write an essay on the Statue of Liberty. The chatbot would be hallucinating if it stated that the monument was located in California instead of saying it's in New York.
This happens because large language models, commonly referred to as AI chatbots, are trained on enormous amounts of data, which is how they learn to recognize patterns and connections between words and topics. They use this knowledge to interpret prompts and generate new content, such as text or images.
But since AI chatbots are essentially predicting the word that is most likely to come next in a sentence, they can sometimes generate outputs that sound correct but aren't actually true.
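To make the next-word idea concrete, here is a minimal, purely illustrative sketch in Python. It is not how any real chatbot works: it just counts which word followed which in a tiny made-up "training" text and then greedily picks the most frequent follower. Even so, it shows how a system built only to predict a plausible next word can produce fluent-sounding text with no notion of whether that text is true.

import random

# Toy sketch of next-word prediction (illustrative only; real chatbots use
# large neural networks trained on billions of words, not lookup tables).
training_text = (
    "the statue of liberty is in new york . "
    "the statue of liberty is a famous monument . "
    "the golden gate bridge is in california ."
)
words = training_text.split()

# Count, for each word, which words followed it in the training text.
followers = {}
for current, nxt in zip(words, words[1:]):
    followers.setdefault(current, {})
    followers[current][nxt] = followers[current].get(nxt, 0) + 1

def predict_next(word):
    """Return the most frequent follower of `word`, or a random word if unseen."""
    options = followers.get(word)
    if not options:
        # No data for this word, so the model guesses anyway: a crude
        # analogue of confidently "hallucinating" an answer.
        return random.choice(words)
    return max(options, key=options.get)

# Generate text one predicted word at a time, starting from a prompt.
generated = ["liberty"]
for _ in range(6):
    generated.append(predict_next(generated[-1]))
print(" ".join(generated))  # fluent-sounding, but nothing here checks truth

The point of the sketch is that the generator optimizes for plausibility, not accuracy: swap "new york" for "california" in the training text and it will just as confidently place the statue on the West Coast.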
In fact, "
hallucinate," in the AI sense, is Dictionary.com's word of the year, chosen because it best represents the potential impact AI may have on "the future of language and life." CNBC
