The New York Times reports that OpenAI's own investigation into its latest o3 and o4-mini large language models found they are substantially more prone to hallucinating, or making up false information, than the previous o1 model.

"The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent," the Times says.

Read more: https://www.pcgamer.com/software/ai/chatgpts-hallucination-problem-is-getting-worse-according-to-openais-own-tests-and-nobody-understands-why/