AI hallucinations are getting worse – and they're here to stay

AI chatbots just got a reasoning upgrade, but instead of getting smarter, they're hallucinating more. OpenAI's new o3 and o4-mini models, released in April 2025, make up facts roughly two to three times more often than their predecessors. That's a big red flag for industries like paper packaging, where AI is increasingly used to analyze regulations, sustainability data, and supply chain trends. If your chatbot misreads an FSC certification rule or fabricates recycling statistics, that's not just a glitch: it's a compliance risk. The takeaway? AI is not your oracle yet. Fact-check everything.

Source: https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
