2025-04-04 09:00:00| Fast Company

The nonstop cavalcade of announcements in the AI world has created a kind of reality distortion field. There is so much buzz, and even more money, circulating in the industry that it feels almost sacrilegious to doubt that AI will make good on its promises to change the world. Deep research can do 1% of all knowledge work! Soon the internet will be designed for agents! Infinite Ghibli!

And then you remember AI screws things up. All. The. Time.

Hallucinations, when a large language model essentially spits out information created out of whole cloth, have been an issue for generative AI since its inception. And they are doggedly persistent: Despite advances in model size and sophistication, serious errors still occur, even in so-called advanced reasoning or "thinking" models. Hallucinations appear to be inherent to generative technology, a by-product of AI's seemingly magical quality of creating new content out of thin air. They're both a feature and a bug at the same time.

In journalism, accuracy isn't optional, and that's exactly where AI stumbles. Just ask Bloomberg, which has already hit turbulence with its AI-generated summaries. The outlet began publishing AI-generated bullet points for some news stories back in January this year, and it's already had to correct more than 30 of them, according to The New York Times.

The intern that just doesn't get it

AI is occasionally described as an incredibly productive intern, since it knows pretty much everything and has a superhuman ability to create content.
But if you had to issue 30-plus corrections for an intern's work in three months, you'd probably tell that intern to start looking at a different career path.

Bloomberg is hardly the first publication to run headfirst into hallucinations. But the fact that the problem is still happening, more than two years after ChatGPT debuted, pinpoints a primary tension when AI is applied to media: To create novel audience experiences at scale, you need to let the generative technology create content on the fly. But because AI often gets things wrong, you also need to check its output with "humans in the loop." You can't do both.

The typical approach thus far is to slap a disclaimer onto the content. The Washington Post's Ask the Post AI is a good example, warning users that the feature is an "experiment" and encouraging them to "Please verify by consulting the provided articles." Many other publications have similar disclaimers. It's a strange world where a media company introduces a new feature with a label that effectively says, "You can't rely on this." Providing accurate information isn't a secondary feature of journalism; it's the whole point. This contradiction is one of the strangest manifestations of the application of AI in media.

Moving to a "close enough" world

How did this happen? Arguably, media companies were forced into it. When ChatGPT and other large language models first began summarizing content, we were so blown away by their mastery of language that we weren't as concerned about the fine print: "ChatGPT can make mistakes. Check important info." And it turns out that for most users that was good enough. Even though generative AI often gets facts wrong, chatbots have seen explosive user growth. "Close enough" appears to be what the world is settling on. It's not a standard anyone sought out, but the media is slowly adopting it as more publications launch generative experiences with similar disclaimers.
There's an "If you can't beat 'em, join 'em" aspect to this, certainly: As more people turn to AI search engines and chatbots for information, media companies feel pressure either to sign licensing deals to have their content included, or to match those AI experiences with their own chatbots. Accuracy? There's a disclaimer for that.

One notable holdout, however, is the BBC. So far, the BBC hasn't signed any deals with AI companies, and it's been a leader in pointing out the inaccuracies that AI portals create, publishing its own research on the topic earlier this year. It was also the BBC that ultimately convinced Apple to dial back its shoddy notification summaries on the iPhone, which were garbling news to the point of making up entirely false narratives. In a world where it's looking increasingly fashionable for media companies to take licensing money, the BBC is architecting a more proactive approach.

Somewhere along the way, whether out of financial self-interest or by falling into Big Tech's reality distortion field, many media companies began to buy into the idea that hallucinations were either not that big a problem or something that would inevitably be solved. After all, "Today is the worst this technology will ever be."

Think of pollution and coal plants. It's an ugly side effect, but one that doesn't stop the business from thriving. That's how hallucinations function in AI: clearly flawed, occasionally harmful, yet tolerated, because the growth and money keep coming. But those false outputs are deadly to an industry whose primary product is accurate information. Journalists should not sit back and expect Silicon Valley to simply solve hallucinations on its own, and the BBC is showing there's a path to being part of the solution without evangelizing or ignoring the problem. After all, "Check important info" is supposed to be the media's job.

