
TikTok Revises AI Video Descriptions in US After Bizarre Errors

TikTok has scaled back its AI video description feature in the US after it generated bizarre and inaccurate summaries, including misidentifying celebrities. The company now limits AI overviews to product suggestions only.

Image: the TikTok logo displayed on a smartphone screen, with enlarged, faded reflections of the logo (Getty Images).


TikTok has retracted an AI-generated feature that provided incorrect summaries of some videos on its platform, including an instance where a celebrity was mistakenly described as fruit.

The company's 'AI overviews' recently began appearing beneath videos to describe their content or offer additional context.

Although this feature was only rolled out to select users in the US and the Philippines, the inaccurate and unusual AI-generated summaries—seen under videos of celebrities such as TikTok star Charli D'Amelio—have been widely circulated.

TikTok stated that its experimental summaries have been modified to only suggest products similar to those shown in videos.

The adjustments were initially reported by Business Insider.

Similar to the AI Overviews displayed at the top of many Google search results, TikTok's AI-generated overviews aimed to summarize video content for some users when they expanded the video's caption.

Users shared screenshots with the BBC showing some videos being accurately described, but Business Insider also identified several "wildly inaccurate" AI overviews.

One example cited by the publication involved a video of dancer Charli D'Amelio being described as a "collection of various blueberries with different toppings."

Other TikTok videos featuring celebrities and artists such as Shakira and Olivia Rodrigo also received vague, inaccurate, and peculiar AI-generated summaries.

TikTok now reports that the feature will be limited to surfacing information about items featured in videos.

This development occurs amid tech companies' efforts to deploy more AI products on their platforms to increase user engagement. However, some initiatives have faced user backlash or ridicule when these tools malfunction.


'Cutting through water'

Posts reacting to TikTok's AI overviews testing began appearing in January.

The summaries became more widely available in late April, with numerous users and creators highlighting AI-generated descriptions containing absurd errors.

A recent example shared on Reddit described a ballroom dance performance by Reagan and Juli To as "a person repeatedly striking their head with a rubber chicken."


Other user-shared examples contained similarly strange descriptions.

For instance, AI overviews for two separate videos—neither depicting violence or tools—claimed they showed "a person repeatedly striking their head with a hammer."

TikTok indicated that users could report and provide feedback about AI overviews.

Nevertheless, some users wondered whether the platform was "trolling" them.

"The new AI Overview is so bad it feels like it has to be a joke,"

wrote TikTok user and creator Brett Vanderbrook alongside his video.

He presented various examples where TikTok's AI feature generated bizarre descriptions, such as a comedy skit described as someone "demonstrating a new, clever technique for cutting through water."

Goblins and glue pizza

TikTok has identified the cause of the AI overview errors and inconsistencies but has not provided specific details.

Generative AI tools often fabricate information when responding to users, summarizing, or generating content, with errors ranging from humorous to potentially harmful.

In 2024, Google was widely mocked after its AI Overviews advised users to eat rocks and put glue on pizza.

Apple faced criticism after an AI tool designed to summarize notifications created false headlines for the BBC News and New York Times apps, leading the company to suspend the feature while it made improvements.

AI development has continued, with companies claiming significant improvements in ability and accuracy, though so-called "hallucinations" persist.

ChatGPT-maker OpenAI recently reported identifying "goblin" and "gremlin" references creeping into its systems' responses—a quirk believed to have arisen after training a tool with a nerdy persona that encouraged mentioning these creatures.

False case law or citations appearing in court filings have raised warnings about AI use in legal contexts, with AI errors reportedly causing issues for some governments as well.


This article was sourced from the BBC.
