Apple to update AI news feature which has generated false information

Acknowledgement of issue comes after many complaints

Image: Inaccurate news alerts likely to further fuel mistrust in traditional news sources

Apple has said it will update a new AI feature that has generated false news alerts on its latest iPhones – but not by making it more accurate.

Apple acknowledged the concerns for the first time yesterday and said it was working on a software update to “further clarify” when notifications are summaries generated by Apple Intelligence.

The company has been criticised for its lack of response to a number of complaints about the feature, which groups notifications together so that users can pick out key details quickly. According to Apple, it helps iPhone users to focus.

However, the feature has generated some inaccurate alerts.

The BBC complained last month after an AI-generated summary of its headline falsely told some readers that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself. Last week, the feature told users that Luke Littler had won the PDC World Darts Championship before the match had begun and that Rafael Nadal had come out as gay.

The BBC has been particularly concerned because the notifications look like they are coming from the BBC.

"These AI summarisations by Apple do not reflect – and in some cases completely contradict – the original BBC content," the BBC said on Monday.

"It is critical that Apple urgently addresses these issues as the accuracy of our news is essential in maintaining trust."

In a statement to the BBC, Apple said:

“Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback.”

“A software update in the coming weeks will further clarify when the text being displayed is summarisation provided by Apple Intelligence.”

A cautionary Fable

Meanwhile, some users of the online book club forum Fable found that its “2024 wrapped” feature used bigoted and racist language to describe their reading choices.

One user was advised to “surface for the occasional white author” and another was asked if they were “ever in the mood for a straight, cis white man’s perspective”.

Another was told that their taste for romantic comedy “has now set the bar for my cringe-meter.”

In an Instagram post this week, Chris Gallello, the head of product at Fable, addressed the problem of AI-generated summaries on the app, saying that Fable began receiving complaints about “very bigoted racist language, and that was shocking to us”.

“As a company we underestimated how much work needs to be done to ensure these models are doing it in a responsible, safe way.”

In a follow-up video, Gallello confirmed that Fable would be removing three key features reliant on AI, including the wrapped summary.

"Having a feature that does any sort of harm in the community is unacceptable," he stated, acknowledging that more work needs to be done to ensure AI models operate responsibly and safely.

Computing says:

Both of these stories came about because Apple and Fable pushed out AI-driven features before they were ready. Both should serve as a cautionary tale to companies desperately trying to launch generative AI functionality for commercial reasons before testing it properly.

The data the Fable model was trained on clearly had some serious underlying bias. Certain social media sites and some very unsavoury corners of the internet contain almost nothing but a “straight, cis white man’s perspective”, and it isn’t difficult to see how Fable might have underestimated both the risk of biased data skewing its model and the implications of that bias.

The Apple story is more worrying because the false alerts look as if they are coming from the BBC. Given the parlous state of public trust in traditional news sources like the BBC, both the haste with which Apple made this feature available and the slowness of its response look irresponsible.

The fact that Apple’s response doesn’t emphasise the accuracy of the feature, and simply focuses on making the attribution clearer, doesn’t exactly instil confidence in the company’s commitment to developing features built on responsible and ethical generative AI.