Italy closes probes into AI firms after commitments on ‘hallucination’ risks

Updated 30 Apr, 2026 04:35pm

Italy’s antitrust authority said on Thursday it had closed investigations into three AI companies over allegedly unfair commercial practices involving generative artificial intelligence, after accepting binding commitments from them.

The regulator, known as the AGCM, also polices consumer rights.

It said it had targeted China’s DeepSeek, France’s Mistral AI SAS and Turkey’s Scaleup Yazilim Hizmetleri Anonim Şirketi over risks of so-called AI hallucinations — the generation of inaccurate or misleading content.

In response, the three companies have agreed to better inform users about hallucination risks via their websites and apps, adding permanent disclaimers to their chatbot services, the authority said.

DeepSeek also agreed to invest in technology to reduce the risk of hallucinations, while acknowledging that current technology cannot prevent them entirely.

As part of Scaleup’s commitments, its cross-platform chatbot service NOVA AI agreed to make clear to consumers that it provides a single interface for accessing several chatbots, and that it does not aggregate or process their responses, the AGCM said.
