US FTC Opens Investigation Into ChatGPT Maker OpenAI, Claims Firm Broke Consumer Protection Laws: Report

The US Federal Trade Commission has opened an investigation into OpenAI, the maker of ChatGPT, on claims it has run afoul of consumer protection laws by putting personal reputations and data at risk, the Washington Post reported on Thursday.

The reported move marks the strongest regulatory threat yet to the Microsoft-backed startup, which kicked off the frenzy in generative artificial intelligence, enthralling consumers and businesses while raising concerns about the technology’s potential risks.

The FTC this week sent a 20-page demand for records about how OpenAI addresses risks related to its AI models, the Post said, citing a document. The agency is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers, the newspaper added.

The FTC and OpenAI did not immediately respond to Reuters’ requests for comment.

As the race to develop more powerful AI services accelerates, regulators are stepping up scrutiny of a technology that could upend the way societies and businesses operate.

Global regulators are aiming to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, Reuters reported in May.

In the United States, Senate Majority Leader Chuck Schumer has called for “comprehensive legislation” to advance and ensure safeguards on AI, and will hold a series of forums later this year.

OpenAI also ran into trouble in March in Italy, where the data protection regulator had ChatGPT taken offline over accusations that OpenAI had violated the European Union’s General Data Protection Regulation (GDPR), a wide-ranging privacy regime that took effect in 2018.

ChatGPT was later reinstated in Italy after the US company agreed to add age verification features and to let European users block their information from being used to train the AI model.

© Thomson Reuters 2023

