China’s Payment Association Warns Over Risks of Data Leaks While Using ChatGPT-Like AI Tools


China’s payment & clearing industry association warned on Monday against using Microsoft-backed OpenAI’s ChatGPT and similar artificial intelligence tools due to “risks such as cross-border data leaks.”

“Payment industry staff must comply with laws and rules when using tools such as ChatGPT, and should not upload confidential information related to the country and the finance industry,” the Payment & Clearing Association of China said in a statement on Monday. The association is governed by China’s central bank.

OpenAI has kept its artificial intelligence-powered chatbot off-limits to users in China, but the app is attracting huge interest there, with firms rushing to integrate the technology into their products and launch rival solutions.

While residents in China are unable to create OpenAI accounts, virtual private networks and foreign phone numbers are helping some bypass those restrictions to access the chatbot.

Italy has temporarily banned ChatGPT and launched a probe over suspected breaches of privacy rules. Some European countries were studying whether stronger measures were needed.

Excitement in China over the chatbot has helped to fuel a rally in tech, media and telecom (TMT) shares, with analysts cautioning bubble risks.

Economic Daily, a Chinese state media outlet, published a commentary on Monday urging regulators to step up supervision and crack down on speculation in the sector.

Chinese shares in computer, media and communications equipment tumbled between 3.4 percent and 5.6 percent on Monday.

© Thomson Reuters 2023
