Italy’s data privacy watchdog has announced that the country will temporarily block the artificial intelligence chatbot ChatGPT following a recent data breach.
The Italian Data Protection Authority is investigating a possible violation of the EU’s data protection rules and has taken provisional action until ChatGPT can ensure user privacy.
While some schools and universities around the world have already blocked ChatGPT from their networks due to plagiarism concerns, it remains unclear how Italy will block the platform at a national level.
The Italian watchdog cited the EU’s General Data Protection Regulation and gave OpenAI 20 days to report on the measures it has taken to protect user data, or face a fine of up to €20 million ($22 million) or 4% of its annual global revenue, whichever is greater.

The data breach, which occurred on March 20, exposed users’ conversations and subscribers’ payment information. OpenAI had previously taken ChatGPT offline to fix a bug that allowed some users to see the titles of other users’ chat histories.
The Italian watchdog has criticized OpenAI for the “massive collection and processing of personal data” used to train the platform’s algorithms and for failing to notify users whose data it collects. It has also raised concerns over the false information ChatGPT can sometimes generate about individuals, the lack of age verification, and the exposure of children to inappropriate content.
A couple of days ago, billionaire entrepreneur and Twitter chief Elon Musk and other tech leaders called for a pause in the development of powerful artificial intelligence (AI) systems to ensure their safety.
The demand was made in an open letter titled “Pause Giant AI Experiments” and has been signed by over 1,000 people, including Musk and Apple co-founder Steve Wozniak.
The letter comes in response to San Francisco startup OpenAI’s release of GPT-4, a more advanced version of the model behind its AI chatbot ChatGPT.
The letter calls for a six-month pause in the training of AI systems more powerful than GPT-4, to be used to develop safety protocols and AI governance systems and to ensure that AI systems are more accurate, safe, trustworthy, and loyal.