Norway’s Data Guardians Decry AI’s Unchecked Intrusions
Generative artificial intelligence (AI) has become a staple of the tech industry, but not without critics. Data privacy experts in Norway are raising fresh concerns about these new technologies.
These AI systems, capable of generating near-human content in the form of text, images, and sound, have revolutionized the landscape, but at what cost?
This past June, the Norwegian Consumer Council took a stand by releasing a report called "Ghost in the Machine - Addressing the Consumer Harms of Generative AI."
The report proposed a framework that would guide the development and use of generative AI, all while safeguarding human rights.
Meanwhile, Datatilsynet, Norway's data protection authority, has voiced concerns about potential infringements of the General Data Protection Regulation (GDPR) linked to these technologies.
Challenges of AI Data Collection

According to Tobias Judin, head of Datatilsynet's international section, AI's data collection process is a significant concern. These AI systems are mostly foundation models, versatile enough to be used in numerous applications.
Typically, these AI models pull information from a massive pool of open-source data, much of which is personal.
"The trouble with this is twofold. Firstly, is it even legal to collect such a broad range of personal data? Many experts say no. Secondly, are people even aware that their data is being used in this way? The answer is probably not," says Tobias Judin.

These practices, he points out, appear to flout the GDPR principle of data minimization, which stipulates that data collection should be limited to what is essential.
Another worry is the quality and accuracy of the data, as it often includes information from contested sources, including unreliable web forums. Despite this, the data is still used for training the models, potentially leading to built-in biases and inaccuracies.
While some organizations may believe deleting the data after training resolves privacy issues, recent developments, such as model inversion attacks, suggest otherwise. These attacks work by making specific queries to the AI model to recover the original training data.
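To make the idea concrete, here is a deliberately simplified toy sketch, not a real attack on any deployed system. The `model_score` function is a hypothetical stand-in for a model API that leaks how similar a query is to a memorized training record; given only that score, the "attacker" recovers the record one character at a time through repeated queries.

```python
# Toy illustration of the model-inversion idea (hypothetical, simplified):
# an attacker with query access to a leaky similarity signal reconstructs
# a memorized training record without ever seeing the training data.
import string

SECRET = "alice@example.com"  # stands in for a memorized personal datum


def model_score(guess: str) -> int:
    """Hypothetical model API that leaks closeness to training data:
    here, the number of character positions matching the secret."""
    return sum(a == b for a, b in zip(guess, SECRET))


def invert(length: int) -> str:
    """Recover the secret purely by querying model_score."""
    alphabet = string.ascii_lowercase + string.digits + "@._-"
    recovered = ["?"] * length  # '?' never matches, so it scores zero
    for i in range(length):
        # Try every candidate character at position i; keep the one
        # that raises the leaked similarity score.
        recovered[i] = max(
            alphabet,
            key=lambda c: model_score("".join(recovered[:i]) + c + "".join(recovered[i + 1:])),
        )
    return "".join(recovered)


print(invert(len(SECRET)))  # prints: alice@example.com
```

Real attacks are far subtler, exploiting confidence scores or generated outputs of large models, but the principle is the same: queries alone can leak training data, so deleting the raw dataset after training does not remove the privacy risk.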
Addressing AI Compliance and Enforcement

One of the most significant issues emerging in this field pertains to data rectification and erasure.
Judin raises a concerning scenario, suggesting that if an authority were to demand the deletion of specific personal data from an organization, it might necessitate the erasure of the entire AI model.
This is because the data is deeply integrated into the model, posing a substantial compliance issue. Once the model goes live, it's nearly impossible to correct errors or inaccuracies it generates. User queries, meanwhile, could be used for "service improvements" or targeted advertising, enabling continuous data collection.
Reflecting on these issues, the Norwegian Consumer Council urges EU institutions to stand firm against lobbying pressure from major tech companies and insists that these bodies enforce stringent consumer protection laws.
However, the Council emphasizes that legislation alone is insufficient. It suggests that proper enforcement is critical, pushing agencies to be adequately equipped with the necessary resources.
The post Norway's Data Guardians Decry AI's Unchecked Intrusions appeared first on The Tech Report.