It is correct that AI models are not databases or structured repositories of facts that can be queried directly. However, it is wrong to suggest, as many do, that such models cannot contain personal data. The definition of personal data is deliberately broad, in order to ensure a high level of protection of natural persons. All it takes is for information to relate to an identified or identifiable natural person (a data subject), and that relation can arise by content, purpose or effect. I argue that even data that is abstracted and only accessible indirectly can be information relating to a data subject. It does not matter that the information might be incorrect, or that the controller's aim is not to store personal data. #EU #GDPR #PersonalData
Robin Nariman’s Post
Webinar this Thursday with GRC expert Michael Rasmussen 🎉: Essential Data Governance Strategies for Effective AI Compliance. Link below 👇 to register! #datagovernance #ai #archive360
🗺️ Navigate Regulatory Standards with Data Governance. Compliance is non-negotiable in the AI landscape. Robust data governance helps you meet regulatory standards like GDPR and HIPAA, safeguarding your organization from fines and reputational damage. 📜✅ Explore this topic further in our full blog: https://bit.ly/4dU8Auy It's not too late! Join us on June 6th for a webinar that dives into AI data governance and compliance: https://bit.ly/3KaQ7MD #Compliance #DataGovernance #AI #GDPR #HIPAA #Webinar
The text of the #GDPR as published in 2016 has increasingly been supplemented by other EU Directives and Regulations. Here is an example: Article 8 of the recently approved #platform work directive obliges platforms to carry out a DPIA. Another example is Article 10(5) of the forthcoming #AI Act, which allows providers of high-risk AI systems to process special categories of personal data in order to detect potential bias in their tools. And these two examples are not unique. The European Commission should provide us with a sort of "recast" GDPR so that we do not lose track of these numerous add-ons.
Good read if you want to know more about general-purpose AI system hallucinations.
"AI hallucinations — instances where general-purpose artificial intelligence systems generate convincing, yet false, information — present significant challenges under the EU General Data Protection Regulation, especially regarding the principle of accuracy and data subject rights. Recent complaints against platforms like #ChatGPT have underscored these issues." Continue reading "Ghosts in the algorithm: Busting #AI hallucinations under the #GDPR" by Theodore Christakis: https://bit.ly/49j6MZQ
Excellent paper by Theodore Christakis titled "Ghosts in the algorithm: Busting AI hallucinations under the GDPR":

"AI hallucinations occur when, for several reasons, general-purpose AI systems produce content that is convincing but false or nonsensical. Highly publicized instances in 2022–23 led critics to label these models negatively, accusing them of disseminating 'careless speech.'"

"Despite significant improvements in 2024, hallucinations persist, undermining AI reliability, especially in contexts where accuracy is crucial, such as legal matters, health care, and news reporting."

"The inaccuracies, or hallucinations, are unintended artifacts of the generative process, not deliberate misrepresentations of stored personal data. Since LLMs lack discrete records and do not function as databases, applying the GDPR's accuracy requirement in the traditional sense may be neither feasible nor appropriate. The Hamburg DPA thus emphasizes a widely accepted view: the outputs of LLMs are probabilistic in nature, and despite the risk of occasional regurgitations, LLMs are 'not databases from which outputs are pulled.'"

#ai #hallucinations #llms #genai #deeplearning #aiengineering #aigovernance #bigdata #datagovernance #privacy #cybersecurity #privacyengineering
AI hallucinations—when AI systems produce information that appears credible but is actually false—pose notable challenges under the EU General Data Protection Regulation (GDPR). These issues directly impact the principle of accuracy and the rights of data subjects, as highlighted by recent complaints involving platforms like #ChatGPT. For a deeper dive into this topic, check out "Ghosts in the Algorithm: Busting AI Hallucinations under the GDPR" by Theodore Christakis: https://bit.ly/49j6MZQ #dataprotection #privacy #AI
This is going to be good! Bart Willem Schermer will bring the facts on all you need to know about the new AI Act in Europe and Valital Technologies' Alex Aoun will talk about the practical aspects of using AI in background checks, with Alice Q. in the mix to bring it all together. Wish I could be there, PBSA Europe. #Valital #VerifyThenTrust #AMaaS
The intersection of #GDPR, #AI and #dataprivacy in Europe is a hot topic with significant implications for businesses worldwide. Join us for a thought-provoking session at the PBSA Europe Summit with Bart Willem Schermer of Considerati and Alex Aoun of Valital Technologies, moderated by our Chair, Alice Q. A presentation and panel discussion will delve into the new AI Act in Europe, prohibited practices, and GDPR considerations, exploring the boundaries of background screening. There is still time to register: https://lnkd.in/dtYTUmCW #PBSAEurope #backgroundscreening #EuropeSummit
🔍 Unlock the Secrets to GDPR-Compliant AI 🔍 Are you navigating the complexities of implementing AI while ensuring GDPR compliance? Our new blog post provides a detailed roadmap to help your organisation align AI innovations with data protection law. Because AI systems require large datasets to function optimally, they often clash with the GDPR's data minimisation and purpose limitation principles, which can make compliance feel like a moving target. Transparency and consent are other critical areas where AI implementation can falter. We discuss how to create clear, concise, and effective consent mechanisms that keep pace with AI's evolving capabilities. Additionally, investing in explainable AI is crucial for meeting the GDPR's transparency requirements and enhancing stakeholder trust. Read more here: https://www.rfr.bz/l13f0d4 #GDPR #ArtificialIntelligence #PrivacyByDesign #DataProtection #TechInsights #AIInnovation #ComplianceStrategies
The CNIL (France's data protection authority) has published new guidance on deploying generative AI systems, focusing on data protection and responsible use. Here are the key points:

👉 Identify concrete needs before deployment
👉 Define allowed and prohibited uses
👉 Acknowledge system limitations and risks
👉 Opt for robust, secure deployment methods and local systems
👉 Train and educate end-users on proper usage and potential risks
👉 Implement governance to ensure GDPR compliance

This publication aims to help organizations implement AI responsibly and securely. #AI #DataProtection #GDPR #CNIL #TechInnovation #ResponsibleAI
Project Manager | Scrum Master | SAFe 6.0, PSM, Prince 2 | CIPP/E, CIPM, CEH | solely my views
Linkability is the key concept here. Proper de-identification is extremely hard to achieve, and for AI developers it tends to be the last priority, if it is a priority at all.