⚖️ EDPS Releases Guidelines on the Use of Generative AI & Privacy for EU Institutions ⚖️

The EDPS (European Data Protection Supervisor) has just released comprehensive guidelines on generative AI and personal data protection for EU institutions, bodies, offices, and agencies (EUIs). The guidelines underscore the critical importance of data privacy in the context of artificial intelligence.

If you're unfamiliar with the EDPS, it is the independent supervisory authority responsible for protecting personal data and privacy, and for promoting good practices, within EU institutions and bodies.

The guidelines emphasise data protection's core principles, with concrete examples to help anticipate risks, challenges, and opportunities in generative AI systems and tools. For clarity, the EDPS has not issued these guidelines in its role as AI supervisor for EU institutions under the EU's Artificial Intelligence Act, for which a separate strategy is being prepared.

For more details, read the full guidelines: https://loom.ly/eX2p-Rk

#DataProtection #AI #GenerativeAI #Privacy #EDPS #EUCompliance #Innovation #DataSecurity
MOD1 AG’s Post
More Relevant Posts
-
How do we address the complex issues of AI and privacy in the Global South? What are the unique challenges and opportunities in this context? At PrivacyNama 2024, we will explore these questions and more, discussing the regulatory frameworks needed to balance innovation with privacy protection. With sessions on AI sovereignty, cross-border data flows, and the application of privacy principles to AI, this event promises to be a comprehensive look at the state of AI and privacy worldwide. Read more about the event here. Join us on October 3rd & 4th for two days of thought-provoking conversations with global experts. Register here: https://lnkd.in/dr5KadKp #PrivacyNama2024 #AI #GlobalSouth #TechPolicy #PrivacyRights
-
The latest OECD report on AI and privacy is hot off the press, and it's packed with insights that are sure to shake things up! 🌟 As AI continues to evolve, especially with the rise of generative AI, it's more important than ever to tackle the privacy and data governance challenges that come with it. This report dives deep into these issues and offers some fantastic insights and recommendations. The report underscores the importance of international cooperation in creating AI systems that respect and support privacy. It's a positive step towards harmonizing regulatory frameworks and ensuring robust privacy protections in the age of AI. #AI #Privacy #DataGovernance #OECD #Collaboration #Innovation #AIPrinciples #PrivacyGuidelines #DataPrivacy #Mysimplifiedlaw #TeamWorkMakesTheDreamWork
-
Discover how Integral Ad Science (IAS) leads with transparency and ethical AI practices, earning TrustArc's Responsible AI Certification for data protection and privacy. Read the full story here: https://lnkd.in/d_dMdbnC #martech #martechedge #marketinginsights #ai #aicertification #dataprotection #privacy #businessgrowth #innovation #transformation #technology
Integral Ad Science (IAS) Earns TrustArc's Responsible AI Certification
martechedge.com
-
The latest thinking and direction from CIPL on how to ensure a progressive and proportionate interpretation of data protection rules for AI training and development. Data protection laws cannot be an impediment to the beneficial development and use of AI, that much is clear. And here is how to do it! #CIPL #responsibleAI #dataprivacy
📢 NEW FROM CIPL - Applying Data Protection Principles to Generative AI: Practical Approaches for Organizations and Regulators

In this discussion paper, CIPL considers the following key privacy and data protection concepts and explores how they can be effectively applied to the development and deployment of genAI models and systems:
- Fairness;
- Collection limitation;
- Purpose specification;
- Use limitation;
- Individual rights;
- Transparency;
- Organizational accountability; and
- Cross-border data transfers.

The analysis in this paper builds on our previous work on the intersection of data protection, artificial intelligence, and organizational accountability and synthesizes it for the context of genAI models and systems. Our work has included convening global regulators, academia, and industry to discuss the emerging tensions between AI technologies and data protection principles and develop potential solutions that resolve or mitigate these tensions.

See the full paper below 👇 Download a copy here: https://lnkd.in/em7J4tmC

#genAI #generativeAI #research #artificialintelligence #privacy #dataprotection
-
Privacy risks in generative AI are a recurring concern, as AI inherently revolves around data. Generative AI systems use foundation models trained on massive datasets from various sources, such as web content, licensed materials, and academic or industry collections. These models analyze statistical relationships within the data to respond to diverse user prompts and generate predictive, useful outputs.

Datasets and user prompts may include personal and sensitive information. There is a risk that these systems could inadvertently leak personal data from training datasets, produce inaccurate information (known as "hallucinations"), or be exploited by malicious actors to bypass safeguards against data disclosure. As a result, there is growing discussion and concern about how data privacy laws should apply, the new risks posed by generative AI, and how to maximize AI's benefits while protecting personal information.

This discussion paper by CIPL offers a fresh perspective on applying data privacy principles in the rapidly evolving field of generative AI. Here are some key takeaways:

✅ Understanding the AI lifecycle: Applying data protection principles requires understanding the AI lifecycle. For example, during development, consider the principle of data minimization: avoid using sensitive data unless necessary, or rely on de-identified data (a minimal illustrative sketch follows the reshared paper reference below).
✅ Sensitive data and bias reduction: While processing sensitive data poses risks, it can reduce bias in outputs. Laws and regulations should support the responsible use of sensitive data for bias reduction and content safety, especially in applications with significant legal or societal impacts.
✅ Privacy-enhancing technologies: Developers should adopt privacy-enhancing and privacy-preserving technologies (PETs/PPTs) so that models can access rich datasets while minimizing privacy risks.
✅ Transparency: Transparency must be meaningful, context-specific, and compliant with legal requirements.
✅ Organizational accountability: Organizations must establish comprehensive, risk-based AI and data privacy programs. Regulators should incentivize accountability in AI development and deployment.

Data privacy laws should be seen as a guiding tool, not a barrier, helping developers embed privacy principles into every phase of the AI development lifecycle.

#dataprotection #artificialintelligence #ai #aigovernance #privacypro
🔁 Reshared: 📢 NEW FROM CIPL - Applying Data Protection Principles to Generative AI: Practical Approaches for Organizations and Regulators (full post above). Download a copy here: https://lnkd.in/em7J4tmC
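To make the data-minimization takeaway concrete, here is a minimal Python sketch of de-identifying free text before it is used for model training. It is not drawn from the CIPL paper; the regex patterns and the `scrub` helper are illustrative assumptions only, and a production system would use a vetted PET/PPT toolkit rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; real de-identification needs a dedicated
# toolkit (NER-based detection, audit logging, re-identification risk review).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scrub(text: str) -> str:
    """Replace detected identifiers with typed placeholders: data minimization
    keeps the linguistic signal while dropping the direct identifiers."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    record = "Contact Maria at maria.lopez@example.com or +44 20 7946 0958."
    print(scrub(record))
    # -> Contact Maria at [EMAIL] or [PHONE].
```

The design choice here mirrors the takeaway: the placeholder tokens preserve sentence structure for training while the direct identifiers never reach the model.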
-
Are you constantly considering how existing data protection principles can possibly relate to generative AI? Well, we have done the heavy lifting for you (Alexa, play the 12 Days of Christmas song).

CIPL goes in-depth in this latest report on how key global data protection principles can be applied to the development and deployment of GenAI. We also provide a list of recommendations for organisations and policy makers when assessing these technologies. Some of them include:

🧑🧑🧒🧒 Apply principles separately at each phase in the AI lifecycle
💻 Legitimate interests is a relevant legal basis for training AI on publicly available data, such as through web scraping
🔐 Sensitive personal data is actually key to non-biased and non-discriminatory GenAI models
⚔️ Privacy-enhancing and privacy-preserving technologies are and should be used by developers and encouraged by regulators
⚖️ Fairness in GenAI means having accurate and accessible models that do not discriminate
📀 Data minimisation is not in conflict with processing the large amounts of data that GenAI models need; it is about limiting collection and use to what is necessary

To find out the other 8 recommendations, take a read of our paper below 👇
🔁 Reshared: 📢 NEW FROM CIPL - Applying Data Protection Principles to Generative AI: Practical Approaches for Organizations and Regulators (full post above). Download a copy here: https://lnkd.in/em7J4tmC
-
It was a pleasure to contribute to this paper alongside Shravan Subramanyam and Monika Tomczak-Gorlikowska. 👏 Great to be able to provide industry knowledge on such a relevant and crucial topic in the privacy space. 🌍 #Privacy #LegalTech #PrivacyandAI #GenAI
🔁 Reshared: 📢 NEW FROM CIPL - Applying Data Protection Principles to Generative AI: Practical Approaches for Organizations and Regulators (full post above). Download a copy here: https://lnkd.in/em7J4tmC
-
Anonos ("data without the drama") has released a new whitepaper highlighting how to safeguard data privacy within LLMs without sacrificing performance. Ted Myerson and Gary LaFever tackle the critical balance between AI innovation and data privacy.

Gartner predicts that by 2027, noncompliance with data protection laws will impact AI deployments, which makes the Anonos platform and their patented tech even more vital. The whitepaper shows how to protect sensitive data in LLMs without sacrificing performance, achieving nearly identical results with protected data in fine-tuning and similar performance in Retrieval-Augmented Generation (RAG). A minimal illustrative sketch of the general idea follows the link below.

Read more here: #AI #DataPrivacy #Innovation #LLMs #TechTrends #AIRegulation
How to Mitigate LLM Privacy Risks in Fine-Tuning and RAG | Anonos
anonos.com
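Anonos's patented approach is not described in this post, so as a generic illustration only, here is a minimal Python sketch of the broad pattern the whitepaper is concerned with: swap direct identifiers for reversible tokens before documents enter a RAG index or prompt, and re-link them only inside the trust boundary. The `Pseudonymizer` class, its `protect`/`relink` methods, and the email-only pattern are hypothetical simplifications, not Anonos's technology.

```python
import re
from itertools import count

# Hypothetical reversible pseudonymization for a RAG pipeline: direct
# identifiers are swapped for stable tokens before documents are indexed
# or sent to an LLM; the mapping never leaves the trust boundary.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

class Pseudonymizer:
    def __init__(self):
        self._forward = {}   # real value -> token
        self._reverse = {}   # token -> real value
        self._ids = count(1)

    def protect(self, text):
        """Replace emails with stable tokens so retrieval and generation
        never see the raw identifier."""
        def swap(match):
            value = match.group(0)
            if value not in self._forward:
                token = f"<PERSON_{next(self._ids)}>"
                self._forward[value] = token
                self._reverse[token] = value
            return self._forward[value]
        return EMAIL_RE.sub(swap, text)

    def relink(self, text):
        """Restore identifiers in an LLM answer for authorized users only."""
        for token, value in self._reverse.items():
            text = text.replace(token, value)
        return text

if __name__ == "__main__":
    p = Pseudonymizer()
    doc = "Invoice sent to anna.kowalski@example.org on 2024-05-02."
    protected = p.protect(doc)    # this version goes into the vector store / prompt
    print(protected)              # Invoice sent to <PERSON_1> on 2024-05-02.
    answer = "The invoice was sent to <PERSON_1>."
    print(p.relink(answer))       # re-identified only inside the trust boundary
```

Because the tokens are stable per value, retrieval quality over the protected corpus is largely preserved, which is the performance point the whitepaper emphasizes.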
-
The intersection of AI and privacy is more relevant than ever. At PrivacyNama 2024, we’ll discuss how AI’s growth is influencing privacy laws and data sovereignty. Join us on October 3rd & 4th for a deep dive into these topics. Read more about the event here. Register to attend: https://lnkd.in/dr5KadKp #AI #DataProtection #PrivacyNama2024 #TechPolicy #GlobalSouth
Welcome! You are invited to join a webinar: PrivacyNama 2024 | October 03 & October 04. After registering, you will receive a confirmation email about joining the webinar.
us02web.zoom.us