Privacy and AI #11

In this edition of Privacy and AI:

PRIVACY AND AI GIVEAWAY (CLOSED)

PRIVACY

• Cisco 2024 Data Privacy Benchmark Study

• The State of Data Privacy In 2024

• Swedish DPA Annual Report 2023 - a focus on guidance for innovation

• Data protection in innovation - Swedish Agency for Digital Government

ARTIFICIAL INTELLIGENCE

• Governing AI in organizations

• ISO 42005 - AI System Impact Assessment is open for comments

• Generative AI and IP rights

• European AI Office

• Introduction to AI Assurance (UK Gov)

• City of San Jose (CA) AI Reviews & Algorithm Register

• Responsible Use Guide for LLMs (Meta)

• Updates on AI4Gov project

PRIVACY EVENTS IN UAE NEXT WEEK

• Privacy Summit Dubai 2024 (4th March) at DIFC Academy

• Privacy Summit Abu Dhabi 2024 (7th March) at ADGM



PRIVACY AND AI GIVEAWAY (CLOSED)

Last year I launched my book "Privacy and AI" where I explore the challenges of processing personal data with AI systems.

While I could foresee growing interest in AI, and given that this is a work addressing a very specific issue, I honestly didn't expect it to be purchased by many professionals. I was wrong. I'm very happy that it was received with huge enthusiasm by privacy and AI professionals.

I acknowledge that it may be unaffordable for professionals working in low- or middle-income countries. As I did during the last two years with GDPR in Charts, I gave away 30 digital copies of Privacy and AI for free.

Soon after the giveaway was announced, more than 50 privacy and AI pros from all around the globe contacted me to get their copies. It was, as usual, a very rewarding activity, and I plan to continue doing it in the years to come.

The giveaway is currently closed, since all available copies have been claimed by privacy and AI professionals.

The book can be found here (digital) or on Amazon here

Link to the giveaway here

Let's celebrate Data Protection Day!



PRIVACY

Cisco 2024 Data Privacy Benchmark Study

Cisco has published its 2024 report titled "Privacy as an Enabler of Customer Trust"

Key findings are

1. Privacy has become a critical element and enabler of customer trust, with 94% of organizations saying their customers would not buy from them if they did not protect data properly.

2. Organizations strongly support privacy laws around the world, with 80% indicating legislation has had a positive impact on them.

3. The economics of privacy remain attractive, with 95% saying benefits exceed costs and the average organization realizing a 1.6x return on their privacy investment.

4. There has been relatively slow progress on building customer confidence with respect to AI; 91% of organizations still recognize they need to do more to reassure their customers.

5. Organizations are already getting significant value from generative AI applications, but they’re also concerned about the risks to their intellectual property or that the data entered could be shared with competitors or the public.

6. Organizations believe that global providers, operating at scale, can better protect their data compared to local providers.

As seen, critical aspects of privacy programs remain the complexity of the legal landscape and the emergence of AI processing, in particular GenAI.

Link here


The State of Data Privacy In 2024

ISACA infographic

Link here


Swedish DPA Annual Report 2023 - a focus on guidance for innovation

The Integritetsskyddsmyndigheten (IMY) submitted its annual report to the government

Important focus areas

A) guidance to innovators: two completed projects on regulatory sandboxes

• federated ML in healthcare

• use of sensors as an alternative to camera surveillance to measure security in the public space

B) complaint handling and supervision

• it opened over 200 cases and imposed sanctions totalling more than EUR 11m in 2023

C) work on law enforcement

Link here


Data protection in innovation - Swedish Agency for Digital Government

The Swedish Agency for Digital Government released guidelines in 2023 on implementing data protection controls during the innovation process.

I collected the different pieces and put them into a single document. The translation is automatic and has not been checked.

Link here



ARTIFICIAL INTELLIGENCE

Governing AI in organizations

I've been updating a slide deck I have used for presentations on the AI Act.

In this release, I classified the requirements of High-Risk AI System (HRIAS) providers.

• PEOPLE

Some requirements concern the organization's employees or management and relate to the need to upskill or reskill individuals to manage AI effectively.

• AI GOVERNANCE

Others relate to the need to establish policies, processes, procedures, and practices for mapping, measuring, and managing AI functioning, quality, and risks, and to ensure that these documents are implemented. These are in general not system-specific and apply across the organization.

• AI SYSTEMS

These are requirements that particular systems must meet before they are placed on the market.

• AI ACCOUNTABILITY

These requirements relate to the responsibility of the HRIAS provider and the ability to demonstrate compliance with the AI Act.

This classification allows organizations subject to the (forthcoming) AI Act to create a roadmap and prioritize certain activities. For instance, AI literacy should be started as soon as possible. Other activities, whether system-specific (for instance, accuracy) or company-wide (risk management system), should be implemented some time before the AI system is launched (start working one year ahead, for instance, considering the usual AI development lifecycle). Finally, others can only be implemented once the AI Act is applicable, for instance, registration of the HRIAS.

NB: an AI inventory is not an express requirement in the AIA. Instead, it can be found, for instance, in the NIST AI RMF (GOVERN 1.6). However, the AI inventory is a crucial instrument for AI operators and enables a 360° view of organizational AI assets.
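Since the AIA and the RMF describe the inventory only at the level of principle, here is a minimal sketch of what an inventory record could look like in practice. All field names are my own assumptions for illustration, not prescribed by either framework:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organizational AI inventory (illustrative fields only)."""
    name: str                    # internal identifier of the system
    owner: str                   # accountable business owner
    purpose: str                 # intended purpose / use case
    risk_class: str              # e.g. "high-risk" in the AI Act sense
    personal_data: bool          # whether the system processes personal data
    mitigations: list[str] = field(default_factory=list)

# The inventory is then a collection of such records, which can be queried
# to build the compliance roadmap (e.g. all high-risk systems first).
inventory = [
    AISystemRecord(
        name="cv-screening-tool",
        owner="HR",
        purpose="rank incoming job applications",
        risk_class="high-risk",
        personal_data=True,
        mitigations=["human oversight", "bias testing"],
    ),
]
high_risk = [r for r in inventory if r.risk_class == "high-risk"]
print(f"{len(high_risk)} high-risk system(s) to prioritize")
```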

Link here


ISO 42005 - AI System Impact Assessment is open for comments

ISO 42005 guides organizations performing AI system impact assessments for individuals and societies that can be affected by an AI system and its intended and foreseeable applications.

It also includes information about

- how and when to perform AI IA

- how the AI IA can be integrated into an organization’s AI risk management and AI management system (ie, integration with ISO 23894 and ISO 42001)

On the integration of the different standards, it is worth mentioning that

- AI Risk Management is an overall management activity that addresses the organization as a whole, including how developers develop AI systems, how AI providers manage customer relationships, and how AI deployers use AI systems

- AI Impact Assessment addresses the potential impact of AI systems, but it is limited to particular individuals or societies and to concrete AI use cases

It also addresses the integration with other impact assessments, such as privacy impact assessments, human rights impact assessments, etc

Annex E includes an AI system impact assessment template.
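Purely as an illustration of how such an assessment could be captured in structured form, here is a minimal sketch; the field names are hypothetical and do not reproduce the Annex E template:

```python
# Hypothetical structure for recording an AI system impact assessment.
# The fields are illustrative only; they are NOT the ISO 42005 Annex E template.
impact_assessment = {
    "system": "customer-support-chatbot",
    "intended_uses": ["answer billing questions"],
    "foreseeable_misuses": ["requests for medical advice"],
    "affected_groups": ["customers", "support staff"],
    "potential_impacts": [
        {"group": "customers", "impact": "incorrect refund information", "severity": "medium"},
    ],
    "mitigations": ["human escalation path", "output disclaimers"],
    # Integration points with AI risk management (ISO 23894) and the AIMS (ISO 42001)
    "linked_assessments": ["privacy impact assessment", "enterprise AI risk register"],
}
print(f"{len(impact_assessment['potential_impacts'])} impact(s) recorded")
```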

Link here


Generative AI and IP rights

One of the critical aspects of the development and use of GenAI is its potential to infringe on intellectual property rights

The World Intellectual Property Organization (WIPO) issued guidance on the major risks and mitigations.

To mitigate the risks of unauthorized access to confidential information

- check tool settings

- run AI tools on private clouds

- evaluate service provider monitoring practices

- access limitation (when using confidential information)

- draft policies and upskilling

- conduct security assessments of tools

To mitigate the risks of infringing IP rights

- use tools trained on licensed, public domain or user's own data

- check indemnities provisions against IPR claims in contracts

- dataset vetting when training or fine-tuning

- record-keeping for genAI training

- policies and training for staff

- avoid prompting about copyrighted work, trademarks, etc

- implement controls, like plagiarism checkers, image searchers, etc (a toy sketch of such a control follows this list)

- evaluate more mitigation measures
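As a toy example of the kind of automated control mentioned above (a plagiarism-style checker), the sketch below flags generated text that shares long word n-grams with a reference corpus. It is a naive heuristic for illustration only, not a production IP-clearance tool, and the sample corpus is invented:

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text (simple whitespace tokenization)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_check(output: str, reference_corpus: list[str], n: int = 8) -> bool:
    """Flag the output if any long n-gram also appears verbatim in the corpus."""
    out_grams = ngrams(output, n)
    return any(out_grams & ngrams(doc, n) for doc in reference_corpus)

# Hypothetical usage: 'protected_works' stands in for a corpus of texts to vet against.
protected_works = ["the quick brown fox jumps over the lazy dog again and again"]
generated = "as they say, the quick brown fox jumps over the lazy dog again and again"
if overlap_check(generated, protected_works):
    print("Potential verbatim reuse detected; route to human review.")
```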

Open source risks

- use tools trained on licensed examples

- use tools that offer indemnities against open-source infringements

- vendor vetting

- adopt a risk-benefit approach to generative AI use in coding

Deepfakes

- restrict the use of genAI to create deepfakes

- if there are legitimate business reasons to generate deepfakes, obtain consent/license

Risks related to IP rights and GenAI outputs

- review the GenAI tool T&C to understand who owns the output

- enhance control over outputs (eg, modifying outputs)

- document human role in the creation process

- negotiate ownership of outputs

- when commissioning works, guard against the use of GenAI

- avoid using genAI where IP rights are essential


Link here


European AI Office

On 24th Jan 2024 the EU Commission established the European AI Office

The European AI Office will operate within the Commission and will have, among others, the following tasks

• Supporting the AI Act and enforcing general-purpose AI rules

• Strengthening the development and use of trustworthy AI

• Fostering international cooperation

• Cooperation with institutions, experts and stakeholders

Also important: stay tuned!

The AI Office will soon start recruiting talent with a variety of backgrounds for policy, technical, and legal work, as well as administrative assistance. A call for expression of interest will be published, through which interested candidates are encouraged to apply.

Link here


Introduction to AI Assurance (UK Gov)

The UK Gov issued a guide for practitioners interested in finding out how assurance techniques can support the development of responsible AI.

The guidance refers to the principles for the responsible development and use of AI

- Safety, Security and Robustness

- Appropriate Transparency and Explainability

- Fairness

- Accountability and Governance

- Contestability and Redress

These principles set specific objectives (the “what”) that AI systems should achieve.

AI assurance techniques and standards help organizations and regulators understand “how” to operationalise these principles in practice, by providing agreed-upon processes, metrics, and frameworks that support them in achieving these goals.

It identifies six AI assurance mechanisms

- Risk assessments (identification, evaluation and mitigation of risks)

- Algorithmic impact assessment (evaluation of wider effects of the system in the environment, equality, human rights, etc)

- Bias audit (determination of whether there are unfair biases in any stage of the AI lifecycle; a minimal sketch follows this list)

- Compliance audit (adherence to regulations or internal policies)

- Conformity assessment (demonstration of whether the product meets relevant requirements)

- Formal verification (evaluation of specific requirements, often using mathematical methods)
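To make one of these mechanisms concrete, here is a minimal bias-audit-style check: it computes the demographic parity difference, i.e. the gap in positive-outcome rates between two groups, on a set of model decisions. The data and the tolerance threshold are invented for illustration:

```python
def positive_rate(decisions: list[int], groups: list[str], group: str) -> float:
    """Share of positive decisions (1) received by members of one group."""
    vals = [d for d, g in zip(decisions, groups) if g == group]
    return sum(vals) / len(vals)

# Hypothetical audit data: binary model decisions plus a protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = positive_rate(decisions, groups, "A") - positive_rate(decisions, groups, "B")
print(f"Demographic parity difference: {gap:.2f}")
if abs(gap) > 0.2:  # illustrative tolerance, not a legal threshold
    print("Gap exceeds tolerance; investigate the relevant lifecycle stage.")
```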

Key actions for organizations

1. Consider existing regulations

2. Upskill employees

3. Review internal governance and risk management

4. Seek regulatory guidance now

5. Consider involvement in AI standardization

Link here


City of San Jose (CA) AI Reviews & Algorithm Register

The City of San Jose (California) developed a simple AI Review Framework.

Before procuring AI solutions, the Digital Privacy Office (DPO) evaluates the benefits and risks of the AI system.

It also created a Vendor AI FactSheet template, which is based on the IBM AI FactSheets 360 project.

On the same webpage, the city publishes the Algorithm Register, a collection of the AI systems used by the municipality.

The register also includes a gunshot detection system piloted by the city.

Link here


Responsible Use Guide for LLMs (Meta)

Meta launched guidance to support developers in the responsible development of downstream LLM-powered products and features.

Different from a foundation model (a general-purpose AI system), an LLM-powered product has a defined use case and performs specific tasks to enable an intended use or capability through a user interface, sometimes embedded in products. An LLM-powered system encompasses both the foundation model and several product-specific layers.

It is suggested that developers examine each layer of the product to determine which potential risks may arise based on the product objectives and design, and implement mitigation strategies accordingly

After pretraining, the model can reproduce everything from simple grammatical rules to complex nuances like context, sentiment, and figurative language. However, the model only learns to predict the next word in a sentence based on the patterns in its training data.

Responsible LLM product development stages

1) Determine use case

The first decision in the development process is which use case(s) to focus on (e.g., customer support, AI assistants, internal productivity tools, entertaining end-user experiences, or research applications).

2) Fine-tune for product

Product-specific fine-tuning enables developers to leverage pretrained models, or models with some fine-tuning for a specific task, while requiring only limited data and resources. Developers can further train the model with domain-specific datasets to improve quality on their defined use case. Fine-tuning adapts the model to domain- or application-specific requirements and introduces additional layers of safety mitigations.

Steps to responsibly fine-tune

2.1) Define content policies and mitigations: The content policy defines what content is allowable and may outline safety limitations on producing illegal, violent, or harmful content

2.2) Prepare data: prepare and preprocess datasets that are representative; consider risks of bias, privacy, and security (a minimal preprocessing sketch follows these steps)

2.3) Train the model: set hyperparameters and adjust as necessary. Techniques can involve reinforcement learning from human feedback (RLHF) or AI feedback (RLAIF)

2.4) Evaluate and improve performance: assess the fine-tuned model on the test dataset and measure performance. Evaluation strategies include manual or automatic evaluation and red teaming.
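As a small illustration of the privacy point in step 2.2, the sketch below scrubs obvious PII patterns from training examples before fine-tuning. The regexes are deliberately simplistic assumptions; real pipelines rely on dedicated PII-detection tooling:

```python
import re

# Naive, illustrative patterns only; real pipelines use dedicated PII detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(example: str) -> str:
    """Replace matched PII spans with placeholder tokens before fine-tuning."""
    for label, pattern in PII_PATTERNS.items():
        example = pattern.sub(f"[{label}]", example)
    return example

raw_dataset = ["Contact me at jane.doe@example.com or +1 (555) 123-4567."]
clean_dataset = [scrub(x) for x in raw_dataset]
print(clean_dataset[0])  # -> "Contact me at [EMAIL] or [PHONE]."
```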

3) Address input- and output-level risks

Mitigations at the input level are safeguards related to the information the LLM user provides and passes to the system (e.g., prompt filters and prompt engineering). Mitigations at the output level are controls to detect and filter the generated output for problematic or policy-violating content (blocklists and classifiers).
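A minimal sketch of an output-level control of the kind described above: a cheap blocklist pass combined with a pluggable policy classifier. The blocklist terms, the threshold, and the classifier stub are all placeholders for illustration:

```python
from typing import Callable

BLOCKLIST = {"credit card number", "home address"}  # illustrative terms only

def blocklist_hit(text: str) -> bool:
    """Cheap first pass: look for blocklisted phrases in the generated output."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def filter_output(text: str, classifier: Callable[[str], float],
                  threshold: float = 0.8) -> str:
    """Return the output, or a refusal if the blocklist or classifier flags it."""
    if blocklist_hit(text) or classifier(text) >= threshold:
        return "Sorry, I can't share that."
    return text

# Stub standing in for a real policy-violation classifier.
def toy_classifier(text: str) -> float:
    return 0.9 if "password" in text.lower() else 0.0

print(filter_output("The password is hunter2.", toy_classifier))      # refused
print(filter_output("Here is the weather forecast.", toy_classifier)) # passes
```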

4) Build transparency and reporting mechanisms in user interactions

Incorporate feedback and reporting mechanisms (e.g., thumbs up/down, AI assistants, forms, etc.) and enhance transparency and controls.

Link here


Updates on AI4Gov project

The AI4Gov project is an EU-funded project aimed at exploring the possibilities of AI and Big Data technologies for developing evidence-based innovations, policies, and policy recommendations in a responsible manner.

Within the project, we provided legal and ethical advice on the development of AI solutions. It is a really exciting project that will produce tangible results and develop trustworthy AI solutions.

For more information about the project, check the following link: https://lnkd.in/dQBWZ98i



Privacy Summits in UAE

At White Label Consultancy we have organized and supported the organization of two very important privacy events in the UAE

Privacy Summit Dubai 2024 (4th March) - DIFC Academy

The event is fully booked and registrations are now closed

I will participate both as a speaker and moderator

The program can be consulted here


Privacy Summit Abu Dhabi 2024 (7th March) - ADGM

The event is still open for registration but I'd suggest booking your place soon.

I will participate both as a speaker and moderator

The program can be consulted here


You can unsubscribe from this newsletter at any time. Follow this link to know how to do it.


ABOUT ME

I'm a data protection consultant currently working for White Label Consultancy. I previously worked for other data protection consulting companies.

I'm specialised in the legal and privacy challenges that AI poses to the rights of data subjects and how companies can comply with data protection regulations and use AI systems responsibly. This is also the topic of my PhD thesis.

I have an LL.M. (University of Manchester) and a PhD (Bocconi University, Milan).

I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation“ and "Privacy and AI". You can find the books here

