Privacy and AI #2

In this edition of Privacy and AI 

PRIVACY 

  • ICO Fines It's OK Limited £200,000 for Unlawful Marketing Calls
  • Legislation in Hesse and Hamburg regarding automated data analysis for the prevention of criminal acts is unconstitutional
  • BIPA Claims Accrue With Each Scan
  • The Italian SA orders Luka Inc. to suspend data processing activities involving the Replika chatbot
  • Telemarketing: direct marketers must immediately register the opt-out. EUR 4.9m fine
  • LIBE Committee Submits Draft Motion, Concludes that EU-US DPF Fails to Provide Equivalent Protection
  • AUSTRALIA - AG Department Issues Report on Privacy Act Review, Asks for Public Feedback
  • Texas Governor Abbott Announces Statewide Plan Banning Use Of TikTok
  • Portugal - CNPD Publishes Guidelines on Technical and Organisational Measures on Processing of Personal Data
  • FTC Enforcement Action to Bar GoodRx from Sharing Consumers’ Sensitive Health Info for Advertising
  • ISO 31700:2023 - Privacy by design for consumer goods and services


ARTIFICIAL INTELLIGENCE

  • NATO Announces Development of AI Certification Standard
  • Data poisoning and chatbots



PRIVACY

ICO Fines It's OK Limited £200,000 for Unlawful Marketing Calls

The ICO has fined It's OK Ltd £200,000 for violating the Privacy and Electronic Communications Regulations (PECR) by making marketing calls to individuals who had not told the company that they were willing to be contacted.

It's OK Ltd was found to have made 1,752,149 nuisance calls between 1 July 2019 and 1 June 2020 to people registered with the Telephone Preference Service (TPS), an average of over three calls every minute.
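The headline rate can be sanity-checked with a quick calculation over the period stated by the ICO:

```python
from datetime import date

calls = 1_752_149
# 1 July 2019 to 1 June 2020, converted to minutes
minutes = (date(2020, 6, 1) - date(2019, 7, 1)).days * 24 * 60
rate = calls / minutes
print(f"{rate:.1f} calls per minute")  # prints "3.6 calls per minute"
```

which indeed works out to "over three calls every minute".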

No company may make live marketing calls to anyone registered with the TPS, unless that person has told the specific organisation that they do not object to receiving calls from it.

Press release here



Legislation in Hesse and Hamburg regarding automated data analysis for the prevention of criminal acts is unconstitutional

The German Federal Constitutional Court declared that the provisions in these laws that authorise the police to process stored personal data through automated data analysis (Hesse) or automated data interpretation (Hamburg) are unconstitutional.

They allow previously unconnected automated databases and data sources to be linked in analysis platforms and permit systematic access to data across sources through searches.

They did not pass the proportionality test.

1) The principles of purpose limitation and change in purpose apply

- According to these provisions, personal data can be subjected to further processing in line with the original purpose as well as processing with a change in purpose. Both provisions allow for the processing of large amounts of data, essentially without differentiation as to the source of the data or the original purpose of its collection.

2) Automated data analysis and interpretation amounts to a separate interference.

- The further processing of data that has been collected and stored can result in new detrimental effects, which might be more onerous than the severity of interference of the original data collection.

2.1) Automated data analysis or interpretation is directed at generating new intelligence

- The authorities can generate far-reaching intelligence from available data through the use of practically all of the existing IT methods and also deduce new connections by way of data analysis.

2.2) The powers in question allow the automated processing of unlimited amounts of data by means of methods that are not circumscribed by law.

- They allow the police, with just one click, to create comprehensive profiles of persons, groups and circles. 

- They may also subject many persons who are legally innocent to further police measures, if their data was collected in some context and the automated evaluation of this data leads the police to wrongly identify them as suspects.

- They have virtually no restrictions on the type and amount of the data that can be used for data analysis or interpretation. 

- They do not set out what types of data and what data records may be used for automated analysis or interpretation. In particular, they do not differentiate between persons for whom there are reasonable grounds to assume that they could commit a criminal act, or who have a particular connection to such persons, and others for whom no such grounds exist.

Given the particularly broad wording of the powers, in terms of both the data and the methods concerned, the grounds for interference fall far short of the constitutionally required threshold of an identifiable danger, and the powers are thus unconstitutional.

Link here



BIPA Claims Accrue With Each Scan

According to the Illinois Supreme Court, a claim accrues under the Biometric Information Privacy Act (BIPA) with every scan or transmission of biometric identifiers or biometric information without prior informed consent.

A restaurant chain required its employees to scan their fingerprints to access their pay stubs and computers. A third-party vendor then verified each scan and authorized the employee’s access.

According to s15(b) BIPA, before collecting/using biometric identifiers or biometric information (BI), the entity must:

• Inform the person in writing

•• what data is collected/processed (e.g. fingerprint)

•• the specific purpose and retention period (e.g. for authentication only for XX days)

• Obtain express consent (‘written release’)

And before disclosing BIs to a third party the entity must obtain consent from the individual (s15(d) BIPA)

The Illinois SC ruled that a claim accrues under BIPA with every scan (s15(b) BIPA) or transmission (s15(d) BIPA) of biometric identifiers or biometric information without prior informed consent.

This means that every time an employee used the scans for authentication, a BIPA breach occurred. The same can be said about every transfer to the third-party vendor

BIPA grants the aggrieved party the right to recover $1,000 from the infringing entity for each negligent violation.

This interpretation can have massive implications for those using biometric systems in Illinois without asking for consent before processing biometric identifiers.
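To see why per-scan accrual matters, here is a minimal sketch of the exposure calculation (the scenario and function names are illustrative; $1,000 is the statutory liquidated-damages figure for a negligent violation):

```python
# Illustrative only: per-violation accrual under the Illinois Supreme
# Court's reading of BIPA. $1,000 is the liquidated damages figure
# for a negligent violation.
PER_NEGLIGENT_VIOLATION = 1_000

def accrued_damages(scans: int, transmissions: int) -> int:
    # Each unconsented scan (s15(b)) and each transmission to a
    # third party (s15(d)) counts as a separate violation.
    return (scans + transmissions) * PER_NEGLIGENT_VIOLATION

# A single employee scanning in once a day for a year, with each
# scan also sent to the third-party vendor for verification:
print(accrued_damages(scans=365, transmissions=365))  # prints 730000
```

One employee, one year, potentially $730,000 in statutory damages: this is why the accrual question was so heavily litigated.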

A relevant aspect of BIPA is that it applies regardless of the purpose for which a biometric identifier (e.g. fingerprint, voice) is collected/processed. In contrast, the GDPR increases the protection of biometric data only if the purpose is to uniquely identify a person (art. 9(1) GDPR).

Link here



The Italian SA orders Luka Inc. to suspend data processing activities involving the Replika chatbot

Replika is a chatbot companion powered by artificial intelligence. Users can create their own unique AI companion, choose a 3D avatar, help the bot develop its personality, talk about their feelings, and calm anxiety, among other things. Users can decide whether they want Replika to be their friend, romantic partner or mentor.

The authority pointed out several risks the app may pose to children, in particular the fact that replies can be wholly inappropriate for their age.

There was no age verification mechanism in place, despite the privacy notice claiming that Replika does not knowingly collect information from children under 13.

Replika's features (stress management, search for love, socialisation) increase the risks for vulnerable individuals (including children), as they largely operate by acting on an individual's mood.

The privacy notice does not disclose key elements required by Art. 13 GDPR. For instance, while the legal basis can be inferred to be contract, in the case of children contract is not a valid legal basis for the services concerned under Italian law.

For these reasons, the authority ordered the immediate suspension of the processing.

Link here



Telemarketing: direct marketers must immediately register the opt-out. EUR 4.9m fine

Principle

If a user says "no" to an unwanted marketing call, the call centre or the company that contacted them must immediately record the objection and remove their name from the lists used for telemarketing.

The opposition/opt-out expressed during the phone call need not be confirmed by email or other means, and it also applies to future promotional campaigns.
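The principle can be sketched as a simple suppression-list mechanism (class and method names are illustrative, not taken from the decision):

```python
# Minimal sketch of the Garante's principle: an opt-out expressed
# during the call takes effect immediately, needs no confirmation
# step, and covers all future campaigns.
class TelemarketingLists:
    def __init__(self, contacts: set[str]):
        self.contacts = set(contacts)
        self.opted_out: set[str] = set()

    def register_opt_out(self, number: str) -> None:
        # Recorded at once, during the call itself; no email
        # confirmation is required.
        self.opted_out.add(number)

    def campaign_targets(self) -> set[str]:
        # The suppression applies to every future campaign,
        # not only the one in progress.
        return self.contacts - self.opted_out

lists = TelemarketingLists({"+39 02 555 0100", "+39 06 555 0200"})
lists.register_opt_out("+39 02 555 0100")
print(lists.campaign_targets())  # prints {'+39 06 555 0200'}
```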

Third-party lists: verify, through appropriate random checks, that consent was free, specific, informed and documented, and check that the data is processed in full compliance with privacy legislation.

Findings

The Italian DPA found various wrongdoings committed by Edison Energia SpA towards a significant number of users, including:

  • the receipt of telephone calls without consent;
  • failure to respond to requests to stop receiving unwanted calls;
  • the impossibility of expressing free and specific consent for different purposes (promotional, profiling, communication of data to third parties) within the site or the app;
  • deficient or inaccurate information.

Decision

The Italian DPA ordered Edison to facilitate the exercise of data subject rights and to respond, without delay, to requests, including those relating to the right to object, and to pay a EUR 4.9m fine.

More here



LIBE Committee Submits Draft Motion, Concludes that EU-US DPF Fails to Provide Equivalent Protection

The draft motion states that the Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities is not clear or precise, that its interpretation of proportionality is too broad, that it does not prohibit bulk collection and does not address the CLOUD Act and the Patriot Act, that the Data Protection Review Court does not meet the requirements of Art. 47 of the Charter, and that remedies available for commercial surveillance are limited. Additionally, the USA lacks a federal data protection law.

For these reasons, the LIBE Committee: 

•  concludes that the EU-US Data Privacy Framework fails to create actual equivalence in the level of protection

• calls on the Commission to continue negotiations with its US counterparts with the aim of creating a mechanism that would ensure such equivalence and which would provide the adequate level of protection required by Union data protection law and the Charter as interpreted by the CJEU

• urges the Commission not to adopt the adequacy finding

More here



AUSTRALIA - AG Department Issues Report on Privacy Act Review, Asks for Public Feedback

The proposed reforms are aimed at strengthening the protection of personal information and the control individuals have over their information. 

These stronger privacy protections would support digital innovation and enhance Australia’s reputation as a trusted trading partner.

Entities should take appropriate responsibility for ensuring that their information handling practices are fair and not harmful. There should be greater protections for personal information before it is used in ways which have high privacy risks. Individuals need more transparency about what is being done with their information and more control over what happens with it.

The proposals would:

  • improve individual rights:
  •• better information available to individuals about how their information is processed
  •• provision of more rights (e.g. erasure)
  •• more control in case of direct marketing and sale of data
  • require entities to:
  •• take appropriate responsibility for handling personal information fairly and reasonably
  •• identify and mitigate risks before engaging in high privacy risk practices
  • strengthen privacy protections for children and vulnerable people
  • facilitate overseas transfers of personal information whilst ensuring that it is properly protected.

More info here and image here



Texas Governor Abbott Announces Statewide Plan Banning Use Of TikTok

Texas Governor Greg Abbott announced a statewide model security plan for Texas state agencies to address vulnerabilities presented by the use of TikTok and other software on personal and state-issued devices.

The state also developed a model plan to guide state agencies on managing personal and state-issued devices used to conduct state business.

Each state agency has until February 15, 2023 to implement its own policy enforcing this statewide plan.

The model plan outlines the following objectives for each agency:

- Ban and prevent the download or use of TikTok and prohibited technologies on any state-issued device identified in the statewide plan. This includes all state-issued cell phones, laptops, tablets, desktop computers, and other devices capable of internet connectivity. Each agency’s IT department must strictly enforce this ban.

- Prohibit employees or contractors from conducting state business on prohibited technology-enabled personal devices.

- Identify sensitive locations, meetings, or personnel within an agency that could be exposed to prohibited technology-enabled personal devices. Prohibited technology-enabled personal devices will be denied entry or use in these sensitive areas.

- Implement network-based restrictions to prevent the use of prohibited technologies on agency networks by any device.

- Work with information security professionals to continuously update the list of prohibited technologies.

Prohibited Software/Applications/Developers (e.g.)

• TikTok

• Kaspersky

• ByteDance Ltd.

• Tencent Holdings Ltd.

• Alipay

• CamScanner

• WeChat

Prohibited Hardware/Equipment/Manufacturers (e.g.)

• Huawei Technologies Company

• ZTE Corporation

More here



Portugal - CNPD Publishes Guidelines on Technical and Organisational Measures on Processing of Personal Data

The CNPD has issued guidelines for organisations on technical and organisational measures that can be adopted to minimise the consequences for individuals’ rights when personal data is processed.

The guidelines list a set of organisational and technical measures that organisations can consider in their risk mitigation plans, including the following:

Organisational measures (e.g.)

- Classification of information according to the level of confidentiality and sensitivity and adoption of appropriate organizational and technical measures for classification

- Adopt analysis procedures for monitoring traffic flows on the network;

- Carry out systematic IT security audits and vulnerability assessments (penetration tests); 

Technical measures (e.g.)

i. Authentication

- Use strong, unique credentials: long passwords (at least 12 characters), complex, combining numbers, symbols, and uppercase and lowercase letters, and changed frequently

ii. Infrastructure and systems

- Ensure that server and terminal operating systems are up to date, as well as all applications (e.g. browser and plugins)

iii. Email 

- Encrypt emails and/or attachments containing personal data with a key to which only the recipient has access

iv. Malware protection

- Use secure encryption especially in the case of access credentials, sensitive data, data of a highly personal nature or financial data

v. Use of equipment outdoors

- Store data on internal systems, protected with appropriate security measures, and remotely accessible through secure access mechanisms (VPN)

vi. Storage of paper documents containing personal data

- Destroy documents using specific equipment that guarantees “safe” destruction

vii. Transport of information that includes personal data

- Adopt measures to prevent personal data from being read, copied, altered or deleted in an unauthorised manner when it is shared
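As an illustration, the authentication recommendation above (point i) can be expressed as a simple policy check. This is a sketch; the CNPD guidelines do not prescribe any code, and the function name is my own:

```python
import string

def meets_password_guideline(pw: str) -> bool:
    """Rough check of the CNPD recommendation: at least 12 characters,
    mixing digits, symbols, and upper- and lower-case letters."""
    return (
        len(pw) >= 12
        and any(c.isdigit() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c in string.punctuation for c in pw)
    )

print(meets_password_guideline("Tr0ub4dor&3x!"))  # prints True
print(meets_password_guideline("password1234"))   # prints False (no upper case, no symbol)
```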

See here



FTC Enforcement Action to Bar GoodRx from Sharing Consumers’ Sensitive Health Info for Advertising

The FTC has taken enforcement action for the first time under its Health Breach Notification Rule against the telehealth and prescription drug discount provider GoodRx Holdings Inc., for failing to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to Facebook, Google, and other companies.

Findings

GoodRx committed several wrongdoings, including

  • Shared Personal Health Information (PHI) with Facebook, Google, Criteo and others
  • Used PHI to target its users with ads
  • Failed to limit third-party use of PHI
  • Misrepresented its HIPAA compliance
  • Failed to implement policies to protect PHI

Decision

The FTC ordered the company to implement several measures to protect PHI, including: a) Prohibition of the sharing of health data for ads; b) mandated consent for any other sharing; c) required the company to seek deletion of data; d) ordered the limitation of retention periods; e) implementation of mandated privacy program.

In addition, the company agreed to pay a USD 1.5m settlement

More information here


ISO 31700:2023 - Privacy by design for consumer goods and services

This standard establishes high-level requirements for privacy by design to protect privacy throughout the lifecycle of a consumer product, including domestic data processing by the consumer.

Privacy by design refers to several methodologies for product, process, system, software and service development. These methodologies take into account the privacy of a consumer throughout the design and development of a product, considering the entire product lifecycle - from before it is placed on the market, through purchase and use by consumers, to the expected time when all instances of that product finally stop being used. It means that a product has default consumer-oriented privacy controls and settings that provide appropriate levels of privacy, without placing undue burden on the consumer.

This standard can be a useful tool, in addition to guidelines provided by data protection authorities, to fulfil this important principle.

The ISO’s new standard on Privacy by Design includes two parts:

  • ISO 31700-1:2023: high-level requirements for Privacy by Design
  • ISO 31700-2:2023: use cases to help understand these requirements

Below is the table of contents of ISO 31700-1:2023.

[Images: ISO 31700-1:2023 table of contents]

The standards can be found in the ISO webstore



ARTIFICIAL INTELLIGENCE

NATO Announces Development of AI Certification Standard

NATO’s Data and Artificial Intelligence Review Board (DARB) started the development of a user-friendly and responsible Artificial Intelligence (AI) certification standard to help industries and institutions across the Alliance make sure that new AI and data projects are in line with international law, as well as NATO’s norms and values.

Press release here



Data poisoning and chatbots

A short and very informative video from IBM Technology explaining:

• how chatbots work

• how a security threat can operate (data poisoning)

AI chatbots (like ChatGPT) are trained on a knowledge base (corpus), which is then used to provide answers to users.

The knowledge base can be prone to attacks, one of which is data poisoning.

In data poisoning attacks, adversaries try to manipulate training data in an attempt

• to decrease the overall performance (i.e., accuracy) of an ML model,

• to induce misclassification to a specific test sample or a subset of the test sample, or

• to increase training time.

This could potentially happen with any AI system, including ChatGPT.
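A toy example (mine, not from the video) of the first attacker goal, degrading accuracy, against a one-dimensional nearest-centroid classifier:

```python
# Poisoning a toy classifier by injecting mislabelled training points.
def centroid(xs):
    return sum(xs) / len(xs)

def train(data):
    """data: list of (feature, label) pairs, labels 0 or 1.
    Returns a nearest-centroid classifier."""
    c0 = centroid([x for x, y in data if y == 0])
    c1 = centroid([x for x, y in data if y == 1])
    return lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

clean = [(1.0, 0), (1.2, 0), (0.8, 0), (5.0, 1), (5.2, 1), (4.8, 1)]
model = train(clean)
print(model(1.1), model(5.1))  # prints "0 1" (both correct)

# The adversary injects mislabelled outliers into the training set,
# dragging the class-0 centroid far away from the real class-0 data:
poisoned = clean + [(100.0, 0)] * 3
bad_model = train(poisoned)
print(bad_model(1.1))  # prints 1 (a clean class-0 sample is now misclassified)
```

Real attacks against LLM training corpora are far more subtle, but the mechanism is the same: corrupt what the model learns from, and its outputs degrade.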

At minute 6:45, he gives an example of a chatbot that was released onto the internet and, within a day of interacting with people, started spouting all kinds of offensive messages. He is, I assume, referring to “Tay”, a chatbot released by Microsoft on Twitter on March 23, 2016. After posting racist, misogynist, and negationist comments, it was quickly shut down.

See here



Other topics

Global Data Conference 2023 (Milano)

The Global Data Conference was an in-person event organised by Officine Dati, a think tank engaged in discussion and awareness-raising on issues related to data protection, cybersecurity, AI and the digital economy.

It organised the first edition of the Global Data Conference at the University of Milan. The event revolved around the work of four thematic tables, with the participation of over 80 privacy professionals and representatives from the institutional, academic and Big Tech fields.

The table discussions focused on:

1. EU digital strategy;

2. Role and powers of the national independent authorities;

3. Vulnerable subjects and effectiveness of protections;

4. Monetization of data.

It also benefited from contributions by other global privacy professionals and academics, including Ann Cavoukian, creator of the Privacy by Design concept.

You can see the full interview here

You will find more information about Officine Dati on its webpage or LinkedIn.



ABOUT ME

I'm a data protection consultant currently working for White Label Consultancy. I previously worked for TNP Consultants and Data Business Services. I have an LL.M. (University of Manchester), and I'm a PhD candidate (Bocconi University, Milan). My research deals with the potential of the General Data Protection Regulation, and the challenges it faces, in protecting data subjects against the adverse effects of Artificial Intelligence. I also serve as a teaching assistant in two courses at Bocconi University.

I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation“, an e-book released in 2021. You can find the book here
