The non-extractive version of AI.
edition sixty-three of the newsletter data uncollected


Welcome to data uncollected, a newsletter designed to enable nonprofits to listen, think, reflect, and talk about data we missed and are yet to collect. In this newsletter, we will talk about everything the raw data is capable of – from simple strategies of building equity into research+analytics processes to how we can make a better community through purpose-driven analysis.

********************************************************

After a few months of de-newsletter-ing and knowing too much about the dragons of the House of the Dragon, I am back.


But you and I not meeting in this space for a few months doesn't mean AI took a break. In fact, I got 50+ emails from different companies talking about how wildly successful they are at implementing AI. I was also invited to 7 different AI-related events, with many of them (if not all) sending out loud messages like "If you don't buy a subscription to the tool XYZ, consider yourself replaced already."


So, the way I see it, you and I still have work to do. The overwhelm, the fear, the pain, and yes, some cautious hope and optimism – all of that is still very real.


This is why I want to discuss "non-extractive AI" in this edition. Is that even possible?


There is the debate over human- vs. non-human-generated data for training AI, the issue of consent in data collection, transparency in model testing, and questions of context and voice in building autonomous systems – I mean, there are too many angles to examine if we want to understand a non-extractive version of AI. So, I am scoping this topic to what people in different roles around AI should do (as concrete actions) when thinking of a non-extractive version of AI.


Let's start by defining this term for our scope. Non-extractive AI refers to systems that do not exploit, cause harm, or deepen inequities through the inputs and outputs they deal with. Instead, these systems are designed and used in a way that respects individuals' privacy, autonomy, and rights.


Some example questions to ask about an AI system to determine whether it is non-extractive can include (a rough code sketch of this checklist follows the list):

  1. Does the AI system communicate clearly and transparently how data is collected, used, and stored?
  2. Does the AI system obtain explicit and informed consent from data providers and data systems?
  3. Does the AI system implement and communicate measures to protect personal information?
  4. How is the AI system designed to ensure fairness is included in every step of the system?
  5. Does the AI system collect only the data necessary for a specific purpose, avoiding excessive data collection? Is this clearly shared with everyone interested and working with the system?
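To make this concrete, here is a minimal sketch in Python of how a team might turn these five questions into a screening checklist for a tool or vendor. Everything here – the class name, the field names – is hypothetical and just one way to structure the exercise; the point is that every unanswered question becomes a documented gap. Defaulting every answer to "no" is deliberate: a system is extractive until shown otherwise.

    from dataclasses import dataclass

    @dataclass
    class NonExtractiveAudit:
        """Hypothetical screening checklist: one field per question above."""
        transparent_data_practices: bool = False  # Q1: collection, use, storage disclosed?
        informed_consent: bool = False            # Q2: explicit opt-in from data providers?
        privacy_protections: bool = False         # Q3: safeguards implemented and communicated?
        fairness_by_design: bool = False          # Q4: fairness built into every step?
        data_minimization: bool = False           # Q5: only necessary data, openly stated?

        def gaps(self) -> list[str]:
            """Return every unmet criterion; each one is a follow-up question for the vendor."""
            return [name for name, ok in vars(self).items() if not ok]

    # Example: a tool that documents its data practices but is vague on everything else.
    audit = NonExtractiveAudit(transparent_data_practices=True)
    print(audit.gaps())
    # -> ['informed_consent', 'privacy_protections', 'fairness_by_design', 'data_minimization']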


Let's take social media as an example. Social media platforms can adopt a version of non-extractive AI by ensuring transparency in data usage policies, providing users with control over their data, and compensating users for the value generated from their contributions. For example, a social media platform could offer users the option to participate in data-sharing programs in exchange for premium features or ad-free experiences.

Let's explore what we can do to participate in building this non-extractive AI system.


Scenario 1: If you are a user of an AI system

  1. Build comfort and confidence through education: Users must be educated about their rights and the implications of data sharing. Awareness campaigns and accessible information can help users understand how their data is used and the benefits and risks involved. Knowledgeable users are more likely to make informed decisions and demand ethical practices.

2. Learn and execute informed consent: Users should be given clear, understandable options for data sharing. Consent forms should not be buried in legal jargon but should be straightforward and transparent. Users should be able to exercise their choice to opt in or out of data sharing and know how their data will be used (see the consent-record sketch after this list).

3. Commit to active participation: Users can take a proactive role in data conversations to share their voice. This could include participating in advisory boards, engaging in feedback loops, and having a say in how their data is utilized. Active user participation can guide AI development towards more ethical practices.
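As a small illustration of what "learn and execute informed consent" can look like on the system side, here is a sketch of a consent record in Python. The names and fields are assumptions for illustration, but two design choices carry the idea: consent defaults to "no", and the purpose is stated in plain language the person consenting can actually read.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class ConsentRecord:
        """One explicit data-sharing choice made by a user (illustrative fields)."""
        user_id: str
        purpose: str            # plain-language reason the data is collected
        opted_in: bool          # recorded exactly as the user expressed it
        recorded_at: datetime
        revocable: bool = True  # the user can change their mind later

    def request_consent(user_id: str, purpose: str, user_said_yes: bool) -> ConsentRecord:
        """Record consent only as the user actually expressed it; never assume a yes."""
        return ConsentRecord(
            user_id=user_id,
            purpose=purpose,
            opted_in=user_said_yes,
            recorded_at=datetime.now(timezone.utc),
        )

    # The purpose is readable by the person consenting, not buried in legal jargon.
    record = request_consent("user-42", "Train a model that drafts thank-you letters",
                             user_said_yes=False)
    assert record.opted_in is False  # opting out is stored, not silently discarded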


Scenario 2: If you are a collector of data that can be used for AI systems

  1. Push for transparent transactions: "Data transactors"—those who collect, store, or sell data—must operate with transparency. This includes disclosing how data is collected, stored, used, and shared. Transparency builds trust and ensures that all parties are aware of the data lifecycle.

2. Choose data minimization: Data minimization means collecting only the data necessary for a specific purpose. This reduces the risk of data misuse and ensures that the data collected is relevant and essential (a small sketch of this practice follows the list).

3. Explore and implement fair compensation: When users' data is used for AI training and testing purposes, the users should be fairly compensated. This could be in the form of financial compensation, access to premium services, or other benefits. Fair compensation acknowledges the value of users' data and promotes a more equitable data economy.
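The data-minimization item above can be made very tangible: before collecting a field, write down why the stated purpose needs it, and drop anything without a justification. A minimal sketch, with an invented purpose and field names:

    # A hypothetical intake form: every field must justify itself against the purpose.
    PURPOSE = "Send event reminders to registered attendees"

    FIELDS_REQUESTED = {
        "email":        "needed to deliver the reminder",
        "event_id":     "needed to pick the right reminder",
        "full_name":    None,  # no justification for this purpose -> do not collect
        "home_address": None,  # no justification -> do not collect
    }

    def minimized_schema(fields):
        """Keep only the fields that carry a documented justification."""
        return [name for name, why in fields.items() if why is not None]

    print(minimized_schema(FIELDS_REQUESTED))  # -> ['email', 'event_id']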


Scenario 3: If you are a designer of AI systems

  1. Commit to privacy in design: AI system designers must integrate privacy into the core of their designs. Prioritizing privacy principles ensures that data protection is a foundational aspect of the system rather than an afterthought. This includes using encryption, anonymization, and other techniques to protect user data (see the pseudonymization sketch after this list).

2. Lean into ethical AI frameworks: Designers should follow ethical AI frameworks, prioritizing user rights and data ethics. These frameworks provide guidelines on handling data responsibly, ensuring that AI systems are developed with respect for human values.

3. Center the community in design: Designing AI systems with the community at the center ensures that the technology meets the needs and respects the rights of those it serves – the entire community, not just the majority. This means transparently including the community in the design process, conducting user testing with them, and continuously seeking their feedback to improve the system. Including the community in designing and testing AI systems creates a level of transparency that, in turn, helps build trust and accountability.

4. Implement responsible data analysis: Designers, data scientists, analysts… and others who work closely with the data in/from AI systems must commit to responsible data analysis practices. This includes being aware of biases, ensuring data accuracy, and avoiding manipulative techniques. Responsible analysis promotes integrity and trust in AI systems.
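To ground the "privacy in design" item above, here is one small, commonly used technique sketched in Python: replacing a direct identifier with a keyed hash (HMAC) before it ever reaches the analysis pipeline. This is one ingredient, not a complete privacy program; the key shown is, of course, a placeholder.

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-real-secret-kept-out-of-code"  # placeholder for illustration

    def pseudonymize(identifier: str) -> str:
        """Replace a direct identifier (like an email) with a stable keyed hash.

        Analysts can still link records belonging to the same person,
        but never see the raw identifier. Pair this with access controls,
        retention limits, and the consent practices above.
        """
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    # The pipeline stores the pseudonym, never the raw email address.
    print(pseudonymize("donor@example.org")[:16])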


Scenario 4: Generally, as humans of this sector working with data and AI systems…

  1. Push for data policies and values in your organization: Organizations should develop and implement clear policies outlining how data will be collected, used, and protected. These policies should be communicated to everyone affected by, impacted by, and responsible for this data – so that the policies and values can be regularly reviewed.

2. Commit to engaging with AI ethically and responsibly: Committing to ethical AI practices and technologies can drive the development of non-extractive AI. This includes voicing our interest in new methods for data protection, developing ethical guidelines, and continuously learning about the societal impact of AI.

3. Foster an inclusive culture: It is crucial to create an organizational culture that values inclusion. This is primarily because AI is a subject that involves learning, experimentation, and comfort with occasional failures. So, we need cultures where diverse voices can be heard during these experiments and learning moments.

4. Hold AI vendors accountable to ethical practices: Hold your AI vendors responsible for how, what, and why they are designing their AI systems. Ask them questions before you accept things as-is.


********************************************************

The challenge with envisioning a "non-extractive" version of AI is that you and I never got a chance to dream and make our own choices around AI. So, we don't necessarily know what that version is yet. If we are going to achieve a true version of "AI for good" – one that does not cause harm and division – we will need to learn to distance ourselves from the "here is how you operate with AI" manuals handed down by big tech companies.


We will need to patiently keep digging deeper to explore how to build a meaningful partnership with AI. Part of this will require us to sit uncomfortably and imagine what our world could look like with "good artificial intelligence," and the other part will require us to actively and continually work on barriers that our systems have created up until now in the name of progress.


We need a different vision for the non-extractive version of AI – one that is not inspired and motivated solely by higher and bigger dollar figures – but one that brings the ideas and possibilities of artificial intelligence within reach of our people and communities.


And I want to make this clear—this edition is not a vague dream. This is a clear call to action for you and me—because nothing is more important than protecting our collective humanity for the future, and because I believe in the magic that happens when good intentions and imagination meet.


This isn't easy – but then again, since when did you and I explore easy things in this newsletter?

********************************************************


*** So, what do I want from you today (my readers)?

  1. Have you participated in the Data and AI Equity study 2024? This 10-minute, anonymous survey aims to collect insights on how nonprofits engage with AI technologies, their understanding of data equity, and their preparedness to integrate AI ethically and effectively into their operations. If you work in the nonprofit sector, please take the survey.

2. Share with us: What is one thing you are taking away from this edition?

