Navigating the Legal Challenges of AI: Privacy & Ethical Concerns
Introduction:
Artificial Intelligence has become an integral part of our lives, revolutionizing the way we work, communicate, and live by making many tasks more convenient and efficient. However, this rapid integration of AI into daily life has given rise to a host of legal challenges relating to data privacy and ethical concerns.
Challenges of AI
Data Breach:
Modern generative Artificial Intelligence and Machine Learning systems rely heavily on large data sets to generate their results, and these data sets can contain personal and sensitive information, which may lead to privacy violations. The Digital Personal Data Protection Act, 2023 (DPDPA) contains provisions aimed at preventing data breaches in the general sense, but given the sophistication of AI systems, these provisions can be difficult to apply and enforce.
Surveillance:
AI technology is also being deployed in areas such as surveillance, facial recognition, and predictive policing. In India, where even a biometric identity tool such as Aadhaar was critiqued for its potential to intrude upon citizens' privacy, the addition of AI heightens the risk of facilitating mass surveillance without consent.
Algorithmic Bias:
The reliance of AI systems on data sets can also give rise to algorithmic bias. There is a very real possibility that the underlying data sets contain information and statistics shaped by distinctions of caste, race, religion, and so on, and that results generated by analysing and processing these data sets will be discriminatory. When AI systems are entrusted with tasks such as hiring or credit scoring, such outputs can perpetuate discriminatory practices, for example rejecting candidates or degrading credit scores on the basis of an individual's caste or religion. The absence of relevant and direct provisions to prevent and penalise algorithmic bias adds a further layer of concern.
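To make this mechanism concrete, the following is a minimal, hypothetical Python sketch: the data set, group labels, and numbers are invented purely for illustration and do not come from any real system. It shows how a simple classifier trained on historically biased hiring records reproduces that disparity for two otherwise identical candidates.

```python
# Hypothetical illustration only: a model trained on biased historical data
# learns and reproduces that bias. All data below is invented.
from sklearn.linear_model import LogisticRegression

# Toy historical hiring records: [years_of_experience, group] -> hired (1) or not (0).
# "group" stands in for a protected attribute (e.g. caste or religion); in this
# invented history, group 1 candidates were rarely hired regardless of experience.
X = [
    [5, 0], [6, 0], [4, 0], [7, 0], [5, 0],
    [5, 1], [6, 1], [4, 1], [7, 1], [5, 1],
]
y = [1, 1, 1, 1, 1,   # group 0: mostly hired
     0, 0, 0, 1, 0]   # group 1: mostly rejected

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates who differ only in group membership:
print(model.predict_proba([[6, 0]])[0][1])  # predicted "hire" probability, group 0
print(model.predict_proba([[6, 1]])[0][1])  # predicted "hire" probability, group 1
# The second probability comes out markedly lower, because the model has learned
# the historical pattern of discrimination rather than merit alone.
```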
Accountability & Transparency:
AI algorithms are often described as "black boxes" because they are not easy to comprehend; even their developers struggle to understand how these systems evolve once they begin acquiring more data from user inputs and the internet. This opacity makes it difficult to challenge an AI system's decisions and results, and the resulting lack of clarity and transparency leads to a lack of accountability. The absence of specific provisions addressing this further escalates the problem of determining responsibility for AI-driven decisions.
Ethical Use in Governance and Public Sector:
The Indian government is increasingly adopting AI in governance, from digital infrastructure projects to public welfare schemes. However, ethical use in governance requires a balance between leveraging AI for societal benefit and ensuring that individual rights are not infringed upon. For example, AI in welfare distribution could potentially eliminate inefficiencies but also raises concerns about exclusion and fairness if the technology fails or is misused.
Legal Challenges of AI:
Absence of an Overarching Regulatory Framework:
The AI regulatory framework in India is still in its nascent stages. While NITI Aayog has released various discussion papers on AI and the government is working on a National AI Strategy, there is, for now, no overarching legal framework regulating the use of AI.
Intellectual Property Rights of AI-Created Works:
Another emerging legal concern relates to the Intellectual Property Rights of AI-created works, given the fast-paced development of Generative AI systems capable of producing novel images, videos, music, and more. The question that arises is whether AI can be the author of the works it creates and, if not, who is: the owner of the AI software, the developer, or the user. The Indian IP framework is not yet equipped with definite provisions to answer these questions.
DPDP Act & AI development:
Although India has no direct regulatory provisions dealing specifically with AI, the intersection of AI and the Indian legal framework, especially in terms of privacy, occurs at the Digital Personal Data Protection Act, 2023. Developing AI systems requires extensive data sets, built by processing whatever data is relevant to the specific system, and that data can include personal information about individuals; this is where the DPDPA steps in as a regulatory provision. AI model developers and deployers will need to carefully consider the DPDPA's regulatory scope concerning the processing of personal data, the limited grounds for processing, the rights of individuals regarding their personal data, and the possible exemptions available for training and developing AI systems.
Section 6 of the DPDPA treats consent as an essential element in the processing of personal data, so AI developers will have to obtain the explicit consent of individuals before processing their personal information for data sets. Section 7 of the Act provides exemptions, including situations where data is provided voluntarily, where processing is needed in emergencies, or for certain government services.
If data is publicly available (for example on social media or in directories), AI developers can, as per Section 3(c)(ii) of the Act, use it without obtaining consent. However, they still need to ensure that the data scraping (collecting data from the web) follows legal guidelines and does not infringe on privacy rights.
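As a purely illustrative aside, the hypothetical Python sketch below (the domain, page URL, and user-agent name are placeholders invented here) shows one narrow technical courtesy that scrapers commonly observe, checking a site's robots.txt before fetching a page. This is not equivalent to, or a substitute for, the legal analysis that the DPDPA and privacy rights require.

```python
# Hypothetical illustration only: consulting a site's robots.txt before scraping.
# A technical courtesy, not legal compliance in itself.
import urllib.robotparser

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")  # placeholder domain
robots.read()

page = "https://example.com/public-directory"     # placeholder page
if robots.can_fetch("MyAIDatasetBot", page):      # invented user-agent name
    print("robots.txt permits fetching", page)
else:
    print("robots.txt disallows fetching", page)
```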
The DPDPA is not entirely stringent, as it also provides certain exemptions where necessary. One such exemption, under Section 17(2)(b) of the Act, covers research, but it applies only when prescribed standards are met, and no such standards have yet been set by the Data Protection Board, circling back to the concern about the lack of provisions.
Another major point at which the DPDPA and AI development intersect is Section 10 of the Act, which deals with Significant Data Fiduciaries (SDFs): large or sensitive data processors who bear additional obligations, such as appointing Data Protection Officers and conducting Data Protection Impact Assessments (DPIAs). This classification helps in monitoring high-risk data processing, a category that can very easily include AI companies given the volume and sensitivity of the data they handle. But, again, the government has yet to define clear thresholds for what makes a company an SDF.
Addressing the Concerns:
The foremost step the relevant Indian authorities need to take to address the legal challenges of AI is to develop and implement a robust regulatory framework tailored to AI. This should include AI-specific legislation that puts strict checks on what data AI systems extract from their users and on whether, and to what extent, such extraction requires explicit consent, in order both to protect the privacy of individuals and to promote AI development and innovation.
Laws should also be enacted to help determine liability when an accident or mishap involving AI occurs, making AI more accountable, and comprehensive IPR laws should clearly provide for the ownership of AI-developed creations.
Conclusion:
Despite the convenience and efficiency that AI brings to the table, we as users should be wary and mindful when using it, as it is constantly improving and evolving, sometimes beyond the comprehension of its own developers, which increases the risk of data oversharing and privacy violations. The relevant authorities should also enact definitive legislation covering the legal concerns related to AI; framing such legislation now, at this nascent stage of AI in India, can prevent much future harm and confusion while also creating an environment that promotes the development of AI.
As of now, India lacks a definitive AI regulation but has a number of specific guidelines and initiatives in place aimed at the careful development and use of AI.
National AI Strategy launched by NITI Aayog: it promotes an inclusive approach towards AI deployment. This involves identifying sectors of national priority, such as healthcare and education, and creating high-caliber data sets for research and innovation.
Principles for Responsible AI and Operationalizing Principles for Responsible AI by NITI Aayog.