ECNL's Vanja Skoric joins over 100 participants today at The Athens Roundtable on AI and the Rule of Law in Paris, organised by The Future Society, to discuss the #AI governance landscape, accountability and the role of #CivilSociety, and to propose concrete objectives and deliverables for France's 2025 AI Action Summit.
The design, development, procurement, deployment, use and decommissioning of AI systems without adequate safeguards, or in a manner inconsistent with international law, pose real risks that can undermine the protection, promotion and exercise of #HumanRights and fundamental freedoms, #democracy and the #RuleOfLaw. Take the most recent example: elections in Europe that were annulled by the state's Constitutional Court because illegal AI interference had undermined their democratic character. Moreover, civil society communities - especially marginalised and vulnerable groups - are among those most impacted by AI, yet are rarely heard in the shaping of policies and rules.
Two key points need to be addressed:
🔹 AI policies and regulations must embed and apply democratic, accountable, rights-based and participatory standards and approaches.
🔹 AI must be developed with meaningful input from diverse civil society actors.
We argue that any new AI governance mechanisms or processes must:
🔹 be open, inclusive and transparent in their design,
🔹 facilitate meaningful stakeholder engagement, in particular from the Global Majority, and
🔹 not duplicate or delegitimise existing arenas such as WSIS and the IGF.
We urge coordination and complementarity, seeking synergy with these established fora. We also call on multilateral bodies in particular to ensure effective, system-wide coherence and collaboration, including with key UN bodies such as the OHCHR, to provide strong guidance on the application of international human rights law to the development and use of AI.
❗ On a more practical note, we call for a globally mandated assessment of the impacts of AI systems prior to their development and use. This would allow potential risks to human rights, equity, fairness, democracy and safety to be identified, assessed, managed and addressed before negative consequences materialise, and would require refraining from deploying AI systems where the risks are incompatible with the protection of international human rights and core freedoms.
We argue this is a paramount component of AI safety, and urge the new AI safety institutes to include human rights risk and impact assessments in their testing and evaluations.