In a whirlwind reminiscent of Silicon Valley's finest drama, OpenAI finds itself in disarray. CEO Sam Altman was abruptly ousted, co-founder Greg Brockman resigned, and employees teetered on rebellion. While Microsoft, led by Satya Nadella, seizes this golden chance—potentially onboarding key talent and solidifying its AI dominance—ethical AI advocates perceive an alarming sidelining of moral considerations in favor of power and profit. Investors remain wary of potential lawsuits, and questions linger around intellectual property transfer and confidentiality agreements amidst this shuffle. Can Microsoft innovate responsibly, or will consolidation stifle ethical AI discourse? As OpenAI’s fate hangs in the balance, one must ask: Does the future of ethical AI lie in corporate hands—or is it time for a global regulatory framework to take the reins? #AIethics #OpenAI #Microsoft Learn more in this article:https://lnkd.in/d7QtGXAy
Cedra Trust
Tecnología, información e internet
Barcelona, Barcelona 111 seguidores
Exploring GEN-AI ethical pathways with you.
Sobre nosotros
Cedra Trust is a company fostering an ecosystem for researchers, institutions, communities, and governments to develop safe practices around GEN-AI integration and quantifing its real impact.
- Sitio web
-
https://www.cedra.ai/
Enlace externo para Cedra Trust
- Sector
- Tecnología, información e internet
- Tamaño de la empresa
- De 11 a 50 empleados
- Sede
- Barcelona, Barcelona
- Tipo
- Asociación
- Fundación
- 2022
- Especialidades
- Ethical Frameworks, Capacity building, Impact assessments y GEN-AI
Ubicaciones
-
Principal
C/Papin n33
08028
Barcelona, Barcelona 08028, ES
Empleados en Cedra Trust
Actualizaciones
-
In a bid to narrow the AI capability gap between academia and industry, Amazon Web Services is offering $110 million in grants and Trainium chip credits. This ambitious effort, led by AWS’s Gadi Hutt, touts accessible compute power for researchers while critics, such as academic Os Keyes, question whether this philanthropic facade masks corporate influence over research priorities. The program’s opaque selection criteria and the historical entanglement between private funding and bias in AI scholarship stir valid concerns about the balance of power and academic integrity. Interestingly, while this initiative could equip under-resourced academic labs, it also centralizes dependence on private infrastructure. This raises a sharp ethical question: can academia ever truly lead AI’s ethical frontier when it relies on the technical tools—and approval—of corporate tech giants? #AIethics #CorporatePower #AcademicIntegrity Learn more in this article:https://lnkd.in/d34ZNe8t
-
Google’s newly unveiled PaliGemma 2, an AI model claiming to 'identify' emotions from images, has sparked heated debate in the tech world. Leveraging foundational work like Paul Ekman’s emotion theory, PaliGemma 2 stretches beyond object recognition into the murky waters of emotional interpretation. However, experts like Sandra Wachter from the Oxford Internet Institute liken this techno-futurism to consulting a Magic 8 Ball, emphasizing its questionable assumptions and scientific rigor. The model’s reliance on datasets like FairFace raises alarm over representational inequities and ingrained biases, a problem particularly troubling if deployed in law enforcement, HR, or surveillance. As regulators like the EU scrutinize emotion-recognition tech under measures such as the AI Act, ethical quandaries persist: Do these systems map human complexity or flatten it into reductive categorizations? And if biases seep into the infrastructure of such models, who decides the moral pathways of their deployment? Amid its potential utility, the technology risks perpetuating cultural stereotypes and undermining privacy—essential considerations in PaliGemma 2’s open availability on platforms like Hugging Face. Should humanity ever trust AI to decode the nuance of human emotions, knowing we can’t even agree on what they mean ourselves? Or is emotion recognition just the latest headline-grabbing hammer in need of a nail? #AIethics #EmotionRecognition #AlgorithmicBias Learn more in this article:https://lnkd.in/d2mdEbtE
-
An enterprising hacker and artist named Amadon recently demonstrated a loophole in ChatGPT, coaxing the AI into providing instructions for building a bomb by employing a "social engineering hack". Normally constrained by robust ethical safeguards against harmful content, ChatGPT was tricked into abandoning them via a hypothetical, science-fiction scenario. This incident highlights the vulnerabilities of AI systems designed to process vast datasets and their potential misuse by leveraging the fine line between creativity and danger. The ethical concerns are staggering: from accountability for outcomes of AI misuse to the moral responsibility of developers. OpenAI acknowledged the limitations of quick fixes for such breaches, emphasizing the need for sustained AI safety research over patching individual "bugs." Preventive strategies—like fostering transparency, interdisciplinary collaboration, and the implementation of continuous monitoring systems—could reshape developers’ approach to addressing such exploits and adapt policies to mitigate misuse proactively. Is AI innovation inherently amplifying humanity’s dilemmas, or can it ultimately strengthen the moral accountability woven into our technologies?
-
Cedra Trust ha compartido esto
Here's an AI hype case study. The paper "The Rapid Adoption of Generative AI" has been making the rounds based on the claim that 40% of US adults are using generative AI. But that includes even someone who asked ChatGPT to write a limerick or something once in the last month. Buried in the paper is the fact that only 0.5% – 3.5% of work hours involved generative AI assistance, translating to 0.125 – 0.875 percentage point increase in labor productivity. Compared to what AI boosters were predicting after ChatGPT was released, this is a glacial pace of adoption. The paper leaves these important measurements out of the abstract, instead emphasizing much less informative once-a-week / once-a-month numbers. It also has a misleading comparison to the pace of PC adoption (20% of people using the PC 3 years after introduction). If someone spent thousands of dollars on a PC, of course they weren't just using it once a month. If we assume that people spent at least an hour a day using their PCs, generative AI adoption is roughly an order of magnitude slower than PC adoption. https://lnkd.in/ePd6eqFx
-
Cedra Trust ha compartido esto
Another warning for European countries to not use non-native LLM‘s and build LLM‘s that reflect you local cultural norms and principles. This study investigates the ideological diversity among popular large language models (LLMs) by analyzing their moral assessments about a large set of controversial political figures from recent world history. The results show that the ideological stance of an LLM often reflects the worldview of its creators. LLMs prompted in Chinese tend to be more favorable towards figures aligned with Chinese values and policies, while Western LLMs align more with liberal democratic values. Within the group of Western LLMs, there is also an ideological spectrum, with Google's Gemini being particularly supportive of liberal values. Maarten Buyl Alexander Rogiers Sander Noels @iris dominguez-catena Edith Heiter @raphael romero Iman Johary Alexandru Cristian Mara Jefrey Lijffijt Tijl De Bie Ghent University
-
One of our early experiments in fostering critical thinking online has just earned us the 4D Award!
Last week, Domestic Data Streamers and Cedra Trust won the 4D Award for Digital Rights and Democracy for our latest experimental tool, "Skeptic Reader", a Chrome plugin that helps people break through the noise of misinformation with critical thinking (and a dash of toddler-level AI). 🧠🤖 (Follow link in the comments to try it) This project is part of our studio’s ongoing commitment to not just highlight societal challenges but also create practical tools to help navigate them. Massive thanks to the incredible team that made it happen: Self Else, Matilde Sartori, Maria Moreso, Maria Costa Graell Soon we will release the V2, Firefox extension, and the Youtube version of it! Also thanks to Fundació .cat and Xnet for this
-
Cedra Trust ha compartido esto
Mark your calendars, Demà Futur (27th of September) is around the corner! 👀 A few days after the #SummitOfTheFuture at the United Nations, Future Days and Generalitat de Catalunya welcome you to a journey that will inspire and invite you to take action. Morning sessions (in Catalan and English translation) will be inaugurated by a talk from Alfons Cornella on the desirable horizons for Catalonia in 2040 🔮 Relevant voices from the local ecosystem will discuss the key European and global challenges: 🍃Green transition: María Dolores González, Jelena Radjenovic and Emilio Palomares 🩺 Life and health sciences: Manel del Castillo and Núria Gavaldà 📡 Technology for people: Lluis Torner, Montserrat Vendrell, Josep M. (Pep) Martorell In the afternoon, Future Days will host a multilingual participatory lab, in collaboration with IED Barcelona, where a desirable future for Catalonia will be co-created. But we don’t just discuss the future — we shape it. The School of International Futures (SOIF) is supporting the production of a public report outlining the key milestones for the future of Catalonia, with the main insights from the event. 🎟 There are a few spots left - secure your tickets for free (link in the comments) 🎟 Moltes gràcies a totes i tots els que ho heu fet possible! Beth Espinalt, Jordi Vergés, Laia Sancho, dream team 💫 Check the full program below (in Catalan)👇
-
Cedra Trust ha compartido esto
𝐏𝐮𝐛𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧: 𝐂𝐨𝐦𝐩𝐚𝐫𝐚𝐭𝐢𝐯𝐞 𝐫𝐞𝐯𝐢𝐞𝐰 𝐨𝐟 10 𝐅𝐮𝐧𝐝𝐚𝐦𝐞𝐧𝐭𝐚𝐥 𝐑𝐢𝐠𝐡𝐭 𝐈𝐦𝐩𝐚𝐜𝐭 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭𝐬 𝐟𝐨𝐫 𝐀𝐈-𝐬𝐲𝐬𝐭𝐞𝐦𝐬 Algorithm Audit has conducted a comparative review of 10 existing FRIAs frameworks, evaluating them against 12 requirements across legal, organizational, technical and social dimensions. Our assessment shows a sharp divide regarding the length and completeness of FRIAs, for instance: 🩺 Many FRIAs have not incorporated legal instruments that address the core of normative decision-making, such as the objective justification test, which is particularly important when users are segmented by an AI-system. 🔢 None of the FRIAs connect accuracy metrics to assessing the conceptual soundness of an AI-systems’ statistical methodology, such as (hyper)parameter sensitivity testing for ML and DL methods, or statistical hypothesis testing for risk assessment methods. 🫴🏽 Besides, the technocratic approach taken by most FRIAs does not empower citizens to meaningfully participate in shaping the technologies that govern them. Stakeholder groups should be more involved in the normative decision that underpin data modelling. Are you a frequent user, or a developer of a FRIA, please reach out to info@algorithmaudit.eu to share insights. Full white paper: https://lnkd.in/dmm-N4RW
-
Anyone interested?
🤖🧑💻 We're on the hunt for an AI junior developer, aka code-savvy human, who can turn data projects into reality. Consider applying if: 1. You've spent +1-2 years convincing computers to do your bidding (you speak fluent Python and even better if you can sweet-talk AI/ML libraries like TensorFlow ) 2. You're a full-stack juggler, equally comfortable with JavaScript, HTML, and CSS. 3. The idea of merging AI with creative projects makes your circuits tingle, and you don't mind collaborating with humans from various disciplines 4. You're willing to work from our Data House in Barcelona, Sants 5. You're ready to join our small band by mid-October 2024 Warning: Working here may result in an unhealthy obsession with data experiences and a tendency to see patterns in your breakfast cereal. Apply at your own risk. P.S. We're a team of curious misfits trying to make sense of a data-driven world. No future-saving promises here, but we might accidentally make something relevant along the way. Interested contact us at: self@domesticstreamers.com with CV and Linkedin