Cedra Trust

Cedra Trust

Technology, Information and Internet

Barcelona, Barcelona · 112 followers

Exploring GEN-AI ethical pathways with you.

About us

Cedra Trust is a company fostering an ecosystem for researchers, institutions, communities, and governments to develop safe practices around GEN-AI integration and to quantify its real impact.

Website
https://www.cedra.ai/
Industry
Technology, Information and Internet
Company size
11–50 employees
Headquarters
Barcelona, Barcelona
Type
Association
Founded
2022
Specialties
Ethical Frameworks, Capacity Building, Impact Assessments, and GEN-AI

Locations

  • Primary

    C/Papin n33

    Barcelona, Barcelona 08028, ES


Employees at Cedra Trust

Updates

  • Cedra Trust shared this

    View Arvind Narayanan's profile

    Professor at Princeton University

    Here's an AI hype case study. The paper "The Rapid Adoption of Generative AI" has been making the rounds based on the claim that 40% of US adults are using generative AI. But that includes even someone who asked ChatGPT to write a limerick or something once in the last month. Buried in the paper is the fact that only 0.5% – 3.5% of work hours involved generative AI assistance, translating to 0.125 – 0.875 percentage point increase in labor productivity. Compared to what AI boosters were predicting after ChatGPT was released, this is a glacial pace of adoption. The paper leaves these important measurements out of the abstract, instead emphasizing much less informative once-a-week / once-a-month numbers. It also has a misleading comparison to the pace of PC adoption (20% of people using the PC 3 years after introduction). If someone spent thousands of dollars on a PC, of course they weren't just using it once a month. If we assume that people spent at least an hour a day using their PCs, generative AI adoption is roughly an order of magnitude slower than PC adoption. https://lnkd.in/ePd6eqFx

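The adoption arithmetic in the post above can be reproduced directly. Note that the 25% time-saving factor below is back-solved from the post's own numbers (0.5% of hours maps to 0.125 pp), not a figure stated in the post; it is only here to show that the two quoted ranges are mutually consistent.

```python
# Share of work hours with gen-AI assistance, times the implied
# time-saving factor, gives the percentage-point productivity gain.
saving_factor = 0.25  # back-solved assumption: 0.125 pp / 0.5% of hours
for share in (0.005, 0.035):  # 0.5% and 3.5% of work hours
    gain_pp = share * saving_factor * 100
    print(f"{share:.1%} of hours -> {gain_pp:.3f} pp")
```

Running this prints the 0.125 and 0.875 percentage-point endpoints quoted in the post.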
  • Cedra Trust shared this

    View Bart De Witte's profile

    Founder in Stealth | Founder HIPPO AI Foundation | Keynote Speaker, Lecturer | LinkedIn Top Healthcare Voice | Digital Health | Medical AI | Open Source

    Another warning for European countries not to use non-native LLMs, and to build LLMs that reflect their local cultural norms and principles. This study investigates the ideological diversity among popular large language models (LLMs) by analyzing their moral assessments of a large set of controversial political figures from recent world history. The results show that the ideological stance of an LLM often reflects the worldview of its creators. LLMs prompted in Chinese tend to be more favorable towards figures aligned with Chinese values and policies, while Western LLMs align more with liberal democratic values. Within the group of Western LLMs, there is also an ideological spectrum, with Google's Gemini being particularly supportive of liberal values. Maarten Buyl Alexander Rogiers Sander Noels @iris dominguez-catena Edith Heiter @raphael romero Iman Johary Alexandru Cristian Mara Jefrey Lijffijt Tijl De Bie Ghent University

  • One of our early experiments in fostering critical thinking online has just earned us the 4D Award!

    View Pau Aleikum Garcia's profile

    Founder at Domestic Data Streamers | Design, code, arts and community research

    Last week, Domestic Data Streamers and Cedra Trust won the 4D Award for Digital Rights and Democracy for our latest experimental tool, "Skeptic Reader", a Chrome plugin that helps people break through the noise of misinformation with critical thinking (and a dash of toddler-level AI). 🧠🤖 (Follow the link in the comments to try it.) This project is part of our studio's ongoing commitment to not just highlight societal challenges but also create practical tools to help navigate them. Massive thanks to the incredible team that made it happen: Self Else, Matilde Sartori, Maria Moreso, Maria Costa Graell. Soon we will release the V2, a Firefox extension, and the YouTube version of it! Also thanks to Fundació .cat and Xnet for this.

  • Cedra Trust shared this

    View Future Days' company page

    6841 followers

    Mark your calendars, Demà Futur (27th of September) is around the corner! 👀 A few days after the #SummitOfTheFuture at the United Nations, Future Days and Generalitat de Catalunya welcome you to a journey that will inspire and invite you to take action. Morning sessions (in Catalan, with English translation) will be inaugurated by a talk from Alfons Cornella on the desirable horizons for Catalonia in 2040 🔮 Relevant voices from the local ecosystem will discuss the key European and global challenges:
    🍃 Green transition: María Dolores González, Jelena Radjenovic and Emilio Palomares
    🩺 Life and health sciences: Manel del Castillo and Núria Gavaldà
    📡 Technology for people: Lluis Torner, Montserrat Vendrell, Josep M. (Pep) Martorell
    In the afternoon, Future Days will host a multilingual participatory lab, in collaboration with IED Barcelona, where a desirable future for Catalonia will be co-created. But we don't just discuss the future — we shape it. The School of International Futures (SOIF) is supporting the production of a public report outlining the key milestones for the future of Catalonia, with the main insights from the event. 🎟 There are a few spots left - secure your tickets for free (link in the comments) 🎟 Many thanks to everyone who made this possible! Beth Espinalt, Jordi Vergés, Laia Sancho, dream team 💫 Check the full program below (in Catalan) 👇

  • Cedra Trust shared this

    Publication: Comparative review of 10 Fundamental Rights Impact Assessments for AI systems
    Algorithm Audit has conducted a comparative review of 10 existing FRIA frameworks, evaluating them against 12 requirements across legal, organizational, technical and social dimensions. Our assessment shows a sharp divide in the length and completeness of FRIAs. For instance:
    🩺 Many FRIAs have not incorporated legal instruments that address the core of normative decision-making, such as the objective justification test, which is particularly important when users are segmented by an AI system.
    🔢 None of the FRIAs connect accuracy metrics to assessing the conceptual soundness of an AI system's statistical methodology, such as (hyper)parameter sensitivity testing for ML and DL methods, or statistical hypothesis testing for risk assessment methods.
    🫴🏽 Moreover, the technocratic approach taken by most FRIAs does not empower citizens to meaningfully participate in shaping the technologies that govern them. Stakeholder groups should be more involved in the normative decisions that underpin data modelling.
    If you are a frequent user or a developer of a FRIA, please reach out to info@algorithmaudit.eu to share insights. Full white paper: https://lnkd.in/dmm-N4RW

  • Anyone interested?

    View Pau Aleikum Garcia's profile

    Founder at Domestic Data Streamers | Design, code, arts and community research

    🤖🧑💻 We're on the hunt for an AI junior developer, aka a code-savvy human who can turn data projects into reality. Consider applying if:
    1. You've spent 1-2+ years convincing computers to do your bidding (you speak fluent Python, and even better if you can sweet-talk AI/ML libraries like TensorFlow).
    2. You're a full-stack juggler, equally comfortable with JavaScript, HTML, and CSS.
    3. The idea of merging AI with creative projects makes your circuits tingle, and you don't mind collaborating with humans from various disciplines.
    4. You're willing to work from our Data House in Barcelona, Sants.
    5. You're ready to join our small band by mid-October 2024.
    Warning: Working here may result in an unhealthy obsession with data experiences and a tendency to see patterns in your breakfast cereal. Apply at your own risk. P.S. We're a team of curious misfits trying to make sense of a data-driven world. No future-saving promises here, but we might accidentally make something relevant along the way. Interested? Contact us at self@domesticstreamers.com with your CV and LinkedIn.

  • With OpenAI's release of Strawberry (o1), we're seeing a shift from large pre-training models to scaling inference-time compute — a paradigm shift that could reshape how we approach reasoning in AI. As Jim Fan highlights, the focus is now on smarter inference, optimizing tools like browsers and code verifiers, rather than just increasing model size. While this is a major technical leap, it raises important ethical questions:
    - How do we ensure fairness and transparency in the decision-making processes during inference?
    - Will this new approach increase or reduce biases within AI systems?
    - What are the implications for privacy and data security as models continuously learn from inference feedback?
    #AI #GenerativeAI #ResponsibleAI #Ethics #Compute #Inference #CedraAI

    View Jim Fan's profile
    Jim Fan · Influencer

    NVIDIA Senior Research Manager & Lead of Embodied AI (GEAR Group). Stanford Ph.D. Building Humanoid robot and gaming foundation models. OpenAI's first intern. Sharing insights on the bleeding edge of AI.

    OpenAI Strawberry (o1) is out! We are finally seeing the paradigm of inference-time scaling popularized and deployed in production. As Sutton said in the Bitter Lesson, there are only 2 techniques that scale indefinitely with compute: learning & search. It's time to shift focus to the latter.
    1. You don't need a huge model to perform reasoning. Lots of parameters are dedicated to memorizing facts, in order to perform well on benchmarks like trivia QA. It is possible to factor out reasoning from knowledge, i.e. a small "reasoning core" that knows how to call tools like a browser and code verifier. Pre-training compute may be decreased.
    2. A huge amount of compute is shifted to serving inference instead of pre/post-training. LLMs are text-based simulators. By rolling out many possible strategies and scenarios in the simulator, the model will eventually converge to good solutions. The process is a well-studied problem, like AlphaGo's Monte Carlo tree search (MCTS).
    3. OpenAI must have figured out the inference scaling law a long time ago, which academia is just recently discovering. Two papers came out on arXiv a week apart last month:
    - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling. Brown et al. find that DeepSeek-Coder increases from 15.9% with one sample to 56% with 250 samples on SWE-Bench, beating Sonnet-3.5.
    - Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters. Snell et al. find that PaLM 2-S beats a 14x larger model on MATH with test-time search.
    4. Productionizing o1 is much harder than nailing the academic benchmarks. For reasoning problems in the wild, how do we decide when to stop searching? What's the reward function? Success criterion? When to call tools like a code interpreter in the loop? How to factor in the compute cost of those CPU processes? Their research post didn't share much.
    5. Strawberry easily becomes a data flywheel. If the answer is correct, the entire search trace becomes a mini dataset of training examples, which contain both positive and negative rewards. This in turn improves the reasoning core for future versions of GPT, similar to how AlphaGo's value network — used to evaluate the quality of each board position — improves as MCTS generates more and more refined training data.

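The repeated-sampling result cited in point 3 above (one sample vs. 250 samples plus a verifier) can be sketched as a best-of-n loop. This is a hypothetical illustration of the general idea, not OpenAI's method or a real API; `best_of_n`, the canned candidate list, and the verifier are all made up for the sketch.

```python
def best_of_n(model_sample, verifier, n):
    """Draw up to n independent samples; return the first one the
    verifier (e.g. a code checker or unit test) accepts, else None."""
    for _ in range(n):
        candidate = model_sample()
        if verifier(candidate):
            return candidate
    return None

# Toy illustration with a deterministic stand-in "model":
candidates = iter([13, 42, 7, 99])            # hypothetical model outputs
answer = best_of_n(lambda: next(candidates),  # "sampling" = next canned output
                   lambda x: x == 7,          # verifier accepts only 7
                   n=250)
print(answer)  # -> 7
```

The point of the cited papers is that with a cheap, reliable verifier, success probability climbs steeply in n, so compute spent on many samples can beat compute spent on a bigger model.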
  • Cedra Trust shared this

    View Wojtek B.'s profile

    AI FinTech startup founder | AI use cases and regulation expert | PhD candidate in AI at the University of Cambridge.

    My friend's job consists of talking to prospects and clients, all day, every day. It's either in-person meetings or calls. She takes brief notes during the conversation, then copies them into a gen AI system, asking it to rewrite them as full, professionally worded sentences. Gen AI does its job well on this task. My rough estimate is that gen AI saves my friend some 30 minutes of work every day. Given that a standard working day in the UK is 7 hours, that's a 7.14% improvement in efficiency; not bad at all. However – even though sales is a very common job category – a call-note summarizer doesn't exactly sound like an economic game-changer on a planetary scale, which is what McKinsey has been promising us for the past 1.5 years with zeal that is second to none. From the Jun-2023 McKinsey "The economic potential of generative AI" report: "Generative AI's impact on productivity could add trillions of dollars in value to the global economy. Our latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed". Within the jobs and professions I have worked in to date (compliance advisory, product management, management consulting), the scope for gen AI's value-add is… limited to none. But what about legal? Coding? Creative? I combined my own professional experiences with some excellent, topical research and practitioners' observations to arrive at some realistic predictions and conclusions - check 'em out. #AI #genAI #McKinsey

    Generative AI productivity gains in coding and other professions

    Wojtek B. on LinkedIn
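The efficiency figure in the post above is straightforward to verify, taking its own inputs of 30 minutes saved against a 7-hour UK working day:

```python
minutes_saved = 30
workday_minutes = 7 * 60  # standard UK working day, per the post
gain = minutes_saved / workday_minutes
print(f"{gain:.2%}")  # -> 7.14%
```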

  • Here is a quick summary of the main UNESCO publications on AI in education:
    1. Guidance for generative AI in education and research (Arabic, English, French, Greek, Malay, Portuguese, Russian, Spanish, Turkish) https://lnkd.in/eqdJh2yR
    2. AI and education: guidance for policy-makers (Arabic, Chinese, English, French, Korean, Spanish, Russian) https://lnkd.in/gVjWctD
    3. K-12 AI curricula: a mapping of government-endorsed AI curricula (Arabic, Chinese, English, French, Portuguese, Spanish, Russian) https://lnkd.in/gEDPiRr8
    4. Beijing Consensus on AI and Education (Arabic, Chinese, English, French, Spanish, Russian) https://lnkd.in/g8JGMyF
    5. AI in the UAE's computing, creative design and innovation K-12 curriculum: a case study (Arabic, English) https://lnkd.in/eiBiCfAc
    6. AI and inclusion: compendium of promising initiatives 2020 https://lnkd.in/e2F4cP5
    7. AI in Education: compendium of promising initiatives 2021 https://lnkd.in/ebG3pFy
    8. International Conference on AI and Education (2019) Report https://lnkd.in/gnU8VXT
    9. International Forum on AI and Education (2020) Report https://lnkd.in/dAJxpUH
    10. International Forum on AI and Education (2021) Synthesis Report https://lnkd.in/eTW2Mna8
    11. International Forum on AI and Education (2022) Analytical Report https://lnkd.in/eCeC6jRx
    #AI #education #aied #aieducation #generativeai #policy #aistrategy #aiethics #aiskills #aiskill #aicompetency #ailiteracy #teachers #teacher #students

  • Cedra Trust shared this

    View Peter Slattery, PhD's profile
    Peter Slattery, PhD · Influencer

    Lead at the MIT AI Risk Repository | MIT FutureTech

    This is a research briefing on artificial intelligence (AI) published by the UK House of Commons Library on August 20, 2024. It covers:
    - AI and key terms
    - UK government policy and regulation
    - the use of AI in different sectors
    - AI safety and ethics
    It also provides an extensive set of links to other content. It is well worth reading. Follow MIT FutureTech if you are interested in technological trends and their social implications. #ArtificialIntelligence #Technology #Economics

Similar pages