leiwand.ai


Business consulting

Vienna, Vienna · 481 followers

We make AI high-quality and fair

About

leiwand.ai's goal is to support organizations in developing trustworthy artificial intelligence: AI that delivers what it promises, and does so fairly. We believe digital technologies must be redesigned to improve their quality, make their impact better targeted, and win the trust of citizens and customers. Trustworthy AI is not simply a label that can be attached to a product after the fact: it is a design decision and must be pursued throughout the entire lifecycle of an AI system. leiwand.ai is here to support that entire process. To achieve this, we take a novel, holistic approach to AI development: from the creation of an AI system to its decommissioning, we factor societal, human, and planetary needs into the equation. leiwand.ai develops strategies to maximize positive impact and minimize risk across an AI system's whole lifecycle. We bring not only AI expertise but also open-innovation methods, drawing on knowledge from disciplines such as the social sciences and law throughout the AI development process. Our interdisciplinary methods help organizations develop and deploy reliable AI solutions that benefit people, society, and the planet, and that contribute to social justice and sustainability.

Website
https://www.leiwand.ai
Industry
Business consulting
Size
2–10 employees
Headquarters
Vienna, Vienna
Type
Privately held company (AG, GmbH, UG, etc.)
Founded
2022
Specialties
Artificial Intelligence, Open Innovation, Trustworthy AI, data4good, AI Standards, Algorithmic Fairness, AI Transparency, and Künstliche Intelligenz

Locations

Employees of leiwand.ai

Updates


❗It's official❗ We will create our very own in-house technology for pre-assessing bias risks in artificial intelligence systems – the first of its kind 🚀

We are thrilled to announce the next step in our endeavour to make #AI systems more trustworthy for all. Funded by the FFG Österreichische Forschungsförderungsgesellschaft mbH and in collaboration with the Technische Universität Wien, we are working on "The Algorithmic Bias Risk Radar" (ABRRA), pursuing a highly innovative objective in the rapidly growing AI trust, risk and security management (AI TRiSM) market. 🛡

With ABRRA, we are developing technology for pre-assessing discrimination and bias risks in #ArtificialIntelligence applications, based on a carefully curated expert database that will be filled with thousands of AI incidents. 🖥 Various #machinelearning and statistical techniques will then be used to gain insights from these case studies. The project involves close collaboration between social scientists, data scientists, and machine learning experts to address #bias and #discrimination across many fields.

🤖 We develop AI to test AI 🤖

What will be the value of ABRRA? The Algorithmic Bias Risk Radar aims to identify adverse effects of AI systems early in their development, procurement, and certification. The technology will facilitate targeted risk assessments and fundamental rights impact assessments, as required by the new EU AI Act for high-risk applications in fields such as #HR, #health, #finance, #education, and #publicadministration. 👫 👩🏫 🏛 💉

Identifying risks of harmful bias is an essential first step towards building safer, more trustworthy #AIsystems that can benefit all. 🌍 And we're incredibly proud to contribute to this effort. 💪

We would like to thank our project partners Sabine T. Köszegi, Satyam Subhash, Alexandra Ciarnau and DORDA Rechtsanwälte GmbH, Saniye Gülser Corat, Thomas Doms, Matthias Spielkamp and AlgorithmWatch, Adam Leon Smith FBCS, Michael Hödlmoser, Michael Heinisch, and Günter Griesmayr for their support! #trustworthyAI #fairAI
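The post does not describe ABRRA's internals, but the core idea — retrieving documented AI incidents that resemble a planned system in order to flag bias risks early — can be sketched in a few lines. This is a toy illustration under our own assumptions (a four-entry invented incident list and plain bag-of-words cosine similarity), not leiwand.ai's actual method:

```python
from collections import Counter
from math import sqrt

# Hypothetical mini-database of AI incident summaries (invented examples;
# the real ABRRA database is described as holding thousands of curated cases).
INCIDENTS = [
    "recruiting tool downranked resumes of women applicants",
    "face recognition system misidentified non-white individuals",
    "health risk score underestimated needs of black patients",
    "credit model denied loans to applicants from poor districts",
]

def bow(text):
    """Bag-of-words vector: a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def similar_incidents(description, top_k=2):
    """Rank past incidents by textual similarity to a planned AI system."""
    query = bow(description)
    ranked = sorted(INCIDENTS, key=lambda inc: cosine(query, bow(inc)), reverse=True)
    return ranked[:top_k]

print(similar_incidents("AI system to screen resumes of job applicants"))
```

A real system would use richer representations (embeddings, curated metadata) and expert review, but the retrieval-over-incident-cases pattern is the same.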


We are utilizing our expertise to push for #greenAI and help #SMEs 🌿 🏢

leiwand.ai's primary focus has always been on ensuring that #AI is trustworthy and fair: evaluating systems for #bias, promoting transparency, and addressing potential discrimination in line with the #EU_AI_Act. With a new project, we are expanding our definition of #responsibleAI to incorporate sustainability, recognizing that true responsibility must also address environmental impacts. 🌍

Enter the Analyser project: The Corporate Sustainability Reporting Directive (#CSRD) is rolling out across the #EU, pushing companies to disclose environmental, social, and governance (#ESG) data. 🌐 While this is a big step toward sustainable business models, many SMEs will struggle to navigate its complexity, especially when paired with the detailed criteria of the EU taxonomy. 🤔

A dedicated consortium launched the research project AI Enabled Sustainability Jurisdiction Demonstrator (Analyser) to tackle these challenges. The consortium is led by Fraunhofer Austria and includes leiwand.ai, Universität Innsbruck, Technische Universität Wien, PwC Österreich, ecoplus. Niederösterreichs Wirtschaftsagentur GmbH, Murexin GmbH, and Lithoz.

➡️ The goal is to make the sustainability reporting process simpler and more accessible, supporting SMEs as they navigate these complex requirements.
➡️ We will develop AI modules that simplify the taxonomy compliance process through automation, using a combination of language models and a knowledge graph to align the taxonomy requirements with company information.

🔰 Within this project, leiwand.ai's role is to ensure that human-AI interaction, ethical evaluation, and sustainability are integral to the tool's development. As it evolves, we are committed to ensuring these principles are applied throughout, so that the tool is not only reliable and ethical but also designed with sustainability at its core.

The project started in autumn 2024, will run for three years, and is funded by the FFG Österreichische Forschungsförderungsgesellschaft mbH with resources from the Bundesministerium für Klimaschutz, Umwelt, Energie, Mobilität, Innovation & Technologie. This isn't just about simplifying processes; it's about building an ethical, sustainable future where AI supports positive change. 🤖 ✅

Gertraud Leimueller, Rania Wazir, Mira Reisinger, Ruben Hetfleisch, Matthias Cantini, Josef Baumüller, Stefan Merl, Maximilian Nowak, Sebastian Lumetzberger, Rainer Pascher, Patrick Mitmasser, Adam Jatowt, Martin Schwentenwein #fairAI #trustworthyAI
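The post says the Analyser modules will align taxonomy requirements with company information via language models and a knowledge graph, without giving details. As a heavily simplified stand-in for that alignment step, here is a keyword-overlap matcher; the criterion names and keyword sets are invented for illustration and are not real EU taxonomy wording:

```python
# Toy stand-in for matching free-text company disclosures to structured
# sustainability criteria. The real project uses language models and a
# knowledge graph; everything below is an invented, minimal analogue.

CRITERIA = {
    "climate mitigation": {"emissions", "co2", "energy", "renewable"},
    "water protection": {"water", "wastewater", "discharge"},
    "circular economy": {"recycling", "waste", "reuse", "materials"},
}

def match_criteria(disclosure: str):
    """Score each criterion by keyword overlap with a disclosure sentence,
    returning the criteria with at least one hit, best match first."""
    tokens = set(disclosure.lower().replace(",", " ").split())
    scores = {name: len(tokens & kws) for name, kws in CRITERIA.items()}
    return [name for name, s in sorted(scores.items(), key=lambda x: -x[1]) if s > 0]

print(match_criteria("We reduced CO2 emissions by switching to renewable energy"))
```

The design point the project targets is exactly where this sketch falls short: keyword overlap misses paraphrase and context, which is why language models and a knowledge graph are needed for reliable taxonomy mapping.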


🤖 Why AI is Sexist and What We Can Do About It 🫷

⚡️ Artificial intelligence is not human. Consequently, you'd think it simply works as a "neutral entity" that performs its tasks faster and more efficiently than any human. ⚖️ The ideal of AI as an unbiased judge is attractive, and the vision of AI assistants helping humans act more fairly and justly is indeed compelling. 📊 But while AI excels at specialized tasks, like image and language processing, it lacks the human-level intelligence to think creatively or interpret meaning. 💾 This also means AI systems simply reproduce what is in the data they've been fed. Current systems often replicate our own biases, from gender stereotyping in recruitment tools to racial disparities in healthcare outcomes.

🔎 Our article dives into the ways #AI unintentionally deepens existing inequalities, the importance of addressing #AlgorithmicBias, and the concrete steps needed to turn AI into a force for fairness. leiwand.ai founders Rania Wazir and Gertraud Leimueller explore the urgent role of education, policy, and ethics in shaping the next generation of #AISystems that truly serve everyone equitably.

📖 Read on to discover what must be done to close the AI gender gap and make this tech-driven future fairer for all: https://lnkd.in/e6a68cEx (The article is in German) #AI #DiversityInTech #AlgorithmicBias #EthicsInAI #trustworthyAI #FairAI

Warum KI sexistisch ist und was wir dagegen tun können – Diskurs-Fachmagazin

https://www.jugend-diskurs.at
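The post above describes AI reproducing biases from its training data, e.g. gender stereotyping in recruitment tools. One standard way such bias is made measurable (a general fairness metric, not a method the article attributes to leiwand.ai) is the demographic parity difference, sketched here with invented data:

```python
# Minimal sketch of the demographic parity difference: the gap in
# positive-outcome rates between two groups (0.0 means parity).
# The decision lists below are invented for illustration.

def selection_rate(decisions):
    """Fraction of positive (1) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap between the two groups' selection rates."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical hiring-tool decisions (1 = invited to interview).
men   = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 6/8 = 0.75
women = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 3/8 = 0.375

print(f"selection-rate gap: {demographic_parity_diff(men, women):.3f}")  # 0.375
```

Metrics like this only detect disparity; deciding whether a gap reflects unfair treatment, and what to do about it, remains a human, interdisciplinary judgment, which is the article's point.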


AI has the power to transform healthcare, but it's crucial to develop it in a way that is fair, ethical, and safe for everyone 👨⚕️

The healthcare system urgently needs #innovations to cope with the major demographic gap: an ageing society and ever-increasing demand for services, while fewer and fewer staff, nursing professionals, and doctors work in clinics and care homes.

👥 At the 8th Regulatory Affairs Conference for Medical Devices & In-vitro Diagnostics in Vienna, our CEO Gertraud Leimueller participated in a panel discussion on the supporting role of artificial intelligence in the healthcare sector, moderated by Cornelia Ertl. Gertraud was joined by Albert Frömel, Doriana Cobarzan, Ralf Gansel, and Kerstin Waxnegger.

Key takeaways:

🏥 AI's role in healthcare: AI is crucial in addressing the demographic gap by supporting healthcare staff and patients. It assists with documentation, such as recording conversations and tracking illnesses, and helps with diagnostics, like using image recognition for mammograms. This reduces the workload for doctors and nurses, allowing them to focus more on patient care. But...

✅ Algorithmic fairness: One of the biggest requirements and challenges is ensuring fairness in AI systems used in the field, particularly in medical systems, where biases have been shown to negatively impact women, non-white individuals, and underprivileged groups. In practice, this means medical AI systems produce errors for these groups and, for example, fail to diagnose diseases correctly.

📖 Dealing with the source material: Bias in healthcare AI systems stems from the bias already deeply embedded in the analogue, human healthcare system. Much of the data used in studies, for example, traditionally comes from white males, leaving large data gaps for everyone outside this group. This carries over to AI systems trained on such data, unless countermeasures are taken.

🤖 Trustworthy AI development: To ensure AI works fairly across populations, it must be developed to high standards of quality and trustworthiness from the get-go. To achieve this, quality standards, e.g. the seven dimensions of trustworthiness, must be incorporated into AI development and pursued in an interdisciplinary way from the outset.

🔹 EU AI Act and medical devices: Starting in mid-2027, the #EUAIAct will regulate AI in medical devices, ensuring that systems like health apps, blood pressure monitors, and diagnostic tools meet the necessary quality and safety standards.

The conference was a blast, with around 400 attendees from start-ups and other healthcare companies, testing centres, and funding agencies. We would like to thank LISAvienna, in cooperation with en.co.tec - Medizinprodukte-Consulting & Akademie, for hosting this important event! Photo credit: © LISAvienna/Fotostudio Mank


📢 "What is the tone we want to set in AI development?" 🤖

When talking about the development of #artificialintelligence in Europe, we at leiwand.ai are often confronted with the general perception that Europe is lagging behind the U.S. and other countries. Supposedly, misguided investments and a lack of legal latitude to navigate the dense regulatory environment in the #EU are "thwarting" AI #innovation. ⛔

But does setting standards for quality and #trustworthyAI set us back, or does it set the stage for truly positive and impactful AI systems? To discuss this apparent dichotomy, leiwand.ai social scientist Janine Vallaster went to the Expedition 3.0 AI event hosted by the Vienna Airport Conference & Innovation Center.

The objective of the event:
➡️ Make AI applications visible and accessible
➡️ Debunk myths and promote the exchange of experiences
➡️ Share best-practice examples & strengthen networks

In a discussion panel featuring Janine, Jeannette Gorzala, Dr. Valerie Höllinger MBL, MBA, and Barbara Streimelweger, moderated by Martin Giesswein, Janine posed the question: "If we are looking to the U.S. as a pioneer and role model, we should first ask ourselves what it is we actually want to achieve: to be higher, faster, better, but in what, actually? And for whom?" 🤔

The discussion concluded that we are not behind at all when we lean on quality over speed, on collaboration, and on value orientation. By following this approach, we can develop AI that is fair for all. 🦾 But we can't make #AI fair on our own. AI can help us combat issues like #bias in AI systems, which is what we aim for with our Algorithmic Bias Risk Radar, which will simplify risk assessment. https://lnkd.in/eHykTmEX

The next big milestone comes this February, when the #AIAct takes effect. Austria, as part of the EU landscape, plays a global pioneering role in shaping fair and ethical AI standards. Jeannette Gorzala, who worked on the AI Act, and Peter Biegelbauer called on companies to wait no longer and to familiarize themselves with the content of the AI Act now. 📖

Collaboration, value orientation, and fair AI are crucial as we lead the way globally. 🌍 How do you think AI should be shaped in the EU? Are regulations helpful, or do they stop progress? Let us know! 🗣️ Photo credit: Akos Burg


Artificial Intelligence: a balancing act between opportunities and ethical and regulatory challenges

leiwand.ai CEO Gertraud Leimueller will be at the 8th Regulatory Affairs Conference for Medical Devices & In-vitro Diagnostics in Vienna! Look forward to a day full of exciting lectures, discussions, and practical insights on market approval, #MDR, #IVDR, and other important regulations. The highlight of this year's conference: artificial intelligence (#AI). Gertraud will take part in a panel discussion that will shed light on the #EUAIAct and provide valuable insights into #cybersecurity, #reimbursement, and more! 💡

LISAvienna, in cooperation with en.co.tec - Medizinprodukte-Consulting & Akademie, cordially invites all interested parties from the DACH region to attend.

📅 Date: 17.10.24
📍 Location: Schloss Schönbrunn, Vienna
📓 Programme: https://lnkd.in/dYa_BCet
🗣️ Language: German

Join us and stay informed! 🎯 LISA - Life Science Austria, Austria Wirtschaftsservice (aws) #REG24 #Innovation #trustworthyAI #fairAI


🚀 How Women Shape the Future 🚀

Innovation is changing our world, and women play a big part in it! That is why bringing women innovators together is crucial to accelerating innovation. 🤝 leiwand.ai CEO Gertraud Leimueller was invited to join the #SHEInnovates2024 meets #MINTchanger event, connecting with other trailblazing women who came together to share insights and stories and to inspire those who seek to become driving forces for new technologies and solutions. Among many diverse topics, AI became part of a panel discussion.

Some key takeaways:

🙍♀️ Women are directly and indirectly affected by algorithmic bias. Directly: AI health applications often work worse for women, and AI recruitment tools can produce discriminatory job offers. Indirectly: generative AI devalues women professionally and reproduces old stereotypes.

🤖 This is precisely why it is important that women engage with AI and get involved in its development and application.

🦾 Women can benefit greatly from AI applications because they can increase labor productivity: "less work, more output."

🙅♀️ Nevertheless, there is a certain reluctance: women use AI less than men do. This also has to do with a guilty conscience, with some reporting that they feel they are taking an unjustified shortcut when using AI applications.

👩💼 The AI sector is open to female career changers and offers excellent job opportunities, something women should dare to take advantage of.

What do you think about the future of women in AI? Let us know!

The event was hosted by A1 Telekom Austria AG in collaboration with SHEconomy - Die Wirtschaftsplattform für Frauen. 365 Tage im Jahr. and Wirtschaftsagentur Wien. We would like to thank everyone involved in making this absolutely stacked event possible!

Karin Mairhofer, Martina Köberl-Huber, Kristina Maria Brandstetter, Christoph Moser, Christine Wahlmüller-Schiller, Doris Lippert, Ulrike Farnik, Katja Fröhlich, Elisa Liekkilä, Daniela Fritz, Jennifer Isabella Schimanko, Sandra Golser, Barbara Seelos, Ana Simic, Simone Scholz, Florentina Zach, Isabella Gruber, MA, Setareh Zafari, Martina Maurer #innovation #trustworthyAI #fairAI #AIbias #artificialintelligence #womeninnovators Photo credit: Philipp Lipiarski

  • leiwand.ai reposted this


En Route to Making AI Sustainable - Together

The team at leiwand.ai had a blast at the DSC DACH conference in Vienna, which is dedicated to fostering collaboration between data & AI professionals. 👩💻 👨💻 You can find the full report here: https://lnkd.in/etw-ekvP

Our CTO Rania Wazir held a keynote on transparency as a catalyst for trustworthy and sustainable AI, asking the hard-hitting questions that AI developers, deployers, and society as a whole need to answer about the use of AI. She explored what transparency in AI truly entails, whether it's about documentation, access, or something more, and how it connects to factors like fairness, privacy, and cybersecurity. Rania also questioned whether transparency should be a fundamental right or a privilege, stressing that genuine transparency requires courage but leads to better, more ethical AI decisions.

Dr. Julia Zukrigl shared how to win stakeholders over to sustainable data projects by creating human-centred, emotional visions that show clear benefits and defuse our fight-or-flight responses.

Magdalena Hutze explained how VERBUND uses AI for energy management: from automating contracts to monitoring fish in hydropower plants, AI systems are not just a fancy add-on but an essential part of the energy sector.

Petra Weschenfelder warned of biases introduced when AI is trained on digitised archives, advocating for ethical data curation with input from affected communities to eliminate sexist and racist outputs by AI systems.

Igor Nikolaienko explained the importance of evaluation and observability for generative AI applications, presenting his vision of new automated evaluation methods for generative AI using synthetic data and large language models.

Dietmar Boeckmann, Tarry Singh, and Mark Stefan discussed AI's potential to advance the efficiency of the energy sector and enable its transformation towards net-zero, while lamenting AI's slow adoption in the sector and noting a lack of innovation in Europe.

Jacqueline Berger explained how algorithmic bias can lead to unfair and discriminatory outcomes, with the admonition that biases deeply rooted in society cannot be addressed with technology alone (we agree!).

Elina Stanek moderated an exciting panel on the environmental impacts of AI, with input from Dr Inez Harker-Schuch and Alice Schmidt that ranged from the immense potential of AI in education to the energy costs of digitalization, and from power imbalances in the tech economy to the lack of investment in essential application areas such as education and care.

Valerie H. discussed AI systems in the context of the EU AI Act, opening up fascinating topics such as the very difficult question of how human oversight can be effectively implemented in high-risk AI systems.

Thank you to DSC DACH 24 for the excellent organization and hospitality, and in particular to Vuk Ignjatovic and Aleksandar Linc-Djordjevic.


Rapid advancements in AI technology can be a risk to fairness 📛

In an era where #adaptability and #innovation are paramount, generative AI presents unique opportunities to enhance efficiency, develop new business models, and tackle complex problems. 📈 However, we need to stay constantly aware that AI systems are riddled with #bias, which already leads to discrimination and unequal opportunity through AI systems in active use, in both the public and private sphere. 🛑 That's why it's important to connect #developers and #users, fostering learning and collaboration around the thorough bias testing that #AIsystems require.

Austria's GenAI community kicks off: On September 4th, Austria's first future-oriented #GenAI community held its kick-off, created to foster cross-industry dialogue and to collaboratively learn, reduce mistakes, and harness the rapid development of GenAI for value-driven outcomes. 🤝

leiwand.ai CEO Gertraud Leimueller was invited to speak on trustworthy AI, drawing attention to the disadvantages certain groups can face when AI is developed and deployed. Gertraud was joined by Linus Kohl from Fraunhofer Austria, Christian Hamböck from viesure innovation center GmbH, and Daniel Valtiner of Infineon Technologies Austria, who all contributed their significant knowledge and shared their experiences of working with generative AI.

Thanks to Fraunhofer Austria's Ruben Hetfleisch and Clemens Wasner from AI Austria for hosting this great event and advancing collaboration on AI in the country! 👏 #GenAI #Innovation #Sustainability #trustworthyAI #fairAI

Fraunhofer Austria:

👀 ⬅ Event review: GenAI revolution in Vienna! On September 4th, the kick-off of Austria's first future-oriented GenAI community took place! Fraunhofer Austria expert Ruben Hetfleisch guided the audience through an evening full of exciting talks. His Fraunhofer Austria colleague Linus Kohl surprised the expert audience with a keynote on integrating causal AI into language models. Alongside further highlights from Christian Hamböck of viesure innovation center GmbH, Daniel Valtiner of Infineon Technologies, and Gertraud Leimueller of leiwand.ai, the evening was rounded off with shared food & drinks. 🍽 🍷 Missed it? No problem! ➡ The second edition follows on November 5th, with even more forward-looking discussions! #GenAI #K #Innovation #Nachhaltigkeit #FraunhoferAustria Photo credit: Fraunhofer Austria AI Austria
