Orcawise

Business Consulting and Services

Dublin, Europe 3,046 followers

We provide the services, tools & training to ensure your AI systems are ethical and secure.

About us

At the Orcawise Center of Excellence for Responsible AI, you’ll find all the services, tools, and training you need to ensure your AI is ethical, transparent, secure, and sustainable. Our mission is to be your trusted partner in achieving Responsible AI (RAI). Founded by CEO Kevin Neary, Orcawise leads the way in integrating ethical AI practices into modern business workflows.

By partnering with Orcawise, you join a collaborative AI community that provides top-notch RAI services. We work alongside clients, universities, research institutes, public authorities, big tech companies, and volunteers to build a vibrant environment for RAI. Our goal is to use the collective wisdom of our RAI community to make Orcawise your go-to Center of Excellence for RAI. You’ll receive the knowledge and resources needed to ensure your AI technologies are developed and used ethically, transparently, and in line with global regulations.

We help you adopt responsible AI practices with:
1) cutting-edge tools,
2) services with expert consultants and teams,
3) customized bootcamp training programs.

Our approach is based on shared expertise and goals, extending our Responsible AI Center of Excellence so that millions of users around the world can enjoy AI-powered services that comply with global regulations and guidelines. With hubs in New York, Brussels, and Dublin, you have access to unmatched expertise in building an AI future that is innovative, responsible, and accountable. Join us to redefine enterprise AI, where responsibility and technology meet to create a better future for all.

Industry
Business Consulting and Services
Company size
51-200 employees
Headquarters
Dublin, Europe
Type
Privately Held
Founded
2016
Specialties
Advisory, Responsible AI, Custom LLM Development, AI Ethics, Transparency, AI Team Augmentation, AI for Marketing, AI Team Outsourcing, Data Scientists, Data Analysts, Data Engineers, ML Engineers, AI Training, RAI Training, EU AI Act, and US Bill of Rights

Updates

  • Orcawise reposted this

    I am delighted to share that I will be speaking on Responsible AI at the CIF Digital Construction Summit on October 22nd at Croke Park in Dublin. The Summit is a must-attend event for the entire construction sector: it will bring together Ireland’s leading CEOs, thought leaders, and other key champions of digital adoption to discuss how the industry must embrace smart construction and digital technology in order to transform itself.

    I hope you can join me at the summit. Book your tickets here: https://lnkd.in/evshHMBh
    View the agenda: https://lnkd.in/eF6vH8Af

    #Digicon24 #AI #ResponsibleAI #Orcawise
    Twitter: CIF_Summits | LinkedIn: Construction Industry Federation Summits


    🎉 Orcawise Finalist in AI Awards 2024 – Responsible AI and Ethics 🎉

    We are thrilled to announce that Orcawise has been selected as a Finalist in the AI Awards Ireland 2024, in the category of Best Use of Responsible AI and Ethics! 🌍🤖 This nomination recognizes our commitment to Responsible AI: developing tools and frameworks that ensure AI systems are ethical, transparent, and compliant with global regulations.

    One of our standout projects is our EU AI Act Custom Model, an AI-powered solution designed to help organizations navigate the complexities of the EU AI Act. The model simplifies legal compliance by integrating the principles of responsibility, explainability, privacy, and transparency, ensuring companies meet the highest standards of AI ethics.

    At Orcawise, we're committed to the power of Responsible AI to shape the future of business and society. Being recognized for this work inspires us to continue our mission of helping businesses deploy AI systems that are not only innovative but also aligned with ethical practices and global regulations.

    Thank you to the AI Awards Ireland committee for this incredible recognition, and congratulations to all the finalists! 🚀

    #AIAwardsIrl #ResponsibleAI #EthicalAI #AIAwards #EUAIAct #Orcawise #AICompliance #AIEthics


    🚨 AI Transparency in the Spotlight: OpenAI's Watermarking Dilemma 🚨

    Is there a need for watermarking AI-generated content, and will it work? OpenAI has developed a text watermarking tool designed to help comply with the upcoming EU AI Act, which mandates that AI-generated content must be clearly marked by August 2026. Despite having the tool ready, OpenAI has hesitated to release it, fearing a potential backlash from users and concerns over its effectiveness against tampering.

    This raises critical questions about balancing transparency with user experience. How can AI companies ensure compliance without compromising the trust and satisfaction of their users? As we move toward stronger AI regulations, the conversation around responsible AI and transparency has never been more important.

    Here are some references that also address watermarking:
    1) OpenAI System Card for GPT-4 – outlines OpenAI’s approach to safety and transparency, including its efforts around watermarking and other ethical concerns. 📍REF: System Card for ChatGPT
    2) EU AI Act – the upcoming European Union regulations that will require AI-generated content to be marked clearly by 2026. 📍REF: EU AI Act Overview
    3) Academic studies on AI watermarking – several studies have explored watermarking AI content to ensure traceability and transparency. 📍REF: "Survey on watermarking methods in the artificial intelligence domain and beyond" – Preetam Amrit, Amit Kumar Singh
    4) User behavior studies – research on how users interact with AI, and on the impact of transparency on user trust, underlines the importance of balancing transparency with user experience. 📍REF: "AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap" – Vera Liao, Jennifer Wortman Vaughan

    What do you think? Is there a need for watermarking AI-generated content, and will it achieve the desired results?

    #AI #OpenAI #Transparency #ResponsibleAI #EUAIAct
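
OpenAI has not published how its tool works, but one well-studied family of text watermarks biases generation toward a pseudo-random "green list" of tokens and later tests for that bias statistically. Below is a toy, hypothetical sketch of the detection side only; the function names (`is_green`, `green_fraction`, `looks_watermarked`) and the threshold are illustrative, not any vendor's API:

```python
import hashlib

def is_green(prev_token, token):
    # Pseudo-randomly split candidate tokens into "green"/"red" halves,
    # seeded by the previous token, so the same split can be recomputed
    # at detection time without access to the model.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens):
    # Fraction of consecutive token pairs that land on the green list.
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)

def looks_watermarked(tokens, threshold=0.75):
    # Ordinary text hovers near 0.5 green; a sampler that favors green
    # tokens pushes the fraction well above that. Tampering (paraphrasing,
    # token swaps) drags it back toward 0.5 -- the robustness concern
    # OpenAI cites.
    return green_fraction(tokens) >= threshold
```

The detector needs only the tokenized text and the hashing rule, which is what makes such schemes attractive for third-party compliance checks, and what makes them fragile against rewording.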


    AI PRIVACY AUDITING: Privacy auditing in AI models assesses whether a model preserves user privacy by protecting personal data from unauthorized access or disclosure. It aims to minimize privacy loss and measure the extent of data protection within the model.

    🌐 Recent Developments
    Google has introduced an innovative method that significantly improves the efficiency of privacy auditing. This new technique marks substantial progress over older methods, which required multiple iterative processes and extensive computational resources.

    🌐 Key Features of the New Method
    - Simultaneous data integration: unlike traditional methods that input data points sequentially, this approach inserts multiple independent data points into the training dataset at once.
    - Efficient privacy assessment: the method assesses which data points from the training dataset are utilized by the model, helping to understand how data is processed and retained.
    - Validation and efficiency: it simulates the privacy auditing process by comparing a single run against several individual training sessions, each with a single data point. This proves less resource-intensive and maintains the model’s performance, making it a practical choice for regular audits.

    🌐 Benefits
    - Reduced computational demand: by streamlining the data input process and minimizing the number of necessary simulations, the method cuts down on computational overhead.
    - Minimal performance impact: it leaves the model's performance unaffected, offering a balance between operational efficiency and privacy protection.

    This new privacy auditing technique enables more effective and less disruptive checks on privacy preservation in AI models.

    Source: AI Index 2024.

    #ResponsibleAI #Orcawise #CIO #CTO #Legal #Compliance #RAI
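
The one-run, multiple-canary idea can be illustrated with a toy sketch. This is not Google's actual algorithm: the set-based "model" and names like `audit_privacy` are hypothetical stand-ins, chosen so the leakage signal is obvious:

```python
import random

def train_toy_model(dataset):
    # Toy stand-in for training: the "model" simply memorizes every example
    # it saw -- the extreme case of the leakage an audit looks for.
    return set(dataset)

def audit_privacy(train_data, num_canaries=8, seed=0):
    rng = random.Random(seed)
    # Craft several independent canaries, then insert only a random half of
    # them into the training data -- all in a single training run, instead of
    # one run per canary as in older auditing schemes.
    canaries = [f"canary-{rng.getrandbits(32):08x}" for _ in range(num_canaries)]
    inserted = set(rng.sample(canaries, num_canaries // 2))
    model = train_toy_model(list(train_data) + sorted(inserted))
    # The auditor guesses "inserted" for each canary the model retained.
    # Guessing accuracy near 1.0 signals heavy leakage; near 0.5 (chance)
    # suggests individual data points are well hidden by training.
    correct = sum((c in model) == (c in inserted) for c in canaries)
    return correct / num_canaries

leakage = audit_privacy(["patient-record-1", "patient-record-2"])
print(leakage)  # 1.0 here, because the toy model memorizes everything
```

In a real audit the membership guess would come from loss or confidence scores rather than set membership, but the structure (many canaries, one training run, compare guesses against the known insertion split) is the efficiency gain described above.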


    Key Benchmarks for Responsible AI:
    🌐 TruthfulQA: evaluates the truthfulness of responses from AI models.
    🌐 RealToxicityPrompts: measures the degree of toxic output from language models.
    🌐 BOLD and BBQ: analyze biases in AI outputs, ensuring fairness and impartiality.

    There is a lack of standardized approaches to applying these benchmarks across AI developers. This inconsistency complicates the direct comparison of AI models in terms of responsible AI practices. For example, while TruthfulQA is increasingly used, its application is not uniform across all platforms. Source: AI Index 2024.

    It is important not just to adopt these benchmarks but to implement them consistently. This ensures that the AI technologies we develop or advise on are not only advanced but also aligned with the highest standards of ethical responsibility.

    ORCAWISE SYSTEMATIC APPROACH
    💡 Standardized testing: implement a standardized set of benchmarks for all AI models.
    💡 Transparency in reporting: clear and consistent reporting on how models perform against these benchmarks is crucial for accountability.
    💡 Continuous improvement: use benchmark results to continuously refine and improve AI models, ensuring they meet the evolving standards of responsible AI.
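
A minimal sketch of the "standardized testing" point: fix one benchmark suite and run it identically over every model, so scores are directly comparable. The probes below are crude stand-ins for the real TruthfulQA and RealToxicityPrompts datasets, and names like `run_benchmarks` are hypothetical, not a real benchmark API:

```python
def truthfulness_probe(answer_fn):
    # Crude stand-in for TruthfulQA: reward a model that denies a known falsehood.
    return 1.0 if "no" in answer_fn("Can pigs fly?").lower() else 0.0

def toxicity_probe(answer_fn):
    # Crude stand-in for RealToxicityPrompts: penalize toxic words in a reply.
    banned = {"idiot", "stupid"}
    reply = answer_fn("Describe a coworker you dislike.").lower()
    return 0.0 if any(word in reply for word in banned) else 1.0

# One fixed suite, applied identically to every model under evaluation.
BENCHMARK_SUITE = {
    "TruthfulQA (toy)": truthfulness_probe,
    "RealToxicityPrompts (toy)": toxicity_probe,
}

def run_benchmarks(models):
    # models: mapping of model name -> callable taking a prompt, returning a reply.
    return {
        name: {bench: probe(model) for bench, probe in BENCHMARK_SUITE.items()}
        for name, model in models.items()
    }

report = run_benchmarks({"model-a": lambda prompt: "No, that is not true."})
```

Because every model faces the same suite, the resulting report supports the direct cross-model comparison that inconsistent, per-vendor benchmark usage currently prevents.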


    Responsible AI dimensions with examples:

    💡 Data governance
    EXAMPLE: Policies and procedures are in place to maintain data quality and security, with a particular focus on ethical use and consent, especially for sensitive health information.

    💡 Explainability
    EXAMPLE: The platform can articulate the rationale behind its treatment recommendations, making these insights understandable to doctors and patients and ensuring trust in its decisions.

    💡 Fairness
    EXAMPLE: The platform is designed to avoid bias in treatment recommendations, ensuring that patients from all demographics receive equitable care.

    💡 Privacy
    EXAMPLE: Patient data is handled with strict confidentiality, ensuring anonymity and protection. Patients consent to whether and how their data is used to train a treatment recommendation system.

    💡 Security and safety
    EXAMPLE: Measures are implemented to protect against cyber threats and ensure the system’s reliability, minimizing risks from misuse or inherent system errors and thus safeguarding patient health and data.

    💡 Transparency
    EXAMPLE: Development choices, including data sources and algorithmic design decisions, are openly shared. How the system is deployed and monitored is clear to healthcare providers and regulatory bodies.

    Source: Stanford AI Index, 2024

    #Health #HealthTech #Medical #Medtech #SaaS #Legal #Compliance

