MIT Releases Report on Challenges of “Responsible AI”
Rohit Mahajan
The MIT Sloan Management Review and Boston Consulting Group (BCG) have released an enlightening report on the use and development of responsible AI (RAI) tools that any organization using third-party suppliers of AI software needs to be aware of.
With the rise in popularity of third-party or "open access" generative AI tools such as ChatGPT and its ilk, the ethical use of AI and the challenges of RAI have been thrust into the headlines. According to the framers of the MIT/BCG report, "Countless examples have emerged of the chatbot fabricating stories, including falsely accusing a law professor of sexual harassment and implicating an Australian mayor in a fake bribery scandal, leading to the first lawsuit against an AI chatbot for defamation. In April, Samsung made headlines when three of its employees accidentally leaked confidential company information, including internal meeting notes and source code, by inputting it into ChatGPT. That news prompted many companies, such as JPMorgan and Verizon, to block access to AI chatbots from corporate systems."
This unprecedented, and some would even say reckless, pace of AI advancement is making it harder to use AI responsibly and is putting pressure on RAI programs to keep up. The report specifically warns of organizations' growing dependence on the burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI — algorithms (such as ChatGPT, Dall-E 2, and Midjourney) that use training data to generate realistic or seemingly factual text, images, or audio. The MIT/BCG report says that the internal use of such tools is exposing businesses to new commercial, legal, and reputational risks that are concerning and difficult to track.
The researchers discovered that the vast majority of organizations they surveyed use these kinds of third-party AI tools, and that a majority rely on them exclusively, having no internally developed AI of their own. This puts them at particular risk since, in many cases, "managers may lack any awareness about the use of such tools by employees or others in the organization — a phenomenon known as shadow AI."
The authors of the report say that the fundamental issue is that organizational RAI programs are struggling to keep pace with technological advancements in AI. These advancements are growing the ecosystem of available third-party AI solutions and making it easier to use AI throughout the organization, but they are also expanding the scope and complexity of risks that RAI programs must address.
Last year, the same research group published a report titled "To Be a Responsible AI Leader, Focus on Being Responsible," in which they concluded that successful RAI efforts may have more to do with being a responsible organization than with AI as a technology. This year, they have followed up with a new report, "Building Robust RAI Programs as Third-Party AI Tools Proliferate," in which the team focuses more narrowly on the extent to which organizations are addressing risks stemming from the use of internally and externally developed AI tools, such as generative AI.
How BigRio Helps to Support Responsible AI Solutions
At BigRio, we understand how AI can help companies solve complex business challenges and identify new opportunities. However, we also recognize that organizations need to be able to trust this technology.
This is why we are dedicated to providing solutions for, and supporting, AI and Digital Health companies and startups that focus on the ethical and responsible use of AI. We are committed to building AI solutions with clear and transparent rules on how they process and use data, so that your company gets the insights it needs to be more productive while remaining on solid ethical, responsible, and legal ground.
Digital Health companies and AI startups face numerous challenges when it comes to demonstrating their value proposition, not the least of which is RAI. This is particularly true for advanced AI solutions in pharma and healthcare — an area in which BigRio specializes. We have taken an award-winning and unique approach to incubating and facilitating startups, one that allows R&D teams and stakeholders to collaborate efficiently and shape the process around their actual ongoing needs and the particular challenges of AI adoption in healthcare.
You can read much more about how AI is redefining healthcare delivery and drug discovery in my new book Quantum Care: A Deep Dive into AI for Health Delivery and Research. It’s a comprehensive look at how AI and machine learning are being used to improve healthcare delivery at every touch point, and it discusses many of the same issues raised in the MIT/BCG report.
Rohit Mahajan is a Managing Partner with BigRio. He has particular expertise in the development and design of innovative solutions for clients in Healthcare, Financial Services, Retail, Automotive, Manufacturing, and other industry segments.
BigRio is a technology consulting firm empowering data to drive innovation and advanced AI. We specialize in cutting-edge Big Data, Machine Learning, and Custom Software strategy, analysis, architecture, and implementation solutions. If you would like to benefit from our expertise in these areas or if you have further questions on the content of this article, please do not hesitate to contact us.