Does AI need to have high ethical standards?

Is AI just another tool that doesn't need high ethical standards? Is it replacing humans in their jobs? Do we trust it yet?

Ethical practices, empathy, and algorithmic bias in AI are much-needed conversations today, and the Synthetic Intelligence Forum's event hosted by Vik Pant at MaRS delivered exactly that last night. Chris Mckillop of Turalt explored several layers of empathy and ethical concern in AI, including the need for introspective practices that go beyond an ethics checklist for companies diving headfirst into machine learning solutions. Carter Cousineau of Care.ai offered a peek into systems that are responsible yet innovative and compatible with human values. Lastly, Natalia Modjeska of the Info-Tech Research Group presented detailed research findings on algorithmic bias and ways to mitigate it in AI.

It seems to me that AI solutions reflect the full complexity of human nature, including our biases and ethical standards. As Natalia pointed out with the trolley-problem scenario of a child versus an elderly person, autonomous cars in Western countries tend to be tuned to spare the young at the expense of the elderly, whereas in Japan the preference shifts toward sparing the elderly, in keeping with a culture of respect for elders. We don't have a perfect society today, and the data sets we feed into AI lead machines to predict trends that are neither progressive nor ethical. This is exactly what we want to address at Rootquotient: to take accountability in navigating AI solutions for our clients. We need to build diversity into our team, educate ourselves and our customers, and, lastly, examine our own narratives when we set out to solve a problem with AI.

