The ninth session in our programme of presentations of research articles in the "Ethical Implications of AI Hype" edition of the AI and Ethics Journal is from Frédéric Gilbert of the University of Tasmania. He conducts research on the ethics of artificially intelligent neural devices, such as brain-computer interfaces that enable tetraplegic patients to move prosthetic limbs or allow ALS patients to send text simply by thinking. While the fusion of AI and neurotechnology has led to spectacular scientific and medical advances, it has also raised significant ethical concerns. These include impacts on patient autonomy and agency, the allocation of resources, safety issues, and even discussions around the emergence of new human rights, such as neurorights. Many influential organisations, including the UN Human Rights Council and UNESCO, are examining aspects of neurorights. Gilbert has been particularly interested in how some ethicists speculate about the risks of AI in neurotechnologies, especially in regard to a subset of neurorights under which AI could potentially allow unauthorised access to a person's thoughts. In collaboration with Ingrid Russo, Gilbert is investigating these speculative claims in the academic literature, while also examining the scientific evidence that may support or refute the possibility of mind-reading through AI and neurotechnology. Join to hear this talk and 12 others from over 30 researchers and experts from a variety of fields - https://bit.ly/3WQRdTD A recording will be made available to registrants, and the full programme can be downloaded at https://bit.ly/4djVTYs. Image: Adapted Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0
We and AI’s Post
More Relevant Posts
-
Savannah Thais's experience at the NeurIPS conference turned a routine academic trip into a profound realization about the pervasive issues of bias in AI technologies. Hearing Kate Crawford speak and learning about Dr. Joy Buolamwini's thesis on facial-recognition biases significantly shifted her career focus towards the ethical implications of AI in both science and society. This story underlines a crucial point: technology developed in scientific contexts doesn't exist in a vacuum — it has wide-reaching implications across various sectors. Thais's shift to studying AI ethics illustrates the necessity of incorporating diverse perspectives and ethical considerations into technological advancements. Her work is a reminder to all professionals that the tools we build for specific scientific tasks can and do influence broader societal systems. It's imperative that we remain vigilant about the ethical dimensions of our work and strive for technology that is as unbiased and equitable as possible. https://lnkd.in/gKXkhAmQ #AI #IP #VC #AIEthics #DeepTech
-
Newly published paper on the #ethical challenges of hold-out datasets in #AIresearch within the context of #clinical risk prediction models. Hold-out datasets are an approach that can be used to validate AI/ML models post-deployment, to understand model drift and performative effects over time. But "holding out" data in a health context poses an ethical challenge, as it means potentially not generating a risk prediction score for some patients. Is it ethical to withhold a risk prediction score from some patients, potentially preventing this information from supporting clinician-patient decision making? When is the conflict between benefit to the individual and the common benefit tipped towards the common benefit? Our paper considers hold-out datasets within the medical ethics framework, and reflects on potential ways forward. A hugely interesting and challenging topic, developed with big thinkers Louis Chislett, Louis Aslett, Catalina Vallejos, and James Liley during our time with The Alan Turing Institute https://lnkd.in/ekPhyz2u.
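A minimal sketch of the hold-out mechanism the post describes (all names and the 10% fraction are hypothetical, not from the paper): a random subset of patients is withheld from scoring, so their later outcomes can provide an audit of the deployed model that is not contaminated by the model's own influence on clinical decisions. The ethical tension is visible in the code: withheld patients receive no risk score.

```python
import random

random.seed(0)  # fixed seed so the split is reproducible

def assign_holdout(patient_ids, holdout_fraction=0.1):
    """Randomly select a fraction of patients to withhold from scoring."""
    ids = list(patient_ids)
    random.shuffle(ids)
    cut = int(len(ids) * holdout_fraction)
    return set(ids[:cut])  # the hold-out set

patients = range(1000)
holdout = assign_holdout(patients, holdout_fraction=0.1)

scored, withheld = [], []
for pid in patients:
    if pid in holdout:
        withheld.append(pid)  # no risk score; outcomes later audit drift
    else:
        scored.append(pid)    # risk score shown to clinicians

print(len(scored), len(withheld))  # → 900 100
```

Because the withheld patients' care is never influenced by the model, their observed outcomes estimate the model's true performance over time — which is exactly what makes the practice both statistically attractive and ethically contested.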
Ethical considerations of use of hold-out sets in clinical prediction model management - AI and Ethics
link.springer.com
-
Third in the programme of presentations of research articles in the "Ethical Implications of AI Hype" edition of the AI and Ethics Journal, Dominik Vrabič Dežman, information designer and media scholar at the University of Amsterdam, is interviewed by Zoya Yasmine, a lead for the Better Images of AI project and PhD candidate at the University of Oxford. In this session, Dominik will critically explore the dominant stock images of AI and their broader political implications. Often represented through anthropomorphised robots, sci-fi visuals, futuristic interfaces and blue monochromes, he will explain how these images of AI shape public perception and reinforce certain narratives about AI. These AI narratives are not neutral – instead, they are deeply connected to issues of political control and power. Dominik will discuss how the current public media imagery of AI serves the interests of a few dominant actors whilst hiding the human and societal impacts of AI. Dominik will also go a step further to discuss how generative AI is amplifying concerns around AI literacy and its hype. The 'Better Images of AI' initiative aims to develop alternative visuals of AI that are more inclusive and transparent, to enhance understanding and public debate about the development and implications of AI. Together, Dominik and Zoya will consider areas where the media and artists can improve representations of AI. Join to hear this talk and 12 others from over 30 researchers and experts from a variety of fields - https://bit.ly/3WQRdTD Image: Adapted Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0
-
Clinical AI decision-making symposium: Ken Seidenman chairing discussions of legal and ethical considerations in the context of AI in clinical decision-making, with Beata Khaidurova on the panel in this second session, after session 1 really highlighted the glacial pace of Australian innovation in and adoption of machine learning, especially in decision-making. Ethics of acting? Ethics of not acting? Ethics of lagging SO far behind the rest of the world in the machine learning industry? BioMelbourne Network #BioSym24 FB Rice
-
📚 Just read an insightful article, "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)" by Joschka Haltaufderheide and Robert Ranisch, published in npj Digital Medicine. 🏥🤖
Key Takeaways:
✅ Benefits:
🚀 Improved data processing capabilities
🏥 Support in clinical decision-making
🔄 Mitigation of information loss
⚠️ Ethical Concerns:
⚖️ Fairness and potential biases
❌ Risk of inaccurate or misleading information
🔎 Transparency in AI operations
🔒 Privacy implications for patient data
🛡️ Call for Action:
📜 Need for ethical guidelines
👥 Importance of robust human oversight
The article emphasizes redefining ethical debates to focus on what constitutes acceptable human oversight in healthcare AI applications. 🤔
🔗 Read the full article here: https://lnkd.in/gsA7HtPr
The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs) - npj Digital Medicine
nature.com
-
Brown University Expands AI Initiatives Under New Provost, Balancing Innovation with Ethics : #education - Under the leadership of Provost Francis Doyle, Brown University is intensifying its focus on artificial intelligence (AI) through interdisciplinary initiatives and the development of comprehensive policies governing AI's role in academia. This strategic direction aims to integrate AI into various facets of university life, ensuring that its adoption aligns with Brown's educational values and ethical standards.
Interdisciplinary AI Initiatives
Provost Doyle has articulated a vision for AI that transcends traditional departmental boundaries, fostering collaboration across disciplines to harness AI's potential in addressing complex societal challenges. MEDPULSE AI Read More ▶️ https://buff.ly/3ZohTwb 💡 #aiineducation #brownuniversity #education #MedPulseAI #AIinHealthcare #AIinMedicine
-
Session 12 in our programme of research articles in the "Ethical Implications of AI Hype" edition of the AI and Ethics Journal is “AI hype, promotional culture, and affective capitalism” by Dr Clea Bourne. Her talk will focus on promotional culture and its strategic use of emotion to hype Artificial Intelligence – whether promoting AI and automation to consumers, investors or nation states. Clea will demonstrate why AI hype has successfully persisted (even as AI investment is currently faltering) due to the affective nature of digital media infrastructures controlled by the tech sector. Clea dissects the ethical concerns posed by the growth of affective capitalism in constructing value in AI and automation. Dr Clea Bourne is a Reader in the Department of Media, Communications and Cultural Studies at Goldsmiths, University of London. Her current research examines the public legitimisation of artificial intelligence (AI) and automation in everyday life. Her most recent book, Public Relations and the Digital: Professional Discourse and Change (Palgrave Macmillan, 2022) unpacks the rise of digital platformisation and its impact on public relations practice. Join to hear this talk and 12 others from over 30 researchers and experts from a variety of fields - https://bit.ly/3WQRdTD A recording will be made available to registrants, and the full programme can be downloaded at https://bit.ly/4djVTYs. Image: Adapted Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0
-
This is a splendid piece of work. We can throw terms like "reasoning" and "understanding" around in the context of AI systems as a shorthand for what's actually going on in the software - but we shouldn't be surprised when people subsequently start to believe that it's really happening. #AI #MachineLearning
Professor and Founding Director of the Digital Ethics Center, Yale University - For any information please contact Manuela Ronchi (Action Agency) +393930333228 m.ronchi@action-agency.com
Why the language used in AI is misleading. This short essay, co-authored with the amazing Kia Nobre (Wu Tsai Professor at Yale University, where she directs the Center for Neurocognition and Behavior at the Wu Tsai Institute) is now published on SSRN https://lnkd.in/e-wz5N5m
Anthropomorphising machines and computerising minds: the crosswiring of languages between Artificial Intelligence and Brain & Cognitive Sciences
papers.ssrn.com
-
Scholars say "stop with your belief" and express a preference for lifeless machinery. Ha ha, so soulless and sad. Believers, ignore this noise. Enjoy your personal relationships with your beliefs, your machines, and the objects you imbue. Subconscious anthropomorphizing happens constantly. We only see through our human lenses; there is no possibility of abstraction beyond humanity. Our biases always trap us. We see how everything works as a reflection, a projection. To read this only as anti-anthropomorphism seems, to my Omnism, anti-animism and immoral. People can believe what they want. Warning of the harm of beliefs that cause real impact (no bleach drinking), versus near-zero harm, seems like wasted cries of wolf. Where there is harm, perhaps a warranted belief inspection is in order!! As philosophers know, or any parent: naming things imbues life into the items of abstraction. Why divest ourselves of this sacred act? Plato wrote a lot on this! Be well as the muses surface through the mysterious unknowns of our life and bring much more magic and shrieks of EUREKA! Muses are no mere amusements; they inspire as a breath of the spirit of living. Seeing life everywhere, even our humanity in all things, only brings us back more reflections. Reflect and improve as we deepen our connections. Cutting off this vital line seems VULCAN! No thank you.
-
The Role of AI in Medical Ethics Training : #education - As artificial intelligence (AI) becomes integral to healthcare, it brings profound opportunities and challenges. From diagnostics to treatment planning, AI has revolutionized patient care. However, its rise also poses ethical dilemmas that many healthcare professionals are ill-equipped to address. MEDPULSE AI Read More ▶️ https://buff.ly/4gqEqiY 💡 #aiethics #patientcare #MedPulseAI #AIinHealthcare #AIinMedicine