Autorité de la concurrence du Grand-Duché de Luxembourg’s Post
💻 On 15 October, the Luxembourg Competition Authority attended the Disinfo Day on the impact of #AI on #disinformation, organised by the Remedis (INTER) Project and EDMO Belux at RTL Luxembourg.
👨‍💻 Among other topics, the participants addressed how #AI may be a source of both #misinformation and #disinformation, and how #generativeAI is likely to amplify these issues.
👩‍🏫 Participants also noted how AI-generated deepfake images, sounds, and videos put fact-checkers at risk of being manipulated themselves.
📘 The Competition Authority has issued a guide to help internet users spot misinformation and act against potentially illegal content, such as hate speech and deepfakes.
🔗 ➡ https://lnkd.in/g35Xm9j4
More Relevant Posts
-
I have always wondered whether anyone has researched the role of media-induced perception in the development (real and/or perceived) of artificial intelligence (AI), and AGI in particular. “A major highlight of its findings is that the time respondents expect it will take to develop an AI with human-level performance has dropped by between one and five decades since the 2022 survey was taken. This significantly accelerated timeline is largely due to ChatGPT’s global impact in 2023, though other factors also likely play a role.” https://lnkd.in/dgiMN4e2
-
While it is still unclear what impact generative AI and deepfakes will have on news and information, that question has long been answered in other domains. There were 550% more deepfake videos online in 2023 than in 2019, and an astonishing 98% of these deepfake videos are explicitly pornographic. These figures show how AI enables the abuse of women at unprecedented scale and speed. Journalist and activist Luba Kassova calls for regulatory action: https://lnkd.in/egZ_38U5 Stats: https://lnkd.in/egtVdFGX
-
Dive into a thought-provoking article that explores the unnerving yet intriguing world where artificial intelligence interacts with explicit digital content - 'Pimping AI'. As technology advances, so does the ease of creating deepfakes with the aid of complex AI systems. It's a meeting point for ingenuity and malevolence, resulting in manipulative illusions that amplify 'technological predation'. But how well-equipped are our favourite social media platforms to tackle this issue? And what does it mean for the victims, particularly women who find themselves at the centre of these heinous acts? Head over to this link and join the conversation. Let's strive for responsible #AI and safer online spaces. #Deepfakes #DigitalSafety #ArtificialIntelligence #TechEthics https://lnkd.in/eWm3i3pA
-
How should UK policymakers deal with the threat deepfakes pose to our society and democratic process? Today we launch our whitepaper, in which Henry Parker explores the impact of generative AI on disinformation, why solutions to the problem have to go beyond watermarking, and what the UK Government and social platforms can do to protect us. https://lnkd.in/eQYZ9qkR #deepfakes #AI #generativeAI
-
#Breaking - Authorities investigate an alarming surge of bigoted text messages across the US. Read more: https://lnkd.in/eXMz3c_D #alarm #text #usa #technology #ai #misinformation This article has been fact-checked by Oigetit ✅ www.oigetit.ai - Using AI to fight Misinformation & Fake News
-
Anthropic is the only company so far to develop what it calls 'Constitutional AI' for the Claude LLM, which is essentially a set of guardrails/controls/best practices for the ethical and impactful use of AI. This was a major research program which solicited input from industry and policymakers at large. I'm betting that a similar 'Constitutional AI' pack will emerge that is agnostic to the LLM or any other language model. I have started work in this area, made significant progress, and am looking for POVs to make it a holistic solution. Feel free to contribute via comments or DM me. On a side note, identity management is becoming complex for generative AI applications. It will be interesting to see how identity evolves in this space. #responsiblerisk #responsibleAI #AISGRC
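To make the idea of a model-agnostic 'constitutional' pack concrete, here is a minimal sketch of a critique-and-revise guardrail loop. The principles, prompt wording, and the generic `generate(prompt)` callable are hypothetical placeholders, not Anthropic's implementation (which applies constitutional principles during training rather than at inference time).

```python
# Illustrative sketch only: a model-agnostic critique-and-revise guardrail loop
# inspired by the "constitutional" idea. PRINCIPLES, the prompts, and the
# `generate` callable are placeholders, not Anthropic's implementation.
from typing import Callable

PRINCIPLES = [
    "Do not produce content that harasses, demeans, or deceives people.",
    "Refuse requests to impersonate real individuals without their consent.",
    "Explain refusals briefly and offer a safer alternative where possible.",
]

def constitutional_review(generate: Callable[[str], str], user_prompt: str) -> str:
    """Draft an answer, critique it against each principle, and revise if needed.

    `generate` can wrap any chat/completion API, which keeps the loop LLM-agnostic.
    """
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            "Critique the response below against this principle:\n"
            f"{principle}\n\nResponse:\n{draft}\n\n"
            "List any violations, or reply 'none'."
        )
        if "none" not in critique.lower():
            draft = generate(
                f"Rewrite the response so it satisfies the principle '{principle}' "
                f"while staying helpful:\n\n{draft}"
            )
    return draft
```

A pack along these lines could ship the principle list and prompt templates separately from any particular model, which is roughly what making it LLM-agnostic would mean in practice.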
-
The numbers don’t lie. Tenable is ranked #1 in Device VM market share for the sixth consecutive year, according to IDC. The report highlights Tenable’s use of generative AI, noting, “ExposureAI, available as part of the Tenable One platform, provides GenAI-based capabilities that include natural language search queries, attack path and asset exposure summaries, mitigation guidance suggestions, and a bot assistant to ask specific questions about attack path results.” https://oal.lu/n7O9Z
-
This article highlights some of the less grim potential uses of deepfakes, but it still raises questions about the ethics of using deepfakes after a person has passed away. What regulations need to be in place to ensure dignity and respect in digital immortality? Read the article via Science News Magazine: https://bit.ly/4bJB1ZY #DeepFakes #AI #AIEthics #AIRegulation #digitalimmortality
-
"#Artificial Intelligence (AI)" or "#Addictive Intelligence (AI)" In a recent article published in MIT Technology Review, Robert Mahari & Pat Pataranutaporn wrote an article titled "We need to prepare for ‘addictive intelligence’". They alarmed the people about the allure of AI companions, which is hard to resist, as well as brouhgt up the issues of how innovation in regulation can help protect people. They foresee a different, but no less urgent, class of risks: those stemming from #relationships with #nonhuman agents. AI companionship is no longer theoretical—our analysis of a million ChatGPT interaction logs reveals that the second most popular use of AI is sexual role-playing. We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers. For complete reading, please connect to: https://lnkd.in/gfkPZmRm