Protecting Mental Privacy in the Age of Mind-Reading Technology

Nita Farahany's 2018 TED Talk on mind-reading technology and privacy highlighted a future that has quickly become our present reality. EEG devices now decode thoughts and moods, and AI can interpret simple brain activity. This evolution underscores the urgent relevance of Yuval Noah Harari's concept of "brain hacking", where advanced technologies infiltrate our most private realm: our minds. Farahany's call for a right to cognitive liberty, ensuring control over our thoughts and brain data, is more critical than ever. Harari warns that without such safeguards, our inner thoughts could be exposed, manipulated, or even criminalized.

As we embrace the benefits of neuroscience and AI, we must fiercely protect our mental privacy. Advocating for cognitive liberty now ensures our innermost thoughts remain our own. Let's advance technology responsibly, balancing innovation with the essential right to think freely.

#CognitiveLiberty #Privacy #AI #Neuroscience #MentalPrivacy #Innovation https://lnkd.in/eUJQ-E3s
Roshan Ragel’s Post
More Relevant Posts
-
In an age where technology shapes our lives more than ever, how can we protect our democracy? 🤔 Marietje Schaake, a former Member of the European Parliament and Fellow at Stanford University's Cyber Policy Center and the Stanford Institute for Human-Centered Artificial Intelligence, explores what she believes is one of the most urgent challenges of our time: the erosion of our own democratic institutions. In this thrilling conversation, Schaake discusses the strategies outlined in her new book, 'The Tech Coup: How to Save Democracy from Silicon Valley,' on how to reclaim democratic control in the digital age. Read the full Q&A here 👉 https://t.ly/Fwpzt #AI #TheTechCoup #TechEthics #CogX2024
-
In the face of the upcoming US election, it's essential to be aware of the increasing sophistication and affordability of AI tools used by cybercriminals. From deepfakes to Large Language Models, these tools may be used for financial exploitation and AI-driven social engineering attacks. Our research predicts a surge in cybercrime, with tools like Deepfake 3D Pro, voice cloning services, and live face replacement tools being used to impersonate candidates and endorse scams. Learn more here: https://bit.ly/48opy1v
-
Deepfake fraud is on the rise, and the industrialization of social engineering attacks through #GenAI is rapidly moving from the fringe to the mainstream. Today's phone scammers will soon be using GenAI voices, and cybercriminals will be using fake employees to subvert traditional security measures. All of this means that the way we verify who we are speaking to today is not something we will be able to trust over virtual channels. So what sort of changes do we need to see in collaboration tools and person-to-person authentication to combat the growing risk of impersonation? Can we make these mechanisms part of the platforms and easy to use? Or are we simply moving into a world where the ability to defraud someone using a deepfake identity is considered just part of life? I really hope we can respond, as the alternative is terrible.
Human Identification in a Deep Fake World
link.medium.com
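One toy illustration of the kind of person-to-person check the post above is asking about: a challenge-response exchange built on a secret the two parties agreed in person beforehand, something a cloned voice or swapped face cannot reproduce on its own. This is only a minimal sketch in Python using the standard library; the function names and the idea of reading the short response aloud are illustrative assumptions, not a description of any existing platform feature.

# Minimal sketch of a pre-shared-secret challenge-response check over a call.
# All names here (shared_secret, make_challenge, answer_challenge) are hypothetical.
import hmac, hashlib, secrets

def make_challenge() -> str:
    # The verifying party generates a fresh one-time challenge and sends or reads it out.
    return secrets.token_hex(8)

def answer_challenge(shared_secret: bytes, challenge: str) -> str:
    # The person being verified derives a short response from the secret and the challenge.
    mac = hmac.new(shared_secret, challenge.encode(), hashlib.sha256)
    return mac.hexdigest()[:8]  # short enough to say aloud on a call

def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison of the expected and received responses.
    return hmac.compare_digest(answer_challenge(shared_secret, challenge), response)

secret = b"exchanged-in-person-earlier"   # agreed face to face, never sent over the channel
c = make_challenge()
r = answer_challenge(secret, c)            # computed by the person proving their identity
print(verify(secret, c, r))                # True only if both sides hold the same secret

Real deployments would need key distribution, rotation, and serious usability work far beyond this, which is exactly the platform-level gap the post is pointing at.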
-
🚀 Exciting developments in AI legislation! California's SB 1047 is designed to prevent potential AI disasters before they happen. While there’s no real-world precedent for AI systems harming people or being used in cyber attacks (outside of sci-fi movies), lawmakers are proactively seeking to implement safeguards. However, Silicon Valley experts warn that this bill might lead to unintended consequences and stifle innovation. What are your thoughts on balancing safety with progress? #AIlaw #Innovation #TechNews
-
Brain data laws are crucial to regulate the sensitive neural data collected through techniques like brain-computer interfaces (BCIs). This data, often used in consumer tech and research, can reveal deeply personal information about an individual. Without proper regulation, there are risks of privacy violations, discrimination, manipulation, and security breaches. How can we balance technological advancements with safeguarding personal privacy in this emerging field? Read more: https://zurl.co/jDii #BrainDataPrivacy #Legislation #Neurotechnology
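As one concrete illustration of a safeguard that often comes up alongside regulation (a sketch only, not a claim about how any current BCI or consumer neurotech product works): raw neural signals can be encrypted on the user's own device before anything is transmitted, so a breach of a vendor's servers exposes only ciphertext. The example below uses the third-party Python "cryptography" package, and the EEG sample values are invented for illustration.

# Illustrative sketch: encrypt a window of raw EEG data on-device before upload.
# Requires the third-party "cryptography" package; all data values are made up.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice kept in the device's secure storage, never uploaded
cipher = Fernet(key)

raw_eeg_window = {"channel": "Fp1", "sample_rate_hz": 256, "samples": [12.4, 11.9, 13.1]}
token = cipher.encrypt(json.dumps(raw_eeg_window).encode())   # only this ciphertext leaves the device

# Decryption is possible only where the key lives, i.e. on the user's own device.
restored = json.loads(cipher.decrypt(token))
assert restored == raw_eeg_window

Encryption alone does not resolve the consent, discrimination, or manipulation risks the post describes; it only narrows the security-breach surface, which is why the legal questions remain central.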
-
This is the final episode of a trilogy of critical conversations about the digital revolution. Earlier this week, Gary Marcus explained how to tame Silicon Valley's AI barons. Then Mark Weinstein talked to us about the reinvention of social media. And now we have the former Member of the European Parliament and current Fellow at Stanford's Cyber Policy Center, Marietje Schaake, explaining how we can save democracy from Silicon Valley. In her provocative new book, The Tech Coup, Schaake explains how, under the cover of "innovation," Silicon Valley companies have successfully resisted regulation and have even begun to seize power from governments themselves. So what to do? For Marietje Schaake, in addition to government regulation, what we need is a radical reinvention of government so that our political institutions have the agility and intelligence to take on Silicon Valley.
-
The privacy of our brains ("brain privacy") and our freedom of thought ("cognitive liberty") are increasingly vulnerable in the face of developments in neurotechnology and its commercialisation. My impression, however, is that these matters still figure on the margins of mainstream regulatory discourse, particularly on privacy and data protection. They deserve far more attention. For an illuminating overview of regulatory developments, with lots of links to further reading, see the blogpost by José M. Muñoz: https://lnkd.in/gJFCm8cr
Brain Privacy Rights Are Not Enough—Neurotech Calls for Strengthening Freedom of Thought | TechPolicy.Press
techpolicy.press
-
Trivia I just picked up: there are six core principles that should guide all work around AI: privacy and security, inclusiveness, accountability, transparency, fairness, and reliability and safety.
-
Why care about privacy? What's wrong with being constantly surveilled, tracked, and categorized? Well, let me tell you: your humanity is reduced to data points that are then judged by quantitative models run by equally unfeeling people. When you are judged by quantitative models working off of data, there is no longer a need for the parts that make you human: your hopes, fears, dreams, intentions, who you care for, who you trust. Note that this includes the bad parts too: your failures, your misdeeds, and what you have learned from them. All of it abstracted away and reduced to data for processing by AI. Protecting privacy is akin to protecting humanity.