We're standing at a fascinating threshold: industry experts suggest we're just five years away from AI systems that can improve their own code. By 2030, we might see AI reaching 90% expertise across multiple fields simultaneously, something no single human can match. Think about that: an intelligence combining the capabilities of our best physicists, chemists, and artists. We've never seen anything like this in human history. Looking at this trajectory, we have to ask ourselves: how will a super-intelligent future transform what it means to be an expert, or even what it means to be human?
Briar Prestidge’s Post
More Relevant Posts
-
The debate surrounding Gödel's theorems often includes the argument that human intelligence is non-algorithmic, allowing humans to intuitively grasp truths that machines cannot. While Gödel's theorems imply that there are statements beyond formal proof, they do not definitively rule out the possibility of AI achieving similar insights through external information or probabilistic reasoning. This perspective suggests that human-like reasoning could potentially be modeled algorithmically, challenging the notion that human cognition is fundamentally different from machine processing.
-
Artificial intelligence usage is rapidly increasing. Erik Brynjolfsson, one of the most-cited authors on the economics of information and an expert on AI, spoke at one of Insight's recent summits. Here we present his arguments on why embracing AI’s creative destruction may offer solutions to society’s challenges. Read more here: https://bit.ly/3x1DqRf Capital at risk. For professional investors only.
-
Another interesting article on building machines that can learn and think with people. The authors show how collaborative cognition can be used to create AI systems that serve as genuine "thought partners" for humans by meeting the criteria we expect of such systems: being reasonable, insightful, knowledgeable, reliable, and trustworthy enough to think with us. These expectations may sound like science fiction, but the authors make their case through examples of thinking ecosystems (comprising both humans and machines) and methods such as the Bayesian Thought Partner Toolkit.
-
Here are 4 things generative AI can't do today. Some things that are easy for humans remain difficult for the current generation of LLMs. Today, LLMs: 1️⃣ Cannot avoid hallucination 2️⃣ Cannot predict the accuracy of their own answers 3️⃣ Cannot apply complex rules as intelligently as a human 4️⃣ Cannot decide to break a rule as intelligently as a human 💡
-
In artificial intelligence, we strive for systems that can understand, learn, and adapt. Isn’t that the quest of human existence, too? To understand the world around us, learn from our experiences, and adapt to the ever-changing landscape of life. Each of us is an AI in the flesh, running on the software of consciousness, navigating the complex dataset of reality.
-
"Can you respond to my question?" - which question will you ask? Yet this is the most common problem in generative AI: users want computers to read their minds! They write terse prompts without enough information to answer the question properly, then blame RAG, the LLM, or hallucinations. The best fix: write a longer, better prompt.
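To make the advice concrete, here is a toy sketch of what "a longer, better prompt" looks like in practice. The helper `build_prompt` and the example details are hypothetical, not from the post or any specific library; the point is simply that the detailed prompt supplies the context, audience, and output format that a terse prompt leaves for the model to guess.

```python
def build_prompt(question, context=None, audience=None, output_format=None):
    """Assemble a prompt; the optional fields are the details users often omit."""
    parts = [f"Question: {question}"]
    if context:
        parts.append(f"Context: {context}")
    if audience:
        parts.append(f"Audience: {audience}")
    if output_format:
        parts.append(f"Answer format: {output_format}")
    return "\n".join(parts)

# A terse prompt: the model has nothing to work with.
terse = build_prompt("Can you respond to my question?")

# A detailed prompt: same helper, but with the missing information filled in.
detailed = build_prompt(
    "Why does our nightly ETL job fail on weekends?",
    context="The job reads from a partner API that is offline on Saturdays.",
    audience="a junior data engineer",
    output_format="three bullet points with a suggested fix",
)
```

The second prompt costs a minute more to write, but it removes the guesswork that otherwise gets blamed on the retrieval layer or the model.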
-
Regarding the latest AI takeover in the creative sector, what are your thoughts? ☣
-
In discussing general intelligence and AI approaches with my new co-founders today, I randomly remembered Ben Goertzel's "Between Ape And Artilect", page 28. What fired together, wired together, eh? Anyhow, when investigating what general intelligence is, don't just ask what the necessary and sufficient ingredients (components) are; rather, try to create a taxonomy of intelligences, and not just with a hard divide between narrow/brittle and general/robust, but with everything in between, and especially consider different general-AI candidates. "AGI" is not an on/off switch. There is a threshold of capability beyond which different forms of general ability can arise.
-
🔥Hot Off the Press!🔥 The CEO of Anthropic proposes a fascinating concept - the future of AI could be a hive-mind embedded in a corporate structure! 🕴️🤯 This thought-provoking new direction could entirely revolutionise the way AI systems work. Curious to know more? 👉 Check here:
-
"We need a system that can generalize with much less data and far fewer resources, and be able to form concepts in real time. Understanding how the mind works (not the brain) is critical to designing such an intelligent system." AI pioneer Peter Voss sharing his insights on how to build an intelligent system. Excited and looking forward to the full episode tomorrow, Jason Scharf.
Master Future Tech (AI, Web3, VR) with Human Impact | CEO & Founder, Top 100 Women of the Future | Award-winning Fintech and Future Tech Influencer | Educator | Keynote Speaker | Advisor | (ex-UBS, AXA C-Level Executive)
2mo · The big question Yuval Harari also raises in his books: what does it mean to be human?