Weekly: AI & Tech Insights

Nvidia Announces Next-Generation AI Chip Platform

Nvidia CEO Jensen Huang has revealed the company's upcoming AI chip platform, named "Rubin," which is set to launch in 2026. Although specifics about the Rubin chip family remain under wraps, Huang announced that Nvidia plans to release a new AI chip family annually moving forward.

The Importance of Terminology in AI

A recent paper by Dr. Gina Helfrich of the University of Edinburgh emphasizes the significance of accurate terminology in the AI industry. She argues that the term "frontier AI" should be retired because it distracts from the current social, psychological, and environmental harms caused by large language models. Helfrich states, “Profit. Danger. Outer space. Progress. These are the connotations of ‘frontier AI.’ It should be obvious that ‘frontier AI’ is an exercise in AI hype, given these connotations.”

Is Deep Learning Hitting a Wall?

Cognitive scientist Gary Marcus has reiterated his 2021 claim that deep learning is facing diminishing returns. He argues that despite significant investments and purported advancements, issues like hallucinations, unreliability, and poor reasoning remain unsolved. Marcus’ critique comes amid debates on whether the key to general intelligence lies in more data or fundamentally different approaches.

AI Workers Accuse U.S. Big Tech of Modern-Day Slavery

AI data labelers and content moderators in Kenya have accused major U.S. tech companies of abusive practices. In an open letter, 97 workers described their mentally and emotionally draining tasks, often performed for less than $2 per hour. They called on President Biden to hold these companies accountable for violating local labor laws and international standards. Meta declined to comment, and OpenAI did not respond.

Microsoft's Global Datacenter Expansion

Microsoft has announced a $3.2 billion investment to expand its cloud and AI infrastructure in Sweden, deploying around 20,000 advanced chips. This move follows similar investments in the U.K., Germany, and Spain. Additionally, Microsoft has pledged to operate its global datacenters sustainably, aiming for 100% renewable energy use by 2025.

Open Source and the Danger of Open Washing

A new study highlights the issue of "open washing" in AI, where companies claim their models are open-source without adhering to true transparency. The EU AI Act’s exceptions for open-source systems make this a critical issue. The study suggests that while full openness isn’t always the solution, transparency about what is open and how open it is can lead to better decision-making.

The Impact of AI Doomerism

AI researcher Francois Chollet has expressed concern over AI doomerism, where people believe AI will lead to humanity’s extinction in 10-20 years. Chollet argues that this mindset drives irrational behaviors and diverts attention from real, current risks. He emphasizes that current AI technologies are not capable of becoming out-of-control superintelligences.

Sam Altman on AI and the Future Social Contract

OpenAI CEO Sam Altman discussed the need for a new social contract to manage AI’s impact on society at the UN’s AI for Good summit. He believes AI will benefit the poorest more than the richest, a view contested by experts who fear AI could exacerbate inequality. Altman also stressed the importance of viewing AI as a powerful tool that requires careful regulation and societal reconfiguration.

NASA’s New AI Weather Forecasting Model

NASA, in collaboration with IBM Research, has developed the Prithvi-weather-climate model for better storm tracking and climate forecasting. Trained on 40 years of data, the model promises to enhance understanding of atmospheric dynamics and improve public safety through more precise weather predictions.

New York Takes on Social Media Algorithms

New York state is set to vote on legislation targeting social media algorithms, aiming to reduce their addictive nature by prohibiting automated feeds and overnight notifications for minors without parental consent. This move comes in response to rising concerns about the mental health impacts of social media on young people.

Financial Incentives vs. Responsible AI

A group of current and former OpenAI employees have published an open letter stating that financial incentives in the AI industry often conflict with responsible governance. They call for better regulatory oversight, whistleblower protections, and more transparency from AI companies regarding their capabilities and safety measures.

Zoom CEO Envisions a Future of Digital Clones

Zoom’s CEO, Eric Yuan, envisions a future where AI-powered digital clones can handle 90% of work tasks, making real-time interactions more efficient. While this idea brings up numerous ethical and practical questions, Yuan believes AI advancements will soon make such capabilities feasible.

NOAA’s AI-Driven Rip Current Forecasts

NOAA’s experimental AI tool predicts rip current probabilities up to six days in advance, aiming to improve public safety by reducing drowning incidents. The model uses wave and water level data to make accurate predictions, potentially saving lives through better-prepared responses to dangerous currents.
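The idea of mapping wave and water level measurements to a rip current probability can be illustrated with a toy logistic model. This is purely a hypothetical sketch: the function name, inputs, and coefficients below are illustrative assumptions, not NOAA's actual model, which is trained on historical surf-zone data.

```python
import math

def rip_current_probability(wave_height_m: float, water_level_m: float) -> float:
    """Toy logistic model: higher waves and water levels raise rip risk.

    The weights here are illustrative placeholders, not NOAA's fitted values.
    """
    # Linear combination of the two inputs (coefficients are assumptions)
    z = -3.0 + 1.2 * wave_height_m + 0.8 * water_level_m
    # Logistic squash maps the score to a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

# Example: moderate surf conditions yield an intermediate risk estimate
print(rip_current_probability(2.0, 1.0))
```

In a real forecasting system, such a classifier would be trained on labeled historical observations and run on forecast wave and water level fields to produce probabilities out to six days.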

AI Performance in Medical Imaging: Worse than Random

A study from the University of California, Santa Cruz, found that leading AI models for medical visual question answering perform worse than random guessing on diagnostic questions. This highlights significant limitations in current evaluation methods and the need for more robust testing to ensure the reliability of AI in healthcare.

Gen Z's Hesitation Towards AI

A survey by Hopelab and Common Sense Media reveals that while half of young people have used generative AI, only 4% are daily users, and many are concerned about privacy or unsure how to use these tools. The study suggests that involving young people in the development of AI tools is essential for their future integration into society.

Microsoft’s AI-Enabled Recall Faces Criticism

Microsoft’s new AI feature "Recall" for its Copilot+ PCs, which takes screenshots of user activity, has been criticized for its security flaws. Cybersecurity researcher Kevin Beaumont discovered that Recall stores data unencrypted, making it vulnerable to hacking. Beaumont urges Microsoft to rework the feature to ensure user privacy and security.

Australian Scientists Fight Fires with AI

Researchers at the University of South Australia have developed an AI algorithm for cube satellites that can detect wildfires 500 times faster than current methods. This advancement aims to provide early warnings, allowing firefighters to respond more quickly and effectively to prevent large-scale wildfires.

Helen Toner on AI: Don’t Be Intimidated

AI expert Helen Toner encourages citizens to engage with AI technology without intimidation. She advises asking critical questions about how AI affects and could benefit one's life, stressing the importance of understanding and influencing the technology’s development and impact.

AI and Surveillance Capitalism

Meredith Whittaker argues that the current AI boom is driven by the surveillance business model of Big Tech companies. These companies have leveraged their platform and cloud monopolies to amass vast amounts of data and infrastructure, entrenching themselves in the AI landscape and raising significant privacy and ethical concerns.
