🧐 Tencent's Open-Source LLMs, XPeng's New AI Chip & Robot, and AI Safety Insights from Former OpenAI VP on Bilibili
Weekly China AI News from November 4, 2024 to November 10, 2024
Hi, this is Tony! Welcome to this week’s issue of Recode China AI, a newsletter for China’s trending AI news and papers.
Three things to know
Tencent’s Hunyuan-Large: A Powerful Open-Source MoE LLM
What’s New: Tencent last week open-sourced its latest LLM, Hunyuan-Large, a Mixture of Experts (MoE) model featuring 389 billion total parameters and 52 billion activated parameters. It is currently the largest open-source Transformer-based MoE model. Capable of handling context windows of up to 256K tokens, Hunyuan-Large leads across multiple benchmarks, outperforming other top-tier open models such as Llama 3.1 and Mixtral.
The release includes Hunyuan-A52B-Pretrain, Hunyuan-A52B-Instruct, and Hunyuan-A52B-Instruct-FP8, all of which are available on Hugging Face. Notably, the model is free to use commercially.
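For readers who want to experiment, here is a minimal loading sketch using Hugging Face's transformers library. The repository ID and generation settings are assumptions for illustration; check the official model cards before running.

```python
# Minimal loading sketch with Hugging Face transformers.
# The repo ID below is an assumption for illustration; confirm the exact name
# on the model card. trust_remote_code=True is often required for newly
# released architectures, and the full model needs multiple high-memory GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tencent/Hunyuan-A52B-Instruct"  # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```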
How It Works: Hunyuan-Large was pre-trained on 7 trillion tokens, comprising natural data plus approximately 1.5 trillion tokens of synthetic data. The synthetic data plays a crucial role in improving the model’s understanding of specialized fields such as mathematics and coding.
Key innovations of Hunyuan-Large include KV cache compression through Grouped-Query Attention (GQA) and Cross-Layer Attention (CLA), which reduce the memory needed for storing key-value pairs by up to 95%. Additionally, the recycle routing mechanism helps redistribute tokens from overloaded experts to those with available capacity, ensuring balanced workloads and minimizing information loss during training.
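To make the memory savings concrete, here is a back-of-the-envelope sketch of how grouped KV heads and cross-layer sharing shrink the cache. The layer and head counts below are illustrative assumptions, not Hunyuan-Large's published configuration.

```python
# Back-of-the-envelope KV-cache arithmetic for GQA + CLA. The layer/head
# counts and sharing factor are illustrative assumptions only.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Memory to store keys and values for one sequence (fp16/bf16 by default)."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

layers, q_heads, head_dim, seq_len = 64, 64, 128, 256_000

baseline = kv_cache_bytes(layers, q_heads, head_dim, seq_len)  # vanilla MHA: one KV head per query head
gqa = kv_cache_bytes(layers, 8, head_dim, seq_len)             # GQA: 8 query heads share each KV head
gqa_cla = kv_cache_bytes(layers // 2, 8, head_dim, seq_len)    # CLA: adjacent layers share one KV cache

for name, b in [("MHA", baseline), ("GQA", gqa), ("GQA+CLA", gqa_cla)]:
    print(f"{name:8s} {b / 2**30:7.1f} GiB  ({1 - b / baseline:.0%} saved)")
```

With these assumed settings, grouping KV heads alone saves roughly 88% of the cache, and sharing it across adjacent layers pushes the savings toward the ~95% figure Tencent reports.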
Hunyuan-Large also employs expert-specific learning rate scaling, applying distinct learning rates for shared and specialized experts. This approach helps preserve critical information while optimizing training, ensuring that both general and specialized capabilities are developed effectively.
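A minimal sketch of what expert-specific learning rates can look like in practice follows, assuming a toy MoE layer; the module names and the 0.3 scaling factor are illustrative, not Tencent's published values.

```python
import torch
import torch.nn as nn

# Toy sketch of expert-specific learning rates: shared experts train at the
# full rate while routed (specialized) experts use a scaled-down rate.
# Module names and the 0.3 scale are illustrative assumptions.

class ToyMoELayer(nn.Module):
    def __init__(self, dim=64, n_routed=4):
        super().__init__()
        self.shared_expert = nn.Linear(dim, dim)
        self.routed_experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_routed))

def build_optimizer(model, base_lr=3e-4, routed_scale=0.3):
    shared, routed = [], []
    for name, p in model.named_parameters():
        (routed if "routed_experts" in name else shared).append(p)
    return torch.optim.AdamW([
        {"params": shared, "lr": base_lr},                  # shared experts: full learning rate
        {"params": routed, "lr": base_lr * routed_scale},   # specialized experts: scaled rate
    ])

opt = build_optimizer(ToyMoELayer())
print([group["lr"] for group in opt.param_groups])  # roughly [3e-4, 9e-5]
```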
Hunyuan3D-1.0: Tencent has also introduced Hunyuan3D-1.0, a model supporting text- and image-conditioned 3D generation. Evaluations show that Hunyuan3D-1.0 achieved the highest user preference across five metrics compared to other open-source 3D methods.
XPeng Flexes AI Muscles with New Chip and Humanoid Robot
What’s New: Chinese EV upstart XPeng showcased an impressive array of AI-driven technologies for mobility and robotics at its AI Day last week. Highlights included XPeng's proprietary Turing AI chip for autonomous driving and the debut of Iron, a new humanoid robot.
Tell Me More: Following in Tesla’s footsteps with in-house car chips, XPeng introduced its Turing AI chip, featuring a 40-core processor capable of running models with up to 30 billion parameters. CEO He Xiaopeng said the chip’s computing power is equivalent to three of Nvidia’s last-generation Orin X automotive chips. With this chip, XPeng plans to launch an L3+ smart driving feature — allowing the driver to take their hands off the wheel and eyes off the road under specific conditions — within 18 months. To further its ambitions in autonomous driving, XPeng also unveiled Ultra, an autonomous vehicle designed without a steering wheel, targeting the growing global robotaxi market.
Drawing comparisons to Tesla’s Optimus, XPeng also introduced Iron, an advanced humanoid AI robot with more than 60 joints and 200 degrees of freedom. XPeng said the robot has already been deployed in its factories and stores.
Other major announcements included the Kunpeng Super Electric System, which features a range extender enabling vehicles to drive over 1,400 km. The system also offers ultra-fast charging—adding 1 km of range per second and reaching 80% capacity in just 12 minutes. The AI Battery Doctor actively monitors and optimizes battery health, extending battery life.
XPeng also presented XPeng AIOS, an in-car operating system integrating GPT-4o (and likely an equivalent LLM for the mainland China market). Powered by dual Turing AI chips, AIOS learns user habits and adapts in real-time, delivering a highly interactive and personalized experience. XPeng plans to roll out a chip upgrade program for existing customers to enhance driving and cockpit capabilities.
Why It Matters: XPeng is doubling down on AI and smart driving technologies to stand out in China’s highly competitive EV market. In the first ten months of 2024, XPeng delivered over 120,000 smart EVs, largely driven by the popularity of its $16,000 MONA M03 model. Despite this momentum, the company remains behind its 2024 sales target of 200,000 units.
OpenAI VP Advocates AI Safety in China Before Departure
What’s New: Lilian Weng, OpenAI’s Vice President of Research (Safety), announced her departure from OpenAI last Friday. Her exit is the latest in a string of departures by key safety researchers, raising questions about OpenAI’s commitment to safety research.
A graduate of China’s elite Peking University, Weng joined OpenAI in 2017 and played a key role in GPT-4’s development. Her famous formula, “Agent = LLM + memory + planning skills + tool use,” has been influential in shaping discussions around AI agents. In October, she was tasked with leading OpenAI’s safety-focused preparedness team.
Just a short time ago, Weng made a surprise appearance at a science-focused event organized by Bilibili, China’s long-form video-streaming platform. She delivered a keynote on “AI Safety and Nurturing,” a speech that explored the idea of training AI much as we guide the next generation of humans. Weng emphasized the importance of shaping and educating AI so that it serves humanity well while remaining safe. You can watch the full speech here.
More on Speech: In her keynote, Weng spoke about AI safety and alignment, which are increasingly critical as AI becomes more advanced. She compared AI’s development to human growth: AI must be carefully guided to avoid biases and adversarial exploitation. She cited healthcare as an example, explaining how AI models can misdiagnose due to biased training data. For instance, many health datasets are male-focused, which can lead to inaccurate risk assessments for women. The need for comprehensive, unbiased data was a central point in her approach to AI safety.
Weng also explained how Reinforcement Learning from Human Feedback (RLHF) is key to refining AI behavior. She likened RLHF to training a dog: rewarding the right behavior helps the model learn desired actions. By continually receiving feedback, the AI adjusts to better align with human values and expectations. Reinforcement learning, she noted, allows for finely tuned responses and even self-assessment, as in models built with the Constitutional AI framework, where the AI reviews its own outputs for adherence to ethical standards.
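As a rough illustration of the reward-the-right-behavior idea, here is a toy sketch of the reward-modeling step that typically underpins RLHF, trained on made-up preference pairs. The architecture and data are purely illustrative and are not OpenAI's pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: learn to score the human-preferred response above the
# rejected one (a Bradley-Terry style preference loss). Purely illustrative.

class TinyRewardModel(nn.Module):
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids):
        return self.score(self.embed(token_ids)).squeeze(-1)  # one scalar reward per response

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake preference data: each row pairs a "chosen" and a "rejected" token sequence.
chosen = torch.randint(0, 1000, (8, 16))
rejected = torch.randint(0, 1000, (8, 16))

for _ in range(100):
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()  # push chosen above rejected
    opt.zero_grad()
    loss.backward()
    opt.step()

# A policy LLM is then fine-tuned (e.g. with PPO) to maximize this learned
# reward, which is the "treat for good behavior" in Weng's dog-training analogy.
```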
Weng noted that OpenAI’s recent o1-preview model implements Chain-of-Thought (CoT) reasoning to enhance reliability and resistance to jailbreak attacks. Scalable oversight, which combines automated tools with human supervision, helps guide AI growth effectively. Public studies showed that AI-assisted annotators identified about 50% more issues in text summaries than annotators working without AI support.
Weekly News Roundup
Trending Research