🧐Tencent's Open-Source LLMs, XPeng's New AI Chip & Robot, and AI Safety Insights from Former OpenAI VP on Bilibili

Weekly China AI News from November 4, 2024 to November 10, 2024


Hi, this is Tony! Welcome to this week’s issue of Recode China AI, a newsletter for China’s trending AI news and papers.

Three things to know

  • Tencent open-sources its latest MoE LLM, Hunyuan-Large.
  • XPeng showcases its proprietary Turing AI chip and humanoid robot Iron.
  • Former OpenAI VP Lilian Weng speaks about AI safety at a Bilibili event.


Tencent’s Hunyuan-Large: A Powerful Open-Source MoE LLM

What’s New: Tencent last week open-sourced its latest LLM, Hunyuan-Large, a Mixture of Experts (MoE) model featuring 389 billion total parameters and 52 billion activated parameters. This model is currently the largest open-source Transformer-based MoE model. Capable of handling context windows of up to 256K tokens, Hunyuan-Large leads across multiple benchmarks, outperforming other top-tier open models like Llama 3.1 and Mixtral.

The release includes Hunyuan-A52B-Pretrain, Hunyuan-A52B-Instruct, and Hunyuan-A52B-Instruct-FP8, all of which are available on Hugging Face. Notably, the model is available for commercial use.

How It Works: Hunyuan-Large was pre-trained on 7 trillion tokens, comprising natural data plus approximately 1.5 trillion tokens of synthetic data. The synthetic portion plays a crucial role in improving the model’s understanding of specialized fields such as mathematics and coding.

Key innovations of Hunyuan-Large include KV cache compression through Grouped-Query Attention (GQA) and Cross-Layer Attention (CLA), which reduce the memory needed for storing key-value pairs by up to 95%. Additionally, the recycle routing mechanism helps redistribute tokens from overloaded experts to those with available capacity, ensuring balanced workloads and minimizing information loss during training.
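The recycle-routing idea can be sketched in a few lines. Below is a toy illustration assuming top-1 routing and random reassignment of overflow tokens; the function and variable names are mine, not from the paper, and the actual mechanism is more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def recycle_route(expert_ids, capacity, n_experts):
    """Toy sketch of recycle routing: a token assigned to an expert that is
    already at capacity is re-routed to a randomly chosen expert with spare
    room, instead of being dropped (information loss avoided)."""
    load = np.zeros(n_experts, dtype=int)
    final = np.empty_like(expert_ids)
    for i, e in enumerate(expert_ids):
        if load[e] < capacity:
            final[i] = e
        else:
            # expert overloaded: recycle the token to any expert with room left
            spare = np.flatnonzero(load < capacity)
            final[i] = rng.choice(spare)
        load[final[i]] += 1
    return final

tokens = rng.integers(0, 4, size=12)        # top-1 expert choice per token
routed = recycle_route(tokens, capacity=4, n_experts=4)
counts = np.bincount(routed, minlength=4)
print(counts)  # no expert exceeds its capacity of 4
```

The point of the sketch is the invariant: every token is still processed, but no expert's load ever exceeds its capacity, which is what keeps training workloads balanced.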

Hunyuan-Large also employs expert-specific learning rate scaling, applying distinct learning rates for shared and specialized experts. This approach helps preserve critical information while optimizing training, ensuring that both general and specialized capabilities are developed effectively.
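As a rough illustration of the idea, one common rule scales an expert's learning rate with the square root of the fraction of tokens it effectively processes. The sketch below assumes that rule; Hunyuan-Large's exact formula may differ:

```python
import math

def expert_lr(base_lr, total_tokens, expert_tokens):
    """Hypothetical sketch of expert-specific learning-rate scaling:
    scale the base rate by the square root of the fraction of tokens
    this expert actually sees per step (sqrt-of-batch-size rule)."""
    return base_lr * math.sqrt(expert_tokens / total_tokens)

base_lr = 4e-4
# shared expert: processes every token, so it keeps the base rate
print(expert_lr(base_lr, 16, 16))   # 0.0004
# specialized expert: with top-1 routing over 16 experts it sees ~1/16
# of the tokens, so its rate is scaled down by sqrt(1/16) = 1/4
print(expert_lr(base_lr, 16, 1))    # 0.0001
```

The intuition is that an expert seeing fewer tokens gets noisier gradient estimates, so a proportionally smaller step size helps preserve what it has learned.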

Hunyuan3D-1.0: Tencent has also introduced Hunyuan3D-1.0, a model supporting text- and image-conditioned 3D generation. Evaluations show that Hunyuan3D-1.0 achieved the highest user preference across five metrics compared to other open-source 3D methods.


XPeng Flexes AI Muscles with New Chip and Humanoid Robot

What’s New: Chinese EV upstart XPeng showcased an impressive array of AI-driven technologies for mobility and robotics at its AI Day last week. Highlights included XPeng's proprietary Turing AI chip for autonomous driving and the debut of Iron, a new humanoid robot.

Tell Me More: Following in Tesla’s footsteps with in-house car chips, XPeng introduced its Turing AI chip, featuring a 40-core processor capable of running models with up to 30 billion parameters. CEO He Xiaopeng said the chip’s computing power is equivalent to three of Nvidia’s previous-generation Orin X automotive chips. With this chip, XPeng plans to launch an L3+ smart driving feature — allowing the driver to take their hands off the wheel and eyes off the road under specific conditions — within 18 months. To further its ambitions in autonomous driving, XPeng also unveiled Ultra, an autonomous vehicle designed without a steering wheel, targeting the growing global robotaxi market.

Drawing comparisons to Tesla’s Optimus, XPeng also introduced Iron, an advanced humanoid AI robot with over 60 joints and 200 degrees of freedom. XPeng said the robot has already been deployed at its factories and stores.

Other major announcements included the Kunpeng Super Electric System, which features a range extender enabling vehicles to drive over 1,400 km. The system also offers ultra-fast charging, adding 1 km of range per second and reaching 80% capacity in just 12 minutes. The AI Battery Doctor actively monitors and optimizes battery health, extending battery life.

XPeng also presented XPeng AIOS, an in-car operating system integrating GPT-4o (and likely an equivalent LLM for the mainland China market). Powered by dual Turing AI chips, AIOS learns user habits and adapts in real-time, delivering a highly interactive and personalized experience. XPeng plans to roll out a chip upgrade program for existing customers to enhance driving and cockpit capabilities.

Why It Matters: XPeng is doubling down on AI and smart driving technologies to stand out in China’s highly competitive EV market. In the first ten months of 2024, XPeng delivered over 120,000 smart EVs, largely driven by the popularity of its $16,000 MONA M03 model. Despite this momentum, the company remains behind its 2024 sales target of 200,000 units.


OpenAI VP Advocates AI Safety in China Before Departure

What’s New: Lilian Weng, OpenAI’s Vice President of Research (Safety), announced her departure from OpenAI last Friday. Her exit is the latest in a string of departures by key safety researchers, raising questions about OpenAI’s commitment to safety research.

A graduate of China’s elite Peking University, Weng joined OpenAI in 2017 and played a key role in GPT-4’s development. Her famous formula, “Agent = LLM + memory + planning skills + tool use,” has been influential in shaping discussions around AI behavior. In October, she was tasked with leading OpenAI’s safety-focused preparedness team.

Shortly before announcing her departure, Weng made a surprise appearance at a science-focused event organized by Bilibili, China’s long-form video-streaming platform. She delivered a keynote on “AI Safety and Nurturing” — a speech that explored the concept of training AI much like we guide the next generation of humans. Weng emphasized the importance of shaping and educating AI to ensure that it serves humanity well while maintaining safety. You can watch the full speech here.

More on Speech: In her keynote, Weng spoke about AI safety and alignment, which are increasingly critical as AI becomes more advanced. She compared AI’s development to human growth: AI must be carefully guided to avoid biases and adversarial exploitation. She cited healthcare as an example, explaining how AI models can misdiagnose due to biased training data. For instance, many health datasets are male-focused, which can lead to inaccurate risk assessments for women. The need for comprehensive, unbiased data was a central point in her approach to AI safety.

Weng also explained how Reinforcement Learning from Human Feedback (RLHF) is key to refining AI behavior. She likened RLHF to training a dog: rewarding the right behavior helps the model learn desired actions. By continually receiving feedback, the AI adjusts to better align with human values and expectations. Reinforcement learning, she noted, allows for finely tuned responses and even self-assessment, such as in models using the Constitutional AI framework, where the AI reviews its own outputs for adherence to ethical standards.
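The "rewarding the right behavior" step is typically implemented by training a reward model on pairwise human preferences. The sketch below shows a generic Bradley-Terry preference loss of the kind commonly used for this, not OpenAI's exact objective:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry style loss for RLHF reward-model training: the loss is
    small when the model scores the human-preferred response above the
    rejected one, and large when the ranking is inverted."""
    # -log(sigmoid(reward gap)): a generic form, not OpenAI's exact objective
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# correct ranking (chosen scored higher): small loss
print(round(preference_loss(2.0, 0.0), 3))   # 0.127
# inverted ranking (rejected scored higher): large loss
print(round(preference_loss(0.0, 2.0), 3))   # 2.127
```

Minimizing this loss over many human comparisons is what turns scattered "thumbs up" feedback into a reward signal the policy can then be optimized against.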

Weng noted that in OpenAI’s recent o1-preview model, Chain-of-Thought (CoT) reasoning was implemented to enhance reliability and resistance to jailbreak attacks. Scalable oversight, combining automated tools with human supervision, helps guide AI growth effectively. Public studies showed that AI-assisted annotators identified about 50% more issues in text summaries than those working without AI support.


Weekly News Roundup

  • TSMC has informed its Chinese customers that it will no longer manufacture advanced AI chips for them, effective Monday, November 11, 2024. This suspension specifically applies to AI chips using process nodes of 7 nanometers or smaller, and chips intended for high-performance computing, GPUs, and AI-related applications. (Financial Times)
  • Wang Huiwen, co-founder of Meituan, has returned to the company to lead its AI efforts. Wang will be heading the GN06 team, which is dedicated to exploring new AI-related opportunities. (SCMP)
  • Baidu is reportedly set to unveil its AI-powered smart glasses designed to compete with Meta’s popular Ray-Ban smart glasses. These new wearables are expected to be showcased at the upcoming Baidu World event in Shanghai. (Bloomberg)
  • DeepRoute.ai, a Chinese autonomous driving technology startup, has secured $100 million in new funding from an unnamed automaker to accelerate the adoption of its smart driving systems in China. (Reuters)
  • Zhipu AI, one of the leading AI startups in China, has launched a new $211 million venture capital fund aimed at investing in early-stage companies developing LLMs and other AI technologies. (SCMP)


Trending Research

Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination

  • Researchers explore the problem of data contamination in multimodal LLMs. The authors introduce a framework, MM-Detect, designed to identify contamination in these models. Their findings indicate that both open-source and proprietary MLLMs exhibit various levels of contamination, which can significantly impact their performance and generalization ability. The paper also examines whether contamination originates from the pre-training phase of the LLMs used by MLLMs or from the fine-tuning phase of MLLMs.

LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning

  • LLaMA-Berry is a new framework for enhancing the mathematical reasoning capabilities of LLMs. The framework leverages a combination of Monte Carlo Tree Search (MCTS), iterative Self-Refine, and a novel Pairwise Preference Reward Model (PPRM). PPRM is designed to evaluate the quality of solutions based on pairwise comparisons, addressing the challenges of scoring variability and non-independent distributions in mathematical reasoning tasks. This approach significantly outperforms baseline methods on various benchmarks, demonstrating its effectiveness in improving LLMs' reasoning abilities, especially on complex Olympiad-level problems. The paper also includes a detailed explanation of the Berry-Tree inference framework, a system designed to improve inference efficiency for large-scale reasoning tasks.

