Whatever your AI app, there are TOPS for that – from Intel
At Computex, the annual PC trade show held last week in Taiwan, processor suppliers all showed off new specialized accelerators for generative AI called NPUs, or neural processing units.
For its part, Intel unveiled an NPU capable of up to 48 trillion operations per second, or TOPS, the de facto industry measure of AI capability. That comfortably clears the 40 TOPS Microsoft requires for its new on-device generative AI platform, Copilot+.
Performance for Intel’s NPU, which is built into its much-anticipated Lunar Lake laptop processor, lands squarely between Qualcomm’s and AMD’s NPUs, which closely bookend it at 45 and 50 peak TOPS, respectively.
But Intel didn’t stop there.
Total System Performance
On the show floor, during its keynote and at a two-day intensive the company hosted for press and analysts in Taipei the prior week, Intel drove home that AI requires CPUs and graphics accelerators as well as NPUs.
“Some people say that the NPU is the only thing you need,” Intel CEO Pat Gelsinger said in his keynote. “Simply put, that’s not true.”
Gelsinger told a packed hall at Computex that only about 30 percent of Intel’s software partners are exclusively tapping the NPU for generative AI tasks. That’s why the company is focusing on overall system performance.
Lunar Lake’s overall system TOPS is the reason it drew so much attention at Computex, which the PC industry transformed into its coming-out party for AI laptops this year.
For its part, Lunar Lake boasts total peak system TOPS of 120. That’s significantly higher than the 75 peak TOPS of Qualcomm’s X Elite; Qualcomm is the only one of the three suppliers whose latest AI PC processor is already shipping. AMD’s and Intel’s chips will become available in the coming months.
AMD’s 3rd-generation Ryzen AI processors aren’t directly comparable, as they appear to be designed for a different class of laptops than the slim, power-optimized systems that Intel and Qualcomm are targeting.
Optimizing AI applications
Of course, you can run any AI task on CPUs and GPUs as well as NPUs. But different jobs are best served by different members of the processor triumvirate. It makes the most sense, for example, to hand the CPU simple tasks that need to be handled quickly. The GPU is best reserved for heavy-lifting workloads that crop up intermittently. And the highly efficient, low-power NPU is ideal for tasks that run continuously in the background.
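For a concrete sense of how an application might target each of the three, here is a minimal sketch using Intel's OpenVINO toolkit, which lets developers pick a CPU, GPU or NPU device target for the same model. The model file name is a placeholder, and the mapping of workloads to devices simply mirrors the division of labor described above rather than any official guidance.

import openvino as ov

core = ov.Core()
# Lists the compute devices OpenVINO can see, e.g. ['CPU', 'GPU', 'NPU'] on an AI PC
print(core.available_devices)

# Placeholder model; substitute any OpenVINO IR or ONNX file you have on hand.
model = core.read_model("model.xml")

# Quick, lightweight inference: keep it on the CPU.
cpu_model = core.compile_model(model, device_name="CPU")

# Bursty, heavy workloads such as image or video models: send them to the GPU.
gpu_model = core.compile_model(model, device_name="GPU")

# Always-on background tasks: target the low-power NPU.
npu_model = core.compile_model(model, device_name="NPU")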
Copilot+, Microsoft’s new on-PC generative AI platform, includes features that continuously monitor activity, which is why the tech giant has focused on NPU performance. The platform’s Recall feature, in particular, is always tracking activity so it can instantly summon anything you’ve done, read or viewed based on any prompt, even a request as tangential as “pull up the hotel review I read the other day that mentioned strange odors in the hallway.”
Over time, Intel believes that developers will continue to rely on GPUs for image recognition, video processing and other more demanding AI tasks. But emerging always-on applications will slowly shift development efforts toward the energy-efficient NPU and away from the CPU.
Performance leap
Lunar Lake is ready for developers’ evolving AI focus. Its NPU is 4.4 times more powerful than the one inside Meteor Lake, and its GPU offers 3.7 times the AI performance. The Lunar Lake CPU, meanwhile, offers the same peak TOPS as its predecessor.
It will be interesting to benchmark actual systems once they become available. In the meantime, Lunar Lake may emerge this selling season as the laptop processor most tailored to generative AI.