Copilot AI PCs: not quite there yet.
Photo credit: Microsoft.com


Part of my job for the past 25 years has involved testing and being an early adopter of the latest IT technologies. I absolutely love it, and I must admit I am thrilled about the particular moment the IT industry is living through, led by AI. We are witnessing a paradigm shift much like the one we experienced back in the day with the launch of Windows 95. It is a clear leap forward, and we all know the PC industry urgently needed a wake-up call to boost sales after several years wandering in the wilderness with disappointing growth rates. But I am afraid consumer AI was pushed out in a rush for that very reason. Let me explain.

On my latest trips to the United States this year, I had the chance to attend several PC conventions, such as HP Amplify in Las Vegas and other meetings here and there in the Bay Area, and witness first-hand the revolution to come. I paid special attention to the in-person speeches of Jensen Huang (Nvidia), Cristiano Amon (Qualcomm), Enrique Lores (HP Inc.), Pat Gelsinger (Intel), Lisa Su (AMD) and Satya Nadella (Microsoft), to mention a few. The excitement was there, and one could feel it was legitimate, backed by truly solid facts; for the first time in years there was genuine enthusiasm about a real change on the PC industry's horizon.

Enrique Lores (HP Inc.) with Jensen Huang (Nvidia) at the HP Amplify Conference in Las Vegas (Photo credit: Pedro de Castro)

In Jensen Huang's words, the computer has been reinvented for the first time since Windows 95, with the marginal cost of computing falling steadily over these last 30 years, as we all know. Now, for the first time, a computer can potentially understand and interpret the meaning of data, not just the patterns. On top of that, one of the biggest paradigm shifts is that AI will go local, on device, instead of relying on the massive resources of the cloud or edge: this should avoid bottlenecks and reduce the impact on the environment, along with a more efficient and personal use of the tool.

As I wrote in this 2016 post, the Wintel hegemony could be coming to an end, and it looks really surprising to me that this caught the leader of the CPU market, Intel, off guard. Intel was clearly not ready for what was coming, especially in regard to NPU implementation and AI capabilities for its Core and Xeon families, while lagging behind at the integration level, especially below 3 nm, where TSMC has total dominance. As a long-standing Intel customer (and fan) since my first 386DX computer, I am really sad about the big crisis they are facing right now, which involves thousands of layoffs and the restructuring of their business and foundries, due mainly to the sadly well-known hardware and instability issues affecting their high-end 13th- and 14th-generation Core Raptor Lake CPUs when operating at performance voltages. It is a situation that is helping their competitors, namely AMD and Qualcomm, gain market share rapidly.

Qualcomm's CEO presented undoubtedly great news: a real implementation using (finally!) the ARM architecture with their own Snapdragon-based solution, ready to ship this summer and fully compatible with the Copilot+ stack presented by Microsoft, although that stack is based on ChatGPT. This point worries me: it looks like, regarding generative language AI, neither Microsoft nor Apple were ready with their own solutions. To me, it is a red flag in terms of privacy and bias as well, with OpenAI behind it. In any case, Cristiano Amon, with the permission of Jensen Huang on datacenters and GPU leadership, took the lead here and has already shipped a "Built for AI" CPU that includes an integrated NPU (the Hexagon) capable of running generative models on-device, meeting the requirements of the Copilot+ stack: a minimum of 16 GB of RAM and 45 TOPS, the reference bar set for local generative AI. For those of us who have worked with Apple products for a long time (MacBook, iPad, iPhone), our devices have integrated Neural Engines for years, but this is the first time a PC laptop includes one like Qualcomm's.
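Those two minimums (16 GB of RAM, 45 TOPS) can be captured in a tiny sketch. The thresholds come from the figures above; the helper function and its names are my own illustration, not any real Microsoft or Qualcomm API:

```python
# Sketch: check a machine's specs against the Copilot+ minimums quoted above.
# Thresholds (16 GB RAM, 45 TOPS NPU) are the baseline stated in the text;
# the function itself is illustrative, not a real vendor API.

COPILOT_PLUS_MIN_RAM_GB = 16
COPILOT_PLUS_MIN_NPU_TOPS = 45

def meets_copilot_plus(ram_gb: float, npu_tops: float) -> bool:
    """Return True if the given specs meet the Copilot+ baseline."""
    return ram_gb >= COPILOT_PLUS_MIN_RAM_GB and npu_tops >= COPILOT_PLUS_MIN_NPU_TOPS

# A Snapdragon X Elite laptop (16 GB RAM, ~45 TOPS Hexagon NPU) qualifies;
# a machine with plenty of RAM but no NPU does not.
print(meets_copilot_plus(16, 45))  # True
print(meets_copilot_plus(32, 0))   # False
```

Note that the NPU TOPS figure, not raw CPU or GPU power, is the gating spec: an older laptop with a fast CPU and no NPU falls outside the baseline.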


Qualcomm's CEO, Cristiano Amon, during his speech in Las Vegas earlier this year (Photo credit: Pedro de Castro)

I had the opportunity to get my hands on a Snapdragon X Elite machine from HP, an OmniBook Ultra with 16 GB of DDR5 RAM, a 1 TB SSD, a Qualcomm Adreno GPU, Windows 11 Home, a nice AI sticker and a dedicated keyboard key. Looking beyond pure horsepower (I found it very similar to my M2 MacBook Pro or my i7 desktop PC for regular tasks), I wanted to test the Copilot stack and the NPU capabilities, along with other relevant factors such as battery life. In that regard, Qualcomm is very efficient, and the battery delivers outstanding performance. The computer feels very solid, with a sleek, stylish design, and is very comfortable to work with. But the first disappointment came quite fast: Copilot does not look quite finished or fully implemented yet. It mainly operates at the cloud level, and some of the features introduced in the presentations are not ready, like Recall, which would be a game-changer in my opinion.

In any case, I believe that at this early stage Copilot+ somehow relies on third-party developers taking advantage of on-device NPU processing. So you are getting an AI PC based on a promise, not on what can be done today. It is cool to apply effects to your webcam feed in Windows Studio using only your NPU, create fancy images of a cat in disguise on a rocket, or generate some text to help you with your daily tasks. But at this point the implementation is, in my humble opinion, very weak and dull. I also wanted to test one of Copilot+'s top features, Recall, which should allow you to go back to previous states of your work with plain-language queries offline, but it was not available yet, apparently due to privacy and security concerns... Bummer.

Finally, I have my doubts, probably due to my own ignorance, about running and training large models that go beyond ChatGPT's capabilities. I have recently been working on predictive models that require more than 200 GB of RAM and far more processing capability, and I am genuinely curious about how this will be implemented locally on a consumer or SMB laptop. I am aware that we are only at a very early stage and that the potential is huge, but based on my past experience with disruptive changes in the IT industry, I clearly see that the adoption and implementation curve for local AI on consumer and SMB PCs will only play out two or three years from now at the earliest.
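To put rough numbers on that doubt, here is a back-of-the-envelope sketch (my own arithmetic, not a vendor figure) of the RAM needed just to hold a model's weights at different numeric precisions. It shows why a 16 GB laptop can host a small quantized model but nothing in the 200 GB class:

```python
# Approximate memory footprint for holding model weights in RAM.
# Standard bytes-per-parameter figures: fp32 = 4, fp16 = 2, int8 = 1, int4 = 0.5.
# Activations and the KV cache add more on top; this is only the weights.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(num_params: float, precision: str) -> float:
    """Approximate gigabytes of RAM needed just for the weights."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# A 7B-parameter model fits a 16 GB laptop at fp16 (~14 GB) or comfortably
# at int4 (~3.5 GB); a 70B model at fp16 (~140 GB) is datacenter territory.
for params, label in [(7e9, "7B"), (70e9, "70B")]:
    for prec in ("fp16", "int4"):
        print(f"{label} @ {prec}: ~{weights_gb(params, prec):.1f} GB")
```

The arithmetic also explains why aggressive quantization (int8, int4) is the main lever for bringing generative models onto NPU-equipped consumer hardware.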

IMHO, the big revolution these days will happen in datacenters and HPC workstations, as the vast majority of the current infrastructure is rapidly becoming obsolete in the face of AI and accelerated computing. There is a big playground here for Nvidia to keep leading this paradigm shift with Grace Hopper and the new Blackwell family, accelerated computing and tensor-core GPUs. AMD, with its super-performing and amazing Threadripper CPUs, has a lot to say in that regard too.

 

Pedro de Castro,

July 2024, San Francisco.
