Enterprise generative AI use cases, applications about to surge

If 2023 was the year of generative AI pilots, 2024 will be about moving to production and 2025 will likely be warp speed. Why? The generative AI building blocks are falling into place.

In recent weeks, three mileposts have highlighted where enterprise generative AI is headed.

  • Nvidia GTC highlighted how the software building blocks for generative AI are in place. The company launched Blackwell GPUs, but Nvidia Inference Microservices (NIMs) will ultimately be just as important. NIMs are pre-trained AI models packaged and optimized to run across the CUDA installed base, and SAP, ServiceNow, Cohesity, CrowdStrike, Snowflake, NetApp, Dell, Adobe, and a bevy of others are rallying behind them. Nvidia also announced AI Enterprise 5.0, which will include NIMs along with capabilities that speed up development, enable private LLMs, and let developers create copilots and generative AI applications quickly with API calls (a hypothetical sketch of such a call follows this list).
  • Palantir held its AIPCon meetup, where customers outlined how they delivered value quickly. The use cases ranged from supply chain to defense to logistics to smarter workflows among field workers. Palantir has been using its AI Platform (AIP) to land, generate value, and then expand.
  • C3 AI held its Transform event, where Baker Hughes highlighted how it used C3 AI's platform to optimize sourcing and inventory, along with value delivered to the US Department of Defense, Con Edison, GSK, and others. C3 AI's formula rhymes with Palantir's approach.
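
For a sense of what "generative AI applications with API calls" looks like in practice, here is a minimal sketch of calling a NIM-style microservice, assuming a locally hosted container that exposes an OpenAI-compatible chat completions endpoint. The URL, model name, and prompt are illustrative placeholders rather than details from Nvidia's announcements.

```python
# Hypothetical sketch: query a locally hosted NIM-style microservice that
# exposes an OpenAI-compatible chat completions endpoint. The endpoint URL,
# model identifier, and prompt below are placeholder assumptions.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint

payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user",
         "content": "Summarize the open purchase orders most at risk of delay."}
    ],
    "max_tokens": 256,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```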

Taken as a whole, the generative AI use cases today are delivering value, but won't set the technology world on its ear. Frankly, some of the use cases sit at the intersection of AI, process mining, and data science and you'd be hard-pressed to declare the implementations as solely artificial intelligence.

Nvidia CEO Jensen Huang's keynote highlighted where generative AI use cases are headed.

The sheer pull of Nvidia's ecosystem--AWS, Microsoft Azure, Google Cloud Platform, Oracle Cloud Infrastructure, data platforms such as Databricks and Snowflake, and enterprise software vendors--will put NIMs on the map. AI Enterprise 5.0 will be ubiquitous.

And with AI Enterprise priced at $4,500 per GPU, there's a big market opportunity for Nvidia, but nothing that breaks the bank for customers. The cash register for Nvidia is still the GPU. That said, the software math for Nvidia is compelling--especially if Nvidia has 1 million GPUs in the field attached to AI Enterprise.
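
The back-of-the-envelope math behind that point, using only the figures cited above (whether the $4,500 per-GPU license is a one-time or recurring fee isn't specified here, so the run-rate framing is an assumption):

```python
# Rough sizing of Nvidia's AI Enterprise software opportunity, using the
# figures in the paragraph above. Treating the $4,500 per-GPU license as a
# recurring fee is an assumption, not a detail from the article.
gpus_attached = 1_000_000   # GPUs in the field attached to AI Enterprise
price_per_gpu_usd = 4_500   # AI Enterprise list price per GPU

software_opportunity = gpus_attached * price_per_gpu_usd
print(f"${software_opportunity / 1e9:.1f}B")  # -> $4.5B
```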

Related: Nvidia GTC 2024 Is The Davos of AI | Will AI Force Centralized Scarcity Or Create Freedom With Decentralized Abundance? | AI is Changing Cloud Workloads, Here's How CIOs Can Prepare

Simply put, Nvidia is flooding the zone for generative AI use cases. Speaking to industry analysts, Huang was asked about enterprise use cases. He said:

"We have two avenues to take AI into enterprise. One avenue is people build up applications in the IT department. We have business application developers writing applications for forecasting and supply chain management. We have to create these AI modules and AI libraries for them. Business applications are just AI applications. Somebody's going to go off and build.

The other avenue is through the enterprise IT platforms, and I think that they're all sitting on a goldmine. They created tools and you can now create AI copilots to go use those tools. You're gonna have SAP create copilots and they're gonna get better and better. Instead of hiring 100 business application developers, you have 100 and another 500 that are APIs."

Platforms appear to be the primary vendor goal at the moment. Palantir said on its fourth quarter earnings conference call that the company has covered nearly 200 use cases coming from its AIP Bootcamps. Palantir CTO Shyam Sankar said AIP is enabling the company "to integrate so many types of new data, video conferences, incident response calls, Slack rooms, PDFs, images, video, audio, and exploit them through the power of LLMs and ontology."

Related: Palantir posts strong Q4, sees enterprise traction in US | Palantir's commercial business scales with help of AI boot camps

Sankar said the real data that defines a process lives in conversations rather than in the enterprise system. "What's in the enterprise process system is a lousy latent representation of this reality," said Sankar. "With AI and LLMs, you can't think your way through it. You have to get your hands dirty and work in anger to get use cases into production. In AIP, we have built a platform to deliver proof, not just proofs of concept, to our customers."

C3 AI CEO Tom Siebel said during one of his Transform 2024 talks that if you fast-forward three years, the entire enterprise application stack will be transformed. AI applications will be predictive and prescriptive and will save billions of dollars.

"Let's fast forward three years March 2027. No CEO in the world will be able to withstand a board meeting where he or she was standing up without reporting what customer churn was, what device failure was, and the level of fraud. When the tools are in place to prevent the failures, prevent the customer churn and make sure you can deliver the products on time it's big," said Siebel.

As of Transform 2024, C3 AI has deployed more than 47 generative AI use cases across multiple industries.

Related: How Baker Hughes used AI, LLMs for ESG materiality assessments | C3 AI launches domain-specific generative AI models, targets industries | C3.ai's next move: Convert generative AI pilots to production deals

The bet here is that we're going to see a lot more enterprise use cases soon, but the real business value will sit at the intersection of generative AI, process transformation, automation, scale and speed. It's also worth noting that enterprises are allocating money to generative AI. Deloitte's first-quarter CFO Signals survey found that 64% of North American CFOs are looking to adopt generative AI, with a focus on IT, business operations, customer service, finance, and sales and marketing.

.Lumen's headset for the visually impaired

The best use cases are sometimes so obvious. During Nvidia GTC 2024, Cornel Amariei, CEO of .Lumen, walked through a headset for the visually impaired that aims to scale better than a guide dog, using sensors and AI technologies that are also used in cars.

"We have today over 300 million visually impaired people, and this number is increasing greatly. But if you check what solutions are out there for them, there are only two solutions for their mobility, and they're 1,000s of years old--a guide dog and the white cane," explained Amariei.

Amariei explained how .Lumen's headset includes spatial navigation AI to understand the pedestrian world the same way a self-driving car would. The headset also includes a non-visual feedback interface that uses haptics to guide the blind.

"Rather than pulling your hand as a guide note, we actually pull your head," he explained. "We tested with over 300 blind individuals, and I would argue it's actually more intuitive than a guide dog pulling your hand. It's all possible because of the latest in self-driving, robotics and artificial intelligence powered by Nvidia."

The technology behind the headset includes two RGB cameras, two depth cameras, infrared sensors, and an inertial measurement unit with the ability to use GPS in some use cases. The data is processed in the headset to run machine learning models and computer vision flows.
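
As a rough illustration of the kind of on-device loop described above--fuse the sensor frames, pick a safe walking direction, and translate it into a haptic cue--here is a minimal sketch. The class names, the toy obstacle-clearance logic, and the assumed 90-degree field of view are illustrative assumptions; .Lumen has not published its implementation.

```python
# Hypothetical sketch of an on-device perception-to-haptics loop for a
# navigation headset. All names and the toy logic are assumptions for
# illustration only.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class SensorFrame:
    depth_columns: List[float]          # min obstacle distance per image column (meters)
    imu_heading_deg: float              # current heading from the inertial measurement unit
    gps: Optional[Tuple[float, float]]  # optional GPS fix for outdoor routes


def perceive(frame: SensorFrame) -> float:
    """Pick the most open direction ahead and return a heading correction
    in degrees (negative = steer left, positive = steer right)."""
    n = len(frame.depth_columns)
    clearest = max(range(n), key=lambda i: frame.depth_columns[i])
    field_of_view_deg = 90.0  # assumed horizontal FOV of the depth cameras
    return (clearest - n // 2) / n * field_of_view_deg


def haptic_cue(correction_deg: float) -> str:
    """Map a heading correction onto the head-worn haptic actuators."""
    if abs(correction_deg) < 5:
        return "pulse: straight ahead"
    side = "right" if correction_deg > 0 else "left"
    return f"pulse: steer {side} ({abs(correction_deg):.0f} deg)"


# One toy frame: an obstacle directly ahead, clear space to the right.
frame = SensorFrame(depth_columns=[2.0, 1.5, 0.4, 3.5, 4.0],
                    imu_heading_deg=0.0, gps=None)
print(haptic_cue(perceive(frame)))  # -> pulse: steer right (36 deg)
```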

Amariei added that .Lumen is optimizing for battery life and other features. He said that the headset can be used with a white cane or guide dog as well as by itself. Approval from the Food and Drug Administration is expected next year, and the device will be available in the second half of 2024.


From our underwriter:

  • Hitachi Vantara said it is collaborating with Nvidia to develop a portfolio called Hitachi iQ, which will be aimed at industry-specific AI use cases. Hitachi iQ systems will be built on Nvidia technology, undergo Nvidia DGX BasePOD certification, and feature Nvidia DGX H100 systems and Nvidia AI Enterprise software. These systems will also include Hitachi Content Software for File and other storage technologies. Hitachi Vantara will also form a Generative AI Center of Excellence as part of a broader relationship with Nvidia. See the news release and Hitachi iQ overview.


NOTEBOOK

🫱🏽🫲🏾 ServiceNow had a busy week with two tuck-in acquisitions and the latest release of its Now Platform. Dion Hinchcliffe covered ServiceNow’s Strategic Portfolio Management updates.

⚙️ Dell Technologies is planning an armada of AI systems with Nvidia’s reference architecture and latest processors. HPE also outlined systems.

🥁 Cisco closed its $28 billion purchase of Splunk and outlined what will be a steady drumbeat of product integrations in the months ahead.

🫡 Microsoft is shoring up its consumer Copilot efforts with the addition of Mustafa Suleyman and Karén Simonyan to lead a new group called Microsoft AI. Suleyman and Simonyan were two of the three co-founders of Inflection.ai. Suleyman was also a co-founder of Google's DeepMind.


Like getting our weekly round-up? Subscribe to Constellation Insights for consistent news and analysis on technology trends, earnings, enterprise updates and more.

Want to work with Constellation analysts? Our Sales team would love to get in touch!
