Why growth in Digital Infra has only just begun...

100% written by human (image created by AI :))…

Back in 2020 I wrote a LinkedIn post proclaiming that the world's funds were about to realise digital infrastructure was as important as ports, mines and gas mains.  Back then only a handful of infra funds were focused on investing in digital infra businesses, with DigitalBridge being one of the few attending conferences.  Four years later, it sometimes feels like there are more bankers at data centre conferences than operators.  Digital infrastructure has arrived.

AI is driving demand for data centres through the roof, along with demand for GPUs and energy.  Getting access to land and power is more important than getting a customer, because without the former you can't get the latter.  There is a whole series I could write on what is going to happen with data centres, power, land, renewable energy, upcoming load-shedding and the regulatory response, but that's for another day/week/year.  Leaders such as Digital Realty, Equinix, Switch, NEXTDC, AirTrunk, EdgeConneX, CyrusOne, Cyxtera Technologies and DataBank are announcing expansion after expansion.  But here is the thing… it's going to keep going nuts for quite a while yet.

The same applies to chip development.  NVIDIA is going to keep making insane profits, delivering insane growth for some time yet.  It's going to take a while before chips from existing competitors such as AMD, Arm and Intel Corporation, or upcoming chip owners such as Google or Amazon Web Services (AWS), deliver the scale of what's needed, let alone the software platform needed to effectively juice said chipsets.  NVIDIA's CUDA and the dev community behind it is almost as important as the chip itself.  There is a whole series I could write on this and what is going to happen with chips, open-source and AI, but that's also for another day/week/year.

The same applies to connectivity. All this data needs to be accessed, stored, processed/trained, re-stored, added to, reprocessed and distributed. Connectivity is about to enter the hyperscale world. Networks interconnecting data sources and hyperscale AI processing centres will need to be incredibly flexible, massively scalable, insanely burstable and 100% software definable. There is a whole series I could write on how hyperscale SDN, with the likes of Megaport fusing with long-haul networks globally to make things instant and seamless almost regardless of capacity, is going to reshape the global backbone as we see it today. I could also write a whole series on how critical long-haul terrestrial fiber and, even more so, submarine cables are to underpinning the insatiable connectivity requirements to sustain this growth, and why they are going to be even more critical in the next 3 years, but that's also for another day/week/year.

But here is the thing people don't seem to grasp about this demand: why it's going to remain insatiable for the next 3-5 years, and why getting access to as much compute, chips, data, scientists, energy and data centre space as possible is vital.  I can sum it up in one word.  That word is arbitrage.

There is a fight going on right now and that fight is to be the leader in AI.  The company with the best AI platforms will win - initially in their areas of domain knowledge/application, but later - all the things.  The more data (and quality data) you can get access to, the better your model.  The more users you have, the better the Reinforcement Learning from Human Feedback (RLHF), which (along with a tonne of compute to retrain and update models) also aids in inference effectiveness.  This keeps evolving.  Whether it's making bridges or making buildings, AI is already permeating our lives to make us more effective.  Now back to arbitrage.

If the effective size of the prize for having the best LLM on the planet is $10T in market capitalisation over the next 5 years, then what's the right level of investment to get your piece of that pie?  5%?  10%?  20%?  Well, that's the arbitrage.  NVIDIA sold US$60B of chipsets last year.  That is less than 1% of the AI prize.  Let that sink in.  If the world needs 50GW of data centres over the next 3-5 years, that will cost roughly US$300B to build.  That's about 3% of the prize.
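To put that arbitrage arithmetic in concrete terms, here is a minimal back-of-the-envelope sketch using the figures above (the US$10T prize, US$60B of chip sales and US$300B of data centre build cost are the post's assumptions, not audited market data):

```python
# Back-of-the-envelope arbitrage sums using the figures quoted in this post.
# All inputs are assumptions from the text above, not audited market data.

PRIZE_USD = 10e12             # assumed prize for "winning" AI: US$10T of market cap
NVIDIA_CHIP_SALES_USD = 60e9  # NVIDIA chipset sales quoted above: US$60B
DC_BUILD_COST_USD = 300e9     # assumed cost to build 50GW of data centres: US$300B

chip_share = NVIDIA_CHIP_SALES_USD / PRIZE_USD  # share of the prize spent on chips
dc_share = DC_BUILD_COST_USD / PRIZE_USD        # share of the prize spent on data centres

print(f"Chip spend vs prize:        {chip_share:.1%}")  # -> 0.6%
print(f"Data centre build vs prize: {dc_share:.1%}")    # -> 3.0%
```

On those numbers, even a 5% "right level of investment" sits well above what is actually being spent on chips and data centres today - which is the gap the word arbitrage is pointing at.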

No one is going to complain about GPU or data centre pricing for the next 3 years because the cost of not investing will be a 10-50x destruction of your market capitalisation.  If you don’t keep up, you will be left behind.  Just ask Intel…

Ben Edmond

CEO & Founder @ Connectbase | Digital Ecosystem Builder, Marketplace Maker

9mo

This is an excellent summarization of the market reality, and I am completely onboard with your position, Bevan Slattery. Thanks for sharing this.

Patrick Shutt

Co-Founder & CEO Resolute CS

9mo

Well written and spot on.

Joel Mikkelsen

People First Sales Leader :: NetSuite Specialist :: xP&A Advocate :: Technology Enthusiast :: Provider of Dad Jokes

9mo

TSMC should probably be building more than one additional foundry in Japan, and probably start to do the same in the US. Taiwan is a pressure cooker, and who knows when it will blow up. Keen to know your thoughts on whether there is even a small percentage of the necessary chip-making capacity to fuel the processing requirements for infrastructure over the next 5 years…

Mutaz Zaidan

Cognitive Networks Connectivity Advisor Enabling Sound & Sustainable Digital Transformation | Smart Cities & Digital Transformation Infrastructure | Optical Networks, Edge, Core, Terrestrial & Submarine

10mo

Very insightful, Bevan Slattery. Thank you for sharing.

There will absolutely be demand, and it's going to be either bimodal or multimodal. There are economies of scale at the hyperscalers that would be super, super hard to compete with. During my time at Google, even before this AI rage, Google developed the TPU chip that continues to provide an advantage in learning cost efficiencies. For general models it's going to be extremely hard to beat the scale and investment advantages of hyperscalers. There are still huge opportunities for smaller players, as the benefits of scale come with disadvantages in terms of flexibility and agility. Smaller operators can iterate quicker, take advantage of market scale and position for custom workloads. Net net, it's a huge opportunity, especially for the power companies.
