The Biggest Limitation of AI in 2024
AI systems are trained on data, then use that knowledge to infer or make predictions from new inputs. Inference costs are rising quickly.


I remember sitting down with older members of my family at the dinner table and relaying their prompts to ChatGPT. Though they had heard all about it, they hadn't taken the time to get firsthand experience. Their collective first impression was, "Wow... that's scary. It's dangerous." I felt it was an extremely understandable initial reaction. The surprising thing was that even those who were not tech savvy understood the implications ChatGPT would have on this world.

I remember a few months before that, when I first came across it myself. What initially struck me wasn't the capabilities or the possibilities, but the simplicity of the UI. It was, in my mind, one of the strongest examples of a complex, powerful technology made extremely approachable to the non-technical user. AI and LLMs have existed for a long time, stretching back through previous eras of technology. And although GPT-3 may have been the best model from OpenAI, in my mind the accuracy or capability of the model was not the reason it went viral.

The technical barrier to entry was gone; it was finally accessible to the masses.

It was controversial at first, and it's controversial now. It's nearly impossible to talk about ChatGPT and the implications of AI without very quickly finding yourself discussing politics, ethics, or other sensitive topics. The topic itself has a tendency to make people uncomfortable, and the rate of change is also accelerating (it's been too long since calculus class; does that make it the second derivative?).

"Well, if it was impressive when it was first released in 2022 and things are accelerating, shouldn't we get to AGI in no time?" I'd say not so fast. Forget AGI, even for the foreseeable future (in 2024), AI is almost entirely unusable to a large part of the data/engineering world due to it's lack of real time stream processing ability. To put another way, in AWS terms, there are use cases where you would use AWS Redshift, and other use cases where you would use AWS OpenSearch. I believe AI is a lot closer to being widely adopted in [Redshift / BI / Data Analytics] than in [OpenSearch / Monitoring / Observability].

There is an extremely large number of problems that need to be solved by analyzing machine data in real time. AI in 2024 does not have this capability.

To illustrate: there is an iconic image of Bill Gates sitting on a very tall stack of paper, showing the amount of data that could be stored on a single CD-ROM. That was 30 years ago, back in 1994.

How does that 700MB CD-ROM compare to the amount of data being created today?

Undeniably brilliant marketing, but it's also a great example of how fast technology has changed in 30 years. Today, across Edge Delta's customers and the engineering teams using the platform, even a single one of their systems can emit machine data at a rate of 1,000x the amount of data in that picture, every second. It is obvious that no human could read the amount of data Bill is sitting on, much less 1,000x that amount per second.

Yet, the interesting thing is that AI today can't either.
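
To put rough numbers on it, here is a quick back-of-the-envelope calculation using the figures above. The bytes-per-token and model ingest rate are purely illustrative assumptions, not measurements of any specific model:

```python
# Back-of-the-envelope math using the article's figures.
# BYTES_PER_TOKEN and MODEL_TOKENS_PER_SEC are illustrative assumptions.

CD_ROM_BYTES = 700 * 1024**2                   # ~700 MB, the stack of paper in the photo
STREAM_BYTES_PER_SEC = 1_000 * CD_ROM_BYTES    # "1,000x that picture, every second"

BYTES_PER_TOKEN = 4                            # rough assumption for log/machine data
MODEL_TOKENS_PER_SEC = 50_000                  # generous assumed LLM ingest rate

stream_tokens_per_sec = STREAM_BYTES_PER_SEC / BYTES_PER_TOKEN
shortfall = stream_tokens_per_sec / MODEL_TOKENS_PER_SEC

print(f"Stream rate: ~{STREAM_BYTES_PER_SEC / 1e9:.0f} GB/s "
      f"(~{stream_tokens_per_sec:,.0f} tokens/s)")
print(f"Assumed model ingest: {MODEL_TOKENS_PER_SEC:,} tokens/s")
print(f"Gap: the stream is ~{shortfall:,.0f}x faster than the model can read")
```

Even with generous assumptions, the stream outruns the model by several orders of magnitude, and that gap is what the rest of this article is about.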

How is there so much data? Well, logs, metrics, and traces are essential types of machine data that provide valuable insights into the behavior, performance, and health of these mission-critical systems. It used to be called monitoring; now it's also called observability (and the difference between the two is constantly argued). By collecting, analyzing, and monitoring these types of data sets, organizations can optimize performance, gain actionable insights, troubleshoot issues, and maintain the availability and health of their applications and infrastructure.

So AI can't handle Observability data?

When it comes to AI, at a high level there are two big concepts that make it tick: training and inference.

First is training. AI is similar to humans in that it can constantly be learning and adapting, although it is of course orders of magnitude faster at this than we are. As you feed it more data, you're training it, and with each round of practice, AI gets better at spotting patterns, understanding things, and making predictions.

Then there's inference – that's when AI puts what it's learned into action. It takes the training and knowledge it has gained and uses it to turn inputs or prompts into an output that may be valuable to the prompter (in this case, an engineer).
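
As a toy illustration of the two phases, here is a minimal sketch using scikit-learn on made-up data; the "error rate" feature and labels are fabricated purely for illustration, and none of this is specific to LLMs or observability:

```python
# Minimal sketch of training vs. inference, using scikit-learn on toy data.
# The feature values and labels below are made up purely for illustration.
from sklearn.linear_model import LogisticRegression

# Training: the model learns a pattern from historical, labeled examples.
error_rates = [[0.1], [0.2], [0.8], [0.9], [0.15], [0.85]]   # feature: error rate
is_incident = [0, 0, 1, 1, 0, 1]                             # label: incident or not
model = LogisticRegression().fit(error_rates, is_incident)

# Inference: the trained model is applied to new, unseen inputs.
new_observations = [[0.05], [0.95]]
print(model.predict(new_observations))        # e.g. [0 1]
print(model.predict_proba(new_observations))  # confidence for each class
```

Training happens ahead of time on data you already have; inference is the cheaper per-request step. The problem in observability is that the "new inputs" arrive as an unbounded, extremely high-volume stream.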

It is incredibly hard to build a system that can both train and infer on streaming data at extremely high volumes and throughput. Even if you took a pre-trained model, it's still not feasible.

So... AI can't... handle Observability data.

Not raw, not at high throughput, not in real-time, not at any sizable scale. Even when the amount of compute available eventually makes it technically feasible, we will still be extremely far from being able to derive enough value from that data to sustain the process within a viable business model.

But there is an opportunity to work within the current and future limitations of AI, and that is in pre-processing and curating data before feeding it into these models. Compute is increasing, maximum token capacities are increasing, and when you combine that with stream processing capabilities like observability pipelines, it starts to get interesting.
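
As a rough sketch of what that pre-processing step could look like, here is a hypothetical pipeline stage that collapses a raw log stream into a compact summary before anything reaches a model. The pattern extraction and the `curate` helper are assumptions for illustration, not any particular vendor's implementation:

```python
# Hypothetical sketch: curate a high-volume log stream down to something an LLM
# can actually consume. The pattern extraction and thresholds are illustrative
# assumptions, not a specific product's pipeline.
import re
from collections import Counter
from typing import Iterable

def to_pattern(line: str) -> str:
    """Collapse variable parts (numbers, hex ids) so similar logs group together."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

def curate(log_stream: Iterable[str], max_patterns: int = 20) -> str:
    """Reduce raw logs to a compact pattern/count summary suitable as LLM context."""
    counts = Counter(to_pattern(line) for line in log_stream)
    top = counts.most_common(max_patterns)
    return "\n".join(f"{count:>8}x {pattern}" for pattern, count in top)

# Usage: instead of sending millions of raw lines, send a short summary.
raw_logs = [
    "GET /api/orders/123 200 12ms",
    "GET /api/orders/456 200 9ms",
    "ERROR timeout connecting to db-7 after 3000ms",
    "ERROR timeout connecting to db-7 after 3000ms",
]
print(curate(raw_logs))  # a few lines of patterns + counts, not gigabytes of raw data
```

The summary (plus whatever metadata matters) is what gets handed to a model, which is a few kilobytes of context instead of an unreadable firehose.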

What's next for AI within Observability?

Much of the community believes in a future where artificial intelligence makes observability easy and self-sustaining. A lot of teams are experimenting with various functionality in the space, but it's still quite immature. The best direct example of innovation in this space is probably OnCall AI, functionality that was released earlier this year.

OnCall AI summarizes anomalies in conversational text and provides recommendations on how to remediate them.

Some platforms can offer distributed machine learning capabilities within observability pipelines to identify anomalies, then use the backend for correlation to give users context on the issue. OnCall AI streamlines this process further. When Edge Delta identifies an anomaly, OnCall AI does the following (a rough sketch of this general pattern follows the list below):

  • Analyzes the contents of the logs contributing to the anomaly
  • Communicates the severity of the issue and what it’s impacting
  • Summarizes the negative behavior in conversational text
  • Provides a recommendation on how to resolve the issue
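
For intuition, here is a minimal, hypothetical sketch of that general "detect an anomaly, then summarize it with an LLM" pattern. This is not OnCall AI's actual implementation; the `Anomaly` shape, the prompt, and the `llm_complete` helper are placeholders:

```python
# Hypothetical sketch of the "detect anomaly -> summarize with an LLM" pattern.
# Not OnCall AI's implementation: the data shape, prompt, and llm_complete()
# helper are placeholders for whatever model/API you actually use.
from dataclasses import dataclass

@dataclass
class Anomaly:
    service: str
    pattern: str                 # the log pattern that spiked
    baseline_per_min: float
    current_per_min: float
    sample_lines: list[str]

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to your LLM provider of choice."""
    raise NotImplementedError

def summarize_anomaly(anomaly: Anomaly) -> str:
    samples = "\n".join(anomaly.sample_lines[:5])
    prompt = (
        f"Service '{anomaly.service}' is emitting the log pattern below at "
        f"{anomaly.current_per_min:.0f}/min (baseline {anomaly.baseline_per_min:.0f}/min).\n"
        f"Pattern: {anomaly.pattern}\nSample lines:\n{samples}\n\n"
        "In plain language: summarize what is going wrong, estimate the severity "
        "and likely impact, and suggest a first remediation step."
    )
    return llm_complete(prompt)
```

The key point is that the model only ever sees the small, curated slice of data attached to the anomaly, never the raw stream.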

In other words, OnCall AI helps you move through the troubleshooting process faster, and there are quite a few happy DevOps and SRE teams getting value out of it.

“Edge Delta helped us find things hours faster than we would have. It allows our developers to see – for the first time – what was making the most logs, what was giving the most errors. When things did go bump in the night, what changed specifically.”

-Justin Head, VP of DevOps at Super League

To sum it all up

The integration of AI technologies into observability practices represents significant potential value for DevOps engineers seeking to enhance the monitoring, troubleshooting, and optimization of mission-critical services. However, as the Bill Gates illustration above suggests, deploying and constantly depending on AI over very large datasets in real-time observability poses significant issues. Managing the sheer volume, velocity, and variety of data generated by modern distributed systems requires robust infrastructure, scalable algorithms, and efficient data processing pipelines. Ensuring the accuracy, reliability, and timeliness of AI-driven insights amid ever-changing real-time data streams requires thoughtfulness.

It is exciting!

The journey towards leveraging AI for observability on massive scales may be challenging, but for those of us up to the task, it also presents unique opportunities for innovation, efficiency gains, and proactive problem-solving in the world of DevOps.
