The Future of Work and AI: Innovating at the Edge

Last Friday was one of those rare days in San Francisco when the blanket of fog lifts, the bay wind calms down, and the temperature rises above 75°F / 24°C. I decided quickly: I would walk to my lunch meeting with Jean-Noel Moyne, our Field CTO at Synadia, in Hayes Valley, get my steps in for the day, and catch up on AI podcasts. As I climbed the Pacific Heights hill and crossed the city's transportation arteries like California St and Geary St, the long line of cars stuck at traffic lights saddened me: so much prime time wasted, so much pollution from mostly single-occupant vehicles, and so much anxiety and stress from having no control over the ETA.

Tech roles make up 11.6% of total Bay Area employment, and most can be effectively done remotely in 2024. Yet, our traffic keeps getting worse because most tech employers want their employees in the office, some even for five days a week.

GPT-4o: "Rush hour in San Francisco"

The future of tech work and the future of AI have a lot in common: much more innovation happens at the edge than most people realize.

The edge is the new frontier of AI innovation

Applying AI at the edge means using existing data in entirely new ways. Information initially intended for quality assurance and documentation can now be used for dynamic decision-making in ongoing business activities. Edge computing has two primary roles to play in the AI economy:

  • To collect, process, and distribute data from all points of generation, such as factory floors, retail POS locations, or connected vehicles. This data is then used for AI model training in the cloud.
  • To perform AI model inferencing as close as possible to where the data is created: at the edge.

While most AI news focuses on the latest and greatest LLMs in the cloud, my colleagues and I at Synadia predict that the edge is the new frontier of AI innovation. It will require us to rethink connectivity at the edge of the network to efficiently support two use cases:

  • Optimal for high-value, low-volume data: Transmit only the mission-critical events inferred by AI models running locally at the edge, which process raw data in real time. Examples include an AI model identifying potential emergencies from video feeds, such as a fire or an accident, or detecting from sensor data that a machine on a factory floor or at a remote site is drifting out of specification, or is about to break and will need service soon.
  • Optimal for low-value, high-volume data: Process large volumes of data locally at the source, such as feeding video frames or sensor readings to models running on-site, and share only the already-inferred events with the cloud for alerts.
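To make the pattern concrete, here is a minimal sketch of the first case: raw sensor readings are processed locally, and only the inferred events would ever be transmitted upstream. The threshold "model" and the data are hypothetical stand-ins for a real on-device AI model.

```python
import statistics

# Hypothetical cutoff: readings this many standard deviations from the
# rolling mean count as "out of specification" events worth transmitting.
ANOMALY_Z_SCORE = 3.0

def infer_events(readings, window=20):
    """Run a trivial local 'model' over raw sensor readings and return
    only the inferred high-value events. The raw stream itself never
    leaves the edge; only the events would be sent to the cloud."""
    events = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev and abs(readings[i] - mean) / stdev > ANOMALY_Z_SCORE:
            events.append({"index": i, "value": readings[i]})
    return events

# 100 in-spec readings plus a single spike the edge model should flag.
raw = [10.0 + 0.1 * (i % 5) for i in range(100)]
raw[60] = 25.0
events = infer_events(raw)
```

Out of 100 raw readings, only one event survives the local inference step, which is exactly the bandwidth win described above.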

Designing a better AI cloud-edge system for everyone

By reserving precious network bandwidth for the most mission-critical data, we improve the overall AI cloud-edge system for everyone. At the edge, bandwidth is often limited and intermittent under harsh conditions, such as agricultural sensors exposed to extreme weather. Keeping low-value data at its point of generation also reduces network and cloud ingress costs.

Similarly, by allowing all tech workers who can (and prefer) to work from home to do so, we’d free up space on the freeways for the essential workers who need to commute.

GPT-4o: "Off peak hour in San Francisco"

How can we solve both problems, then? The future of work is a topic for another occasion, but solving Gen AI at the edge is simpler: use a NATS-based tech stack. NATS.io provides the connectivity, data management, and compute capabilities essential for today's distributed, edge-focused AI applications.
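A full NATS walkthrough is beyond this post, but to give a feel for the pattern, here is a minimal sketch using the nats-py client to publish one inferred edge event to JetStream. The subject layout, stream assumption, machine ID, and server URL are all illustrative, not fixed NATS conventions.

```python
import asyncio
import json

def encode_event(machine_id: str, event: dict) -> tuple[str, bytes]:
    """Build a NATS subject and payload for an inferred edge event.
    The subject layout (edge.events.<machine_id>) is illustrative."""
    subject = f"edge.events.{machine_id}"
    return subject, json.dumps(event).encode()

async def publish_event(machine_id: str, event: dict,
                        url: str = "nats://localhost:4222"):
    """Publish one inferred event to JetStream so it is persisted even
    if cloud consumers are briefly offline. Assumes a NATS server is
    reachable and a stream covering edge.events.> already exists."""
    import nats  # nats-py client: pip install nats-py

    nc = await nats.connect(url)
    try:
        js = nc.jetstream()
        subject, payload = encode_event(machine_id, event)
        return await js.publish(subject, payload)
    finally:
        await nc.close()

if __name__ == "__main__":
    asyncio.run(publish_event("press-07", {"status": "out_of_spec"}))
```

Because only small, already-inferred events travel over the wire, even an intermittent uplink at a remote site can keep the cloud informed.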

That’s exactly what Jean-Noel Moyne and I discussed over lunch in San Francisco’s lovely Hayes Valley neighborhood. Stay tuned for our upcoming article on the topic!

➡ In the meantime, read Jean-Noel’s excellent post: Why NATS.io JetStream is so well suited to AI at the edge.

➡ You can also read a recap of my conversation with Derek Collison, where we dove into How to get your apps ready for life at the edge.

#AI #EdgeComputing #NATS #FutureOfWork #TechInnovation #Synadia


Simone Morellato

Sr. Director of Marketing | Marketer, Builder, Innovator | AI Trailblazer, Kubernetes Enthusiast | Help Tech Companies Articulate Product Value and Differentiate it in the Market

5mo

Excellent insights, Justyna! The distinction between high-value, low-volume data and high-volume, low-value data is particularly relevant in the domain of edge computing. It's encouraging to observe how AI can enhance real-time decision-making and reduce bandwidth consumption. This concept takes me back to our days at Riverbed Technology and their memorable slogan, "Think Fast." Additionally, the AI industry is progressing towards smaller, more efficient models ideal for edge computing. For instance, OpenAI recently released GPT-4o Mini. This new model is not only more powerful than GPT-3.5 but also 60% more cost-effective.  Such advancements are promising for the future of AI and its applications.

Hello, my name is Ollie Stevenson. I'm a long-time friend of Derek's from earlier days. I have some ideas to go along with your point about freeing up the roads and the streets. I would like to share them with you if you have time. Let me know and I will write them to you in this format. Thanks in advance, Ollie

Justyna, nice to see you connecting with AI and with a different approach. Regards

Rachel Simms

Business Development Manager, Virtual Subsidiary PEO and Accounting

5mo

Great article!

Mauro Carobene

Head of Customer Interactions Suite Tata Communications - CEO at Kaleyra Group - Connecting enterprises with their own customers - Board Member & Advisory Board member

5mo

I don't want to comment on the importance of moving AI processing to the edge... My comment is much more on the future of work. Getting a proper cappuccino and San Pellegrino water face to face makes the overall experience of the meeting much better... A virtual way of working will never be able to replace the human touch that is needed in every interaction. The question is how often we need to interact over a proper cappuccino to make the job stimulating and the experience rich enough.
