How our lives depend on computations.
How fast is fast?
How big is the difference between 5 seconds and a second? Or 300ms and 40ms?
I'm obviously (and provocatively) asking loaded questions, since the answers depend heavily on the context.
5 seconds doesn't matter much in something like synchronizing the web version of a document with your desktop (though it can be annoying).
Yet, 5 seconds is a huge latency and a critical bottleneck in something like real-time natural language processing or computer vision on mobile devices.
300 ms doesn't sound like much either, unless it's the system latency of network-assisted pedestrian protection in an autonomous vehicle. According to the 5GCAR requirements, that number must be at least 10 times lower.
Anyone reading these lines can relate to critical performance bottlenecks in their own industry. More often than not, these are computational latencies that hurt business processes, customer experience, safety, and more.
What's cooking out there?
In the financial industry, we depend on high-frequency trading, the speed of risk and derivatives valuation, and the scalability of real-time data processing.
Manufacturing often needs to run automated optical inspection rapidly, or to analyze large pools of data for predictive maintenance.
Healthcare relies on computer-aided tools and sensors in CT, MRI, and other diagnostic systems that support diagnosis and treatment decisions in real time.
Research centers process significant amounts of data on CPUs, and their projects often get stretched out by enormous model training times or dataset processing times.
While reading this, you have probably already thought of a couple more applications where performance bottlenecks hurt. They are everywhere, and that's normal. Rapid advances in technology and the adoption of computer-aided systems are natural progress for businesses and society. Bottlenecks occur, and understanding how to address them is the key.
While CPUs get better every year, in most cases they still offer fewer than a dozen general-purpose cores. That's where graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) come in as exceptional computational accelerators.
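As a purely illustrative aside (not from the original article), here is a minimal CUDA sketch of what "offloading to an accelerator" looks like in practice: a simple array operation is copied to the GPU, spread across thousands of lightweight threads, and copied back. The SAXPY kernel, array size, and launch configuration are my own illustrative assumptions, not a benchmark.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each GPU thread handles one element, so the loop a single CPU core
// would run serially is spread across thousands of lightweight threads.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;  // about one million elements (illustrative size)
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    // Allocate device buffers and copy the input data to the GPU.
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();

    // Copy the result back to the host.
    cudaMemcpy(y.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);  // expected: 4.0

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}

The pattern is the same whether the workload is AI/ML inference or optical inspection: move the data over, run many small parallel tasks at once, and bring the result back.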
Boost performance + Save energy.
It was hard to believe when I saw the results for the first time. Getting 3x the performance with 1.6x less power consumption in specific AI/ML applications caught me off guard. While adding more CPU cores yields marginal gains, GPU and FPGA accelerators can improve performance by orders of magnitude.
Anyway, I'm quite an ambassador of high-performance computing and hardware acceleration right now. Seeing the number of applications across industries makes me thrilled about how seamless computing in the devices and systems around us could become. If you're curious how these things work, or the topic resonates with you, catch up with more here: R&D | Things.
The dialogue.
So now I'm curious: do these challenges resonate with you as they do with me? Are there any computational performance bottlenecks in your organization that are critical to your business? And most importantly, how do you tackle them technically?
Stay safe, and keep it up.
Sincerely, Andrii.