Technology 2020: all at the edge
2019 has been a year of de-globalization, post-truth politics, student unrest, the #MeToo movement's impact, a US presidential impeachment, a UK political roller-coaster, and visible climate-change effects. These broad trends shape technical directions. De-globalization has resulted in increased regulation for data sovereignty. Post-truth politics demands technology to detect fake news, images, and videos. Governments have used face-recognition technologies to identify persons of interest. Climate change has raised consciousness of the carbon footprint of the digital age.
From a purely technological standpoint, the biggest advances of 2019 were at the edge. Edge, here, refers to the ability to provide services closest to the source of data, including mobile phones, ground cameras, aerial cameras, and industrial sensors. Neurocomputing chips have joined the CPU and GPU to power edge compute on mobile phones. 5G promises to enable low-latency, device-to-device, intelligent real-time collaboration at the edge. In 2020, expect technologies at the edge to provide faster, cheaper, and better services, along with newer ways of extending human life and the comfort of living.
Now, here are three Edge AI technologies we would like to see in 2020:
Edge AI-Based Augmented Reality for Enhanced Data Visualization in Healthcare: Three-dimensional (3D) visualization and the ability to work interactively with image data are already a reality. Soon, surgeries could first be performed on images of the patient's body, with a detailed representation of the organ being operated on. The coordinates of the surgical procedure would then be transmitted to a robotic surgeon, which could perform the actual surgery aided by live visualization and registration of the organ. Surgery templates could serve as starting points, to be modified for each patient's data. Robots could perform the surgery much faster than human specialists; hence more people could be treated.
Edge-AI Based Immersive Entertainment with Intelligent and Interactive Artificial Cast: Microsoft Comic Chat, later known as Microsoft Chat, was released in 1996. Online conversations were rendered as a comic strip that unfolded live as the chat progressed. Microsoft Comic Chat was, sadly, discontinued in 2001. The core concept, however, has come a long way in other products, such as avatar-based games and chats. Recent developments such as edge AI allow much more creativity to be infused into immersive applications. Immersive entertainment, as an example, allows individuals to become part of a story plot along with other real and computer-generated individuals. The plot itself would be dynamic, adapting to the real-world characters who take part and the roles they play. Bandersnatch, the interactive episode of the popular Black Mirror series released by Netflix in December 2018, leaves a lot to be desired: the only interactivity is in choosing paths at story-line forks inserted in the narrative, and many of the forks are dead ends.
The ability to use GANs to morph character faces to match those of consumers is now a reality and should soon become a feature of OTT content and gaming.
Edge-AI Based and Chaos-Based Techniques to Predict Natural Disasters and Other Natural Patterns: Natural disasters have precursors and follow patterns that require massive computing resources to monitor, track, and turn into warnings. Chaos theory describes systems whose state evolves with time and is so sensitive to initial conditions that small initial perturbations can grow exponentially. Weather is considered a chaotic system. Grid computing (harnessing millions of networked computers for intensive computations), combined with advances in artificial intelligence and chaos theory, will make it possible to issue natural-disaster warnings well ahead of occurrence with very few false positives. The combination of edge computing, AI, and chaos theory will also facilitate speedy action.
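The sensitivity to initial conditions described above can be demonstrated in a few lines. The sketch below uses the logistic map (a standard textbook chaotic system, chosen here for illustration; it is not a weather model): two trajectories that start a hundred-billionth apart diverge to order-one separation within a few dozen steps.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two trajectories differing by 1e-10 in their initial condition.
x, y = 0.2, 0.2 + 1e-10
gaps = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

# The gap roughly doubles per step until it saturates at order one,
# which is why long-range point forecasts of chaotic systems fail.
print(f"gap after 1 step: {gaps[0]:.1e}, max gap: {max(gaps):.2f}")
```

This exponential error growth is exactly why disaster prediction needs both massive compute (ensembles of perturbed forecasts) and dense edge sensing (better initial conditions).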
And, here are two Edge AI technologies that we at Myelin will deliver in 2020:
Edge-AI for OTT Entertainment: Global OTT video-on-demand revenues are $50B in 2019 and growing. Global OTT audio revenues are $60B and growing. Quality of experience, personalization (including recommendation engines), and efficient content delivery constitute a $5B opportunity by 2024, in our estimate. Live commerce, UGC, education, VR, AR, and game streaming are nascent but fast-growing additional opportunities. Video super-resolution at the edge enables zero-delay, zero-rebuffer OTT services with the highest quality of experience on the device, at a fraction of the cost. 75% of data traffic today is video, and this is set to increase with 5G. Technologies such as video super-resolution at the edge will accelerate the penetration of video streaming while enhancing the viewer experience. The Myelin super-resolution solution performs sub-pixel convolution and further processing in real time, at the edge. We work with hardware partners to access chipset features that enable deployment of streaming content upscaled to 4K resolution at 30 fps and even 60 fps, enhancing both video playback and gaming experience.
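To make the sub-pixel convolution idea concrete, here is a minimal NumPy sketch of the pixel-shuffle rearrangement at its core: a network outputs C·r² feature channels at low resolution, and pixel shuffle rearranges them into C channels at r-times the resolution. The function name, dimensions, and random input are illustrative only; this is not Myelin's implementation, and a production system runs the preceding convolutions on the device's NPU/GPU.

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) feature tensor into a (C, H*r, W*r) image.

    Each group of r*r channels supplies the r x r sub-pixel block for
    one output pixel neighborhood (the sub-pixel convolution layout).
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Illustrative: a 720p single-channel feature map upscaled 3x toward 4K class.
features = np.random.rand(9, 720, 1280).astype(np.float32)  # 1 channel * 3*3
upscaled = pixel_shuffle(features, 3)
print(upscaled.shape)  # (1, 2160, 3840)
```

Doing this rearrangement (rather than upsampling first and convolving at full resolution) keeps almost all computation at the low input resolution, which is what makes real-time super-resolution feasible on mobile chipsets.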
Edge-AI for Wellness: Recent scientific studies have established chronic inflammation as an indicator of general wellness and as a risk indicator for chronic diseases. It is now accepted that managing aging and disease is aided by measuring and managing inflammation. Currently available techniques for identifying inflammation and/or stress rely on markers determined via invasive tests that require samples of blood, saliva, urine, and/or sweat from the user. Examples of such markers include C-reactive protein (CRP), cortisol, and serum proteins. At Myelin Foundry, we are conducting trials and developing systems to noninvasively predict inflammation and stress markers. We use deep neural networks to perform sensor-data fusion of structured and unstructured user data. Structured data includes bio-impedance, heart-rate variability, age, and gender. Unstructured data includes pictures of the eyes, nails, and tongue, as well as voice samples. Our system has a wearable form factor and hence facilitates continuous non-invasive measurement. Initial tests and statistical analysis show that the proposed approach is feasible.
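The fusion step described above can be sketched as a simple late-fusion forward pass: an encoder (e.g., a CNN on eye/tongue images or an audio model on voice) produces an embedding of the unstructured data, which is concatenated with the normalized structured features and passed through dense layers to a marker estimate. Everything below — dimensions, weights, feature values, and the `dense` helper — is a hypothetical illustration of the pattern, not Myelin's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """A ReLU-activated dense layer: max(Wx + b, 0)."""
    return np.maximum(w @ x + b, 0.0)

# Structured branch: bio-impedance (ohms), HRV (ms), age, gender flag.
structured = np.array([480.0, 55.0, 42.0, 1.0])
structured = (structured - structured.mean()) / structured.std()  # crude scaling

# Unstructured branch: pretend an image/voice encoder yielded a 16-d embedding.
embedding = rng.standard_normal(16)

# Late fusion: concatenate both branches, then dense layers to one score.
fused = np.concatenate([structured, embedding])          # 20-d fused vector
h = dense(fused, rng.standard_normal((8, 20)), np.zeros(8))
marker_score = float(rng.standard_normal((1, 8)) @ h)    # e.g., CRP-like score
print(f"fused dim: {fused.shape[0]}, marker score: {marker_score:.3f}")
```

In a real system the weights would be trained against lab-measured markers, and the whole forward pass is small enough to run on a wearable-class processor, which is what makes continuous on-device estimation plausible.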
With the possibilities and realities discussed in this article, one might worry that AI is now set to go beyond the human mandate. So, let me address that concern. Humans, in my view, have a complex analog intelligence coded into the connectivity and chemical interactions of the synapses of our neural networks. Humans also have an even more complex digital intelligence coded in the genome, which we have only begun to understand. We are attempting to model the analog intelligence with the multi-layer perceptron and other deep artificial neural networks, and we have made decent progress in translating a limited understanding of human analog intelligence into machine intelligence. We have also attempted to model digital intelligence with approaches such as genetic algorithms; however, these approaches, while based on evolutionary principles, rest on a very limited understanding of human digital intelligence. On top of human analog and digital intelligence is the microbiome-based intelligence derived from the host of bacterial life humans carry on themselves, and our understanding of that is embryonic. In summary, our comprehension of human intelligence is a journey only just begun. AI should not be feared as a replacement for human intelligence; it should instead be considered yet another technological tool with remarkable capability to automate and aid decisions. There have always been specific tasks that computers perform better than humans, and that list will only grow where data sets are large and high-dimensional. This is not to be confused with an ability to extrapolate beyond the trained data or trained outcomes and become self-directed.
Also, a few words on climate change. Our planet exists in a very delicate balance, achieved through each constituent of the planet playing exactly its role. As humans, we have already strayed far from that role; we have permanently damaged nature and even eliminated species. We need to evolve to where technology provides for human health and comfort with the smallest environmental footprint, sharing the planet with other life forms today and protecting what has rightfully been banked for the future. Let us raise a voice for the planet, for frugality of consumption, and reserve the abundance mindset for giving rather than taking.