Pivotal new paradigm in Foundation Models that makes GenAI explainable, energy efficient, and powerful on edge devices: welcome to Liquid AI, the MIT startup that has raised the bar for all LLMs when it comes to versatility and safe applicability to industry.
In May this year I spoke at Capgemini Spark24 in Paris. The final speaker, a young MIT scientist named Ramin Hasani, took the stage to present his team's ground-breaking framework for developing and deploying Foundation Models for edge computing. The sheer elegance of their model training approaches, and the astonishing capability of these models to run on edge devices, sounded like the stuff of dreams.
The biggest highlight for me, given the work I do in cloud and cyber-security, is how Liquid AI's foundation models (LFMs) solve some of the biggest headaches in deploying GenAI in business environments: the latency of data traffic exchange, the cyber-security risks of taking proprietary data off premises, and the need to be online for every transaction. LFMs keep data on-premise, with no cloud calls, running on-device: their 3B and 40B models perform at each scale while maintaining a smaller memory footprint and more efficient inference.
Liquid AI can build private, edge, and on-premise AI solutions for enterprises of any size. Their partnerships with Capgemini, Japan's CTC, and Deloitte have allowed them to test their LFM engine with real clients, and they are currently expanding into financial services, biotechnology, and consumer electronics.
You can try Liquid AI's LFMs on Liquid Playground, Lambda API, and Perplexity Labs, and soon on Cerebras Inference. The LFM stack is being optimized for Nvidia, AMD, Qualcomm, Cerebras, and Apple hardware.
Thanks to Ramin's co-founder, Professor Daniela Rus, one of the most visionary roboticists in the world today and director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), I was invited to attend the VIP dinner in Boston last night – a powerful gathering of their investors and the local AI community – and, today, their product launch at a packed and fervent Kresge Auditorium on the MIT campus.
How do I feel when I am told that Liquid AI can guarantee a white-box AI? An explainable AI? A trustworthy AI? I am ecstatic. For the last two years I have been frustrated by the "hallucination" antics of models that are still in training yet already in people's hands, by concerning LLMs whose inferences present more obstacles than clarity, and by unsustainable, energetically impossible advanced AIs. I feel a landslide coming, driven by a new paradigm, one where we no longer have to force square pegs into round holes.
Thank you Ramin Hasani, Daniela Rus, Mathias Lechner, Alexander Amini, and the incredibly accomplished Liquid AI crew. You have brought us something demonstrable, undeniable, and full of promise. A long-awaited pivot that I am sure will be embraced and adopted by many.
Today we unveil Liquid AI’s first products powered by LFMs. Join our livestream from MIT Kresge Auditorium https://bit.ly/3BWsUgf or on X. #LiquidAI #Future #MIT