Liquid AI

Information Services

Cambridge, Massachusetts

Build capable and efficient general-purpose AI systems at every scale.

About us

Our mission is to build capable and efficient general-purpose AI systems at every scale.

Website
http://liquid.ai
Industry
Information Services
Company size
11-50 employees
Headquarters
Cambridge, Massachusetts
Type
Privately Held
Founded
2023


Updates

  • Liquid AI reposted this

    AMD

    Congratulations, Liquid AI! AMD is proud to support Liquid AI’s innovative approach to AI models and its next phase of growth.

  • We raised a $250M Series A led by AMD Ventures to scale Liquid Foundation Models and accelerate their deployment on-device and at enterprises. At Liquid, our mission is to build the most capable and efficient AI system at every scale.

    Our CEO, Ramin Hasani, says: “We are proud that our new industry-leading partners trust our mission; together, we plan to unlock sovereign AI experiences for businesses and users.”

    “Liquid AI’s unique approach to developing efficient AI models will push the boundaries of AI, making it far more accessible,” said Mathew Hein, Senior Vice President and Chief Strategy Officer of Corporate Development at AMD. “We are thrilled to collaborate with Liquid AI to train and deploy their AI models on AMD Instinct GPUs and support their growth through this latest funding round.”

    Full announcement: https://lnkd.in/dt5Qr-Nn
    To get hands-on with LFMs and deploy them within your enterprise, reach out to our team: https://lnkd.in/dTbTwiyN
    To join Liquid AI’s mission, check out open positions across research, engineering, go-to-market, and operations: https://lnkd.in/dwpfpwyt

    #liquidai #amd #seriesa #genai

    • Liquid AI x AMD
  • New Liquid AI research: STAR — Evolutionary Synthesis of Tailored Architectures.

    At Liquid AI, we design foundation models with two macro-objectives: maximize quality and efficiency. Balancing the two is challenging. We built a new algorithm, STAR, to make progress toward this goal. Read more about it here: https://lnkd.in/drM_4YGk

    We first developed a new design theory for the computational units of modern AI systems. We then used it to devise an efficient encoding of architectures into genomes, and applied evolutionary algorithms to discover hundreds of new architecture designs.

    STAR’s capabilities have far-reaching implications: thanks to its ability to optimize any mixture of metrics, combined with the versatility of our design space, we are seeing continuous improvements in both the diversity and quality of synthesized designs. The design space for model architectures is vast and versatile. With STAR, we can optimize large populations of new architectures at scale to obtain the highest-performing models that satisfy given computational requirements, such as inference cache size. Read the full paper here: https://lnkd.in/dj-TTUCD

    At Liquid AI, we are building hardware-aware, best-in-class, and efficient AI systems at every scale, with a design theory for models grounded in dynamical systems, signal processing, and numerical linear algebra. If this resonates with you, consider joining us: https://lnkd.in/dwpfpwyt

    Amazing work by Armin Thomas, Rom Parnichkun, Alexander Amini, Michael Poli, Stefano Massaroli, and the entire Liquid AI team!
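
    The post describes STAR only at a high level. As a loose illustration of the kind of evolutionary loop it sketches (architectures encoded as genomes, mutation and crossover, selection against a mixture of quality and efficiency metrics), here is a minimal toy in Python. Every name, the operator vocabulary, and the scoring terms below are invented for illustration; this is not Liquid AI's STAR implementation.

    ```python
    # Toy evolutionary architecture search in the spirit of the STAR post.
    # Genome encoding, operators, and scoring are all hypothetical.
    import random
    from dataclasses import dataclass

    OPERATOR_VOCAB = ["attention", "conv", "ssm", "gated_mlp"]  # assumed unit types

    @dataclass
    class Genome:
        layers: list  # sequence of computational-unit names

    def random_genome(depth: int = 8) -> Genome:
        return Genome([random.choice(OPERATOR_VOCAB) for _ in range(depth)])

    def mutate(g: Genome, rate: float = 0.2) -> Genome:
        return Genome([random.choice(OPERATOR_VOCAB) if random.random() < rate else op
                       for op in g.layers])

    def crossover(a: Genome, b: Genome) -> Genome:
        cut = random.randrange(1, len(a.layers))  # single-point crossover
        return Genome(a.layers[:cut] + b.layers[cut:])

    def score(g: Genome) -> float:
        # Stand-in for the "mixture of metrics" objective: in practice this
        # would combine benchmark quality with efficiency terms such as
        # inference-cache size. Here: penalize cache-heavy attention layers,
        # reward unit diversity, add noise to mimic a noisy evaluation.
        cache_penalty = g.layers.count("attention") * 0.1
        diversity_bonus = len(set(g.layers)) * 0.05
        return diversity_bonus - cache_penalty + random.gauss(0, 0.01)

    def evolve(pop_size: int = 32, generations: int = 20) -> Genome:
        population = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=score, reverse=True)
            parents = population[: pop_size // 4]  # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=score)

    print(evolve().layers)
    ```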

  • Liquid AI reposted this

    Maxime Labonne

    Head of Post-Training @ Liquid AI

    This is the proudest release of my career :) At Liquid AI, we're launching three LLMs (1B, 3B, 40B MoE) with SOTA performance, based on a custom architecture. Minimal memory footprint & efficient inference bring long-context tasks to edge devices for the first time!

    📊 Performance
    We optimized LFMs to maximize knowledge capacity and multi-step reasoning. As a result, our 1B and 3B models significantly outperform transformer-based models on various benchmarks. And it scales: our 40B MoE (12B activated) is competitive with much bigger dense or MoE models.

    🧠 Memory footprint
    The LFM architecture is also super memory-efficient. While the KV cache in transformer-based LLMs explodes with long contexts, we keep it minimal, even with 1M tokens. This unlocks new applications, like document and book analysis with RAG, directly in your browser or on your phone.

    🪟 Context window
    In this preview release, we focused on delivering a best-in-class 32k context window. These results are extremely promising, but we want to expand it to very, very long contexts. Here are our RULER scores (https://lnkd.in/e3xSX3MK) for LFM-3B ↓

    💧 LFM architecture
    The LFM architecture opens a new design space for foundation models. It is not restricted to language, but can be applied to other modalities: audio, time series, images, etc. It can also be optimized for specific platforms, like Apple, AMD, Qualcomm, and Cerebras.

    💬 Feedback
    Please note that we're a (very) small team and this is only a preview release. 👀 Things are not perfect, but we'd love to get your feedback and identify our strengths and weaknesses. We're dedicated to improving and scaling LFMs to finally challenge the GPT architecture.

    🥼 Open science
    We're not open-sourcing these models at the moment, but we want to contribute to the community by openly publishing our findings, methods, and interesting artifacts. We'll start by publishing scientific blog posts about LFMs, leading up to our product launch event on October 23, 2024.

    ✍️ Try LFMs!
    You can test LFMs today using the following links:
    Liquid AI Playground: https://lnkd.in/dSAnha9k
    Lambda: https://lnkd.in/dQFk_vpE
    Perplexity: https://lnkd.in/d4uubMj8

    If you're interested, find more information in our blog post: https://lnkd.in/enxMjVez
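
    As a quick sanity check on the KV-cache point above: in a standard transformer, KV-cache size grows linearly with context length, which is what makes very long contexts hard on edge devices. The model shape below is an assumed, generic Llama-style 3B configuration, not LFM-3B or any specific published model.

    ```python
    # Back-of-the-envelope KV-cache math; config numbers are illustrative.
    def kv_cache_bytes(seq_len, n_layers=28, n_kv_heads=8, head_dim=128,
                       bytes_per_elem=2):  # 2 bytes/elem for fp16/bf16
        # 2x for keys and values, stored per layer, per KV head, per token.
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len

    for tokens in (4_096, 32_768, 1_000_000):
        gib = kv_cache_bytes(tokens) / 2**30
        print(f"{tokens:>9,} tokens -> {gib:8.2f} GiB of KV cache")

    # ~0.44 GiB at 4k tokens, ~3.5 GiB at 32k, ~107 GiB at 1M tokens:
    # linear growth is why a roughly constant-size recurrent state, as the
    # post claims for LFMs, matters for on-device long-context use.
    ```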

  • Today we introduce Liquid Foundation Models (LFMs) to the world with the first series of our Language LFMs: a 1B, a 3B, and a 40B.

    LFM-1B performs well on many public benchmarks in the 1B category, making it the new state-of-the-art model at this size. This is the first time a non-GPT architecture significantly outperforms transformer-based models.

    LFM-3B delivers incredible performance for its size. It not only places first among 3B-parameter transformers, hybrids, and RNN models, but also outperforms the previous generation of 7B and 13B models. It is also on par with Phi-3.5-mini on multiple benchmarks, while being 18.4% smaller. LFM-3B is the ideal choice for mobile and other edge text-based applications.

    LFM-40B offers a new balance between model size and output quality. It uses 12B activated parameters at inference. Its performance is comparable to models larger than itself, while its MoE architecture enables higher throughput and deployment on more cost-effective hardware. (See the sketch after this post for how activated parameters relate to total parameters.)

    LFMs are large neural networks built with computational units deeply rooted in the theory of dynamical systems, signal processing, and numerical linear algebra.

    LFMs are memory efficient: they have a reduced memory footprint compared to transformer architectures. This is particularly true for long inputs, where the KV cache in transformer-based LLMs grows linearly with sequence length.

    Read the full blog post: https://lnkd.in/dhSZuzSS
    Read more on our research: https://lnkd.in/dHwztmfi

    Try LFMs today on:
    Liquid AI Playground: https://lnkd.in/dSAnha9k
    Lambda: https://lnkd.in/dQFk_vpE
    Perplexity: https://lnkd.in/d4uubMj8

    Get in touch with us: https://lnkd.in/dttAbPgs
    Join our team: https://lnkd.in/dwpfpwyt

    Liquid Product Launch Event - Oct 23, 2024, Cambridge, MA
    Come join us at MIT Kresge, Cambridge, MA on October 23rd, 2024, to learn more about Liquid as we unveil more products and progress on LFMs and their applications in consumer electronics, finance, healthcare, biotechnology, and more! RSVP: https://lnkd.in/dYhxqFHU
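
    On the "12B activated parameters" point: in a mixture-of-experts layer, a router selects a few experts per token, so only a fraction of the total parameters participates in each forward pass. The sketch below is a generic top-k MoE layer with made-up sizes; it is not LFM-40B's actual routing or configuration.

    ```python
    # Generic top-k mixture-of-experts layer; all sizes are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 64, 8, 2

    router_w = rng.normal(size=(d_model, n_experts))
    experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

    def moe_layer(x):  # x: (d_model,), one token's hidden state
        logits = x @ router_w
        chosen = np.argsort(logits)[-top_k:]  # indices of the top-k experts
        weights = np.exp(logits[chosen]) / np.exp(logits[chosen]).sum()
        # Only the chosen experts' weights are touched for this token,
        # which is why "activated" parameters << total parameters.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

    out = moe_layer(rng.normal(size=d_model))
    print(out.shape, f"- used {top_k}/{n_experts} experts for this token")
    ```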



Funding

Liquid AI: 3 total rounds

Last round: Series A, US$250.0M

See more info on Crunchbase