Liquid AI

Information Services

Cambridge, Massachusetts 15,988 followers

Build capable and efficient general-purpose AI systems at every scale.

About us

Our mission is to build capable and efficient general-purpose AI systems at every scale.

Website
http://liquid.ai
Industry
Information Services
Company size
11-50 employees
Headquarters
Cambridge, Massachusetts
Type
Privately Held
Founded
2023

Locations

Employees at Liquid AI

Updates

  • Introducing LFM-7B, our new best-in-class language model, optimized for chat capabilities. LFM-7B leads its size class in English, Arabic, and Japanese, is native in French, German, and Spanish, and is built to be the substrate for private enterprise chat, code, fast instruction following, and agentic workflows. www.liquid.ai/lfm-7b

    Chat Capabilities: LFM-7B is specifically optimized for response quality, accuracy, and usefulness. To assess its chat capabilities, we use a diverse jury of frontier LLMs to compare responses generated by LFM-7B against other models in the 7B-8B parameter category; aggregating multiple judges reduces individual biases and produces more reliable comparisons. We compared answers to English prompts covering curated business use cases such as instruction following, questions from Arena-Hard-Auto, and real-world conversations. Thanks to our comprehensive preference alignment process, LFM-7B outperforms every LLM in the same size category: in a series of head-to-head chat evaluations judged by a jury of 4 frontier LLMs, LFM-7B wins against all other models in this size class.

    Automated Benchmarks: LFM-7B maintains the expansive knowledge and reasoning of our other models, and adds improved coding and instruction-following abilities on top of its enhanced conversational skills.

    Multilingual Capabilities: LFM-7B supports English, Spanish, French, German, Chinese, Arabic, Japanese, and Korean.

    Arabic Arena: For the Arabic arena, we use a curated subset of real-world conversations in Arabic. LFM-7B is fluent in Arabic and significantly preferred over other models in the same size category.

    Japanese Arena: For the Japanese arena, we use a combination of ELYZA-tasks-100 (Sasaki et al.) and real-world prompts curated by our partner ITOCHU Techno-Solutions Corporation (CTC), creating a diverse set of prompts representative of business use cases. LFM-7B also leads our Japanese arena by a significant margin.
    Memory Efficiency: Like our previous models, LFM-7B has a minimal memory footprint compared to other architectures. The memory efficiency of LFM-7B allows for several key features, including long-context understanding, energy-efficient inference, and high-throughput deployments on local devices. LFM-7B can also be efficiently customized to any knowledge or task using our on-premise fine-tuning stack. Consequently, LFM-7B significantly increases value for end users in applications such as private enterprise chat, secure code generation, fast instruction following, long document analysis, energy-efficient on-device AI assistants, and multi-step agentic workflows. Try the model today on Liquid Playground: https://lnkd.in/dSAnha9k and soon on Lambda API, Perplexity Playground, Amazon Web Services (AWS) Marketplace, and OpenRouter. Get in touch with us for API access, our fine-tuning stack, purchasing the weights, or discussing use cases: https://lnkd.in/dTbTwiyN
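The "LLM-as-jury" evaluation described above can be sketched in a few lines. This is an illustrative toy, not Liquid AI's evaluation code: each judge model picks the better of two responses, and the panel's majority verdict is recorded, which is how multiple judges dilute any single judge's bias.

```python
# Toy sketch of pairwise LLM-jury aggregation (hypothetical, not the
# actual LFM-7B evaluation pipeline): each judge votes 'A' or 'B' for
# the better response; the majority verdict decides, with ties kept.
from collections import Counter

def jury_verdict(votes):
    """votes: list of 'A' or 'B' picks, one per judge model."""
    tally = Counter(votes)
    if tally["A"] == tally["B"]:
        return "tie"
    return max(tally, key=tally.get)

# Hypothetical verdicts from a 4-judge panel on two prompts:
print(jury_verdict(["A", "A", "B", "A"]))  # -> A
print(jury_verdict(["A", "B", "A", "B"]))  # -> tie
```

Win rates over a prompt set are then just the fraction of non-tie verdicts a model collects, which is the kind of head-to-head score the post reports.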

  • We are pleased to announce that Omar Mir, International Board Member of World Wide Technology, has joined us as a strategic member of our Advisory Council. Omar brings 20+ years of experience in the tech industry, with a strong focus on AI and advanced technologies, across the UK, Europe, the USA, and the Middle East. He has successfully advised a multitude of companies, from tech giants to startups, and brings a strong global network of leaders, innovators, and game-changers in technology, media, and government. We look forward to working with Omar and welcome him to our team.

  • Liquid AI reposted this

    View organization page for AMD


    Congratulations, Liquid AI! AMD is proud to support Liquid AI’s innovative approach to AI models and its next phase of growth.

    View organization page for Liquid AI


    We raised a $250M Series A led by AMD Ventures to scale Liquid Foundation Models and accelerate their deployment on-device and at enterprises. At Liquid, our mission is to build the most capable and efficient AI system at every scale. Our CEO, Ramin Hasani, says “we are proud that our new industry-leading partners trust our mission; together, we plan to unlock sovereign AI experiences for businesses and users.” “Liquid AI’s unique approach to developing efficient AI models will push the boundaries of AI, making it far more accessible,” said Mathew Hein, Senior Vice President and Chief Strategy Officer of Corporate Development at AMD. “We are thrilled to collaborate with Liquid AI to train and deploy their AI models on AMD Instinct GPUs and support their growth through this latest funding round.” Full announcement: https://lnkd.in/dt5Qr-Nn To get hands-on with LFMs and deploy them within your enterprise, reach out to our team: https://lnkd.in/dTbTwiyN To join Liquid AI’s mission, check out open positions across research, engineering, go-to-market, and operations: https://lnkd.in/dwpfpwyt #liquidai #amd #seriesa #genai

    • Liquid AI x AMD
  • New Liquid AI research: STAR (Evolutionary Synthesis of Tailored Architectures). At Liquid AI, we design foundation models with two macro-objectives: maximize quality and efficiency. Balancing the two is challenging, so we built a new algorithm, STAR, to make progress toward this goal. Read more about it here: https://lnkd.in/drM_4YGk We first developed a new design theory for the computational units of modern AI systems. We then used it to devise an efficient encoding into architecture genomes, and applied evolutionary algorithms to discover hundreds of new architecture designs. STAR's capabilities have far-reaching implications: thanks to its ability to optimize any mixture of metrics, combined with the versatility of our design space, we are seeing continuous improvements in both the diversity and quality of synthesized designs. The design space for model architectures is vast and versatile. With STAR, we can optimize large populations of new architectures at scale to obtain the highest-performing models that satisfy given computational requirements, such as inference cache size. Read the full paper here: https://lnkd.in/dj-TTUCD At Liquid AI, we are building hardware-aware, best-in-class, and efficient AI systems at every scale, with a design theory for models grounded in dynamical systems, signal processing, and numerical linear algebra. If this resonates with you, consider joining us: https://lnkd.in/dwpfpwyt Amazing work by Armin Thomas, Rom Parnichkun, Alexander Amini, Michael Poli, Stefano Massaroli, and the entire Liquid AI team!
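The evolutionary loop behind this kind of architecture search can be sketched in miniature. The following is NOT the published STAR algorithm, just a generic evolutionary search under invented stand-ins: genomes are lists of block choices, and the toy fitness trades off a "quality" proxy against an "inference cache" proxy, mirroring the quality/efficiency mixture of metrics the post describes.

```python
# Minimal evolutionary architecture search sketch (hypothetical, in the
# spirit of STAR but not its actual design space, genome encoding, or
# objectives): mutate a population of genomes and keep the fittest.
import random

BLOCKS = ["attention", "conv", "recurrent"]  # invented unit types

def random_genome(n_layers=6):
    return [random.choice(BLOCKS) for _ in range(n_layers)]

def fitness(genome):
    # Toy objective: reward block-type diversity (quality stand-in)
    # and penalize attention blocks (inference-cache-cost stand-in).
    quality = len(set(genome))
    cache_cost = genome.count("attention")
    return quality - 0.3 * cache_cost

def mutate(genome):
    g = genome.copy()
    g[random.randrange(len(g))] = random.choice(BLOCKS)
    return g

def evolve(pop_size=20, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Swapping the toy `fitness` for real measurements (benchmark quality, cache size, latency) is what turns a loop like this into a multi-metric architecture optimizer.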

  • Liquid AI reposted this

    View profile for Maxime Labonne

    Head of Post-Training @ Liquid AI

    This is the proudest release of my career :) At Liquid AI, we're launching three LLMs (1B, 3B, 40B MoE) with SOTA performance, based on a custom architecture. Minimal memory footprint & efficient inference bring long context tasks to edge devices for the first time! 📊 Performance We optimized LFMs to maximize knowledge capacity and multi-step reasoning. As a result, our 1B and 3B models significantly outperform transformer-based models in various benchmarks. And it scales: our 40B MoE (12B activated) is competitive with much bigger dense or MoE models. 🧠 Memory footprint The LFM architecture is also super memory efficient. While the KV cache in transformer-based LLMs explodes with long contexts, we keep it minimal, even with 1M tokens. This unlocks new applications, like document and book analysis with RAG, directly in your browser or on your phone. 🪟 Context window In this preview release, we focused on delivering the best-in-class 32k context window. These results are extremely promising, but we want to expand it to very, very long contexts. Here are our RULER scores (https://lnkd.in/e3xSX3MK) for LFM-3B ↓ 💧 LFM architecture The LFM architecture opens a new design space for foundation models. This is not restricted to language, but can be applied to other modalities: audio, time series, images, etc. It can also be optimized for specific platforms, like Apple, AMD, Qualcomm, and Cerebras. 💬 Feedback Please note that we're a (very) small team and this is only a preview release. 👀 Things are not perfect, but we'd love to get your feedback and identify our strengths and weaknesses. We're dedicated to improving and scaling LFMs to finally challenge the GPT architecture. 🥼 Open science We're not open-sourcing these models at the moment, but we want to contribute to the community by openly publishing our findings, methods, and interesting artifacts. 
We'll start by publishing scientific blog posts about LFMs, leading up to our product launch event on October 23, 2024. ✍️ Try LFMs! You can test LFMs today using the following links: Liquid AI Playground: https://lnkd.in/dSAnha9k Lambda: https://lnkd.in/dQFk_vpE Perplexity: https://lnkd.in/d4uubMj8 If you're interested, find more information in our blog post: https://lnkd.in/enxMjVez
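The KV-cache claim above is easy to make concrete with back-of-the-envelope arithmetic. The dimensions below are assumptions for illustration (a generic fp16 transformer, not LFM internals or any specific model): a transformer caches keys and values for every past token at every layer, so its cache grows linearly with context, while a fixed-size recurrent state does not.

```python
# Illustrative KV-cache sizing for an assumed transformer config
# (32 layers, 8 KV heads, head_dim 128, fp16). Not LFM internals.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2):
    # Keys + values (factor of 2) cached per token, per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

for ctx in (4_096, 32_768, 1_000_000):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx:>9} tokens -> {gib:7.2f} GiB of KV cache")
```

With these assumed dimensions the cache is 128 KiB per token: about 0.5 GiB at a 4k context, 4 GiB at 32k, and over 120 GiB at 1M tokens, which is why a constant-memory architecture matters for long contexts on edge devices.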


Similar pages

Browse jobs

Funding

Liquid AI: 3 total rounds

Last Round

Series A

US$ 250.0M

See more info on Crunchbase