Today I had the honour of passing by Santa 🎅 Happy bank holidays to all of you who celebrate 🎄
About us
We are your technology partner specializing in open-source technologies, and we will assist you in implementing online projects!
- Website
- https://k8s.lt
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Headquarters
- Klaipeda
- Type
- Privately Held
Locations
- Primary
- Klaipeda, LT
Updates
-
Meet Willow, Google's state-of-the-art quantum chip 🤯 Willow is not just a chip, it's a catalyst for a new era of computing. Discover how this groundbreaking technology can revolutionize your business, solve complex problems, and unlock unprecedented opportunities. Join us to explore:
- First, Willow can reduce errors exponentially as it scales up with more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years.
- Second, Willow performed a standard benchmark computation in under five minutes that would take one of today's fastest supercomputers 10 septillion (10²⁵) years, a number that vastly exceeds the age of the Universe.
Read the full post: https://lnkd.in/ewZkn5Br
-
EuroLLM-9B: Europe's most advanced multilingual model! 🌍🤖
Key highlights:
- Supports 35 languages: covers all 24 official EU languages plus major global ones.
- Top performance: outperforms other European LLMs; rivals Gemma-2-9B & Mistral-7B.
- 4 trillion tokens: trained on high-quality multilingual data.
Pre-trained model: https://lnkd.in/dsRv5qmX
Instruct model: https://lnkd.in/dt-H6wgX
-
Small changes can bring big benefits 🤯 My client's customers are construction and factory companies that need to supply their employees with work clothes, ordering new garments every month. Today, a small change was deployed: replacing the "dropdown" with a "table" to make selecting and adding variant products to the cart simpler and faster 🚀
-
AI21 Labs just released Jamba 1.5 Mini & Large: MoE, permissively licensed, 256K context, multilingual, JSON mode & tool use.
- Jamba 1.5 Large: 94B active / 398B total parameters
- Jamba 1.5 Mini: 12B active / 52B total parameters
- Architecture: joint attention with Mamba + Mixture of Experts (SSM + MoE)
- Context length: 256K
- Languages supported: English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic, and Hebrew
Weights: https://lnkd.in/dXuQj_m9
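The "94B active / 398B total" split comes from the Mixture-of-Experts design: a router scores every expert for each token but only runs the top-k, so most parameters sit idle on any given forward pass. Here is a toy sketch of top-k routing in plain Python (a hypothetical minimal router for illustration, not AI21's actual implementation):

```python
import math
import random

random.seed(0)

DIM, N_EXPERTS, TOP_K = 4, 8, 2

# Toy "experts": each is a DIM x DIM weight matrix (nested lists).
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(N_EXPERTS)]
# Router: one scoring vector per expert.
router = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(token):
    # Score the token against every expert, but run only the TOP_K best.
    scores = [sum(w * x for w, x in zip(router[e], token)) for e in range(N_EXPERTS)]
    top = sorted(range(N_EXPERTS), key=lambda e: scores[e], reverse=True)[:TOP_K]
    gates = softmax([scores[e] for e in top])  # renormalize over selected experts
    out = [0.0] * DIM
    for gate, e in zip(gates, top):
        y = [sum(experts[e][i][j] * token[j] for j in range(DIM)) for i in range(DIM)]
        out = [o + gate * yi for o, yi in zip(out, y)]
    return out, top

token = [0.5, -1.0, 0.3, 0.8]
out, used = moe_layer(token)
print(f"experts run for this token: {sorted(used)} of {N_EXPERTS}")
```

Only TOP_K of the N_EXPERTS matrices are multiplied per token, which is why the "active" parameter count is a fraction of the total.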
-
FLUX.1 is a state-of-the-art 12-billion-parameter rectified flow transformer that generates images from text descriptions, created by Black Forest Labs (a team of ex-Stability AI researchers who were among the original creators of Stable Diffusion) ... Demo here: https://lnkd.in/dQSXrRpr
-
Context length winners 😎
Context length, the amount of preceding text an AI considers, is crucial for generating coherent and relevant content. In long-form content like articles or essays, a longer context helps maintain a consistent narrative, ensuring each sentence logically follows the previous ones. This is vital for creating unified and comprehensive pieces.
In conversational AI, such as chatbots, context length enhances user experience by remembering past interactions and providing contextually appropriate responses. A short context length can lead to repetitive or irrelevant answers, frustrating users and diminishing the AI's perceived intelligence.
Longer context lengths also enable AI to understand complex and nuanced information, essential for tasks like summarization, translation, and content recommendation. For instance, in machine translation, a broader context improves accuracy and fluency.
-
Llama 3.1 405B matches or beats OpenAI's GPT-4o across many text benchmarks! What's new and improved in 3.1:
- 8B, 70B & 405B versions as Instruct and Base, with 128K context
- Multilingual: supports 8 languages, including English, German, French, and more
- Trained on >15T tokens & fine-tuned on 25M human and synthetic samples
- Commercially friendly license that allows using model outputs to improve other LLMs
- Quantized versions in FP8, AWQ, and GPTQ for efficient inference
- 8B & 70B improved coding and instruction following by up to 12%
- Supports tool use and function calling
Blog: https://lnkd.in/g9yTBFnv
Model collection: https://lnkd.in/g_bVRpmp