At Comet, we're thrilled to announce that our new course, “Prompt Engineering for Vision Models,” is also the first DLAI course to offer Spanish subtitles! You're probably familiar with prompting LLMs via text, but prompts in general aren't limited to text, nor are they used only with LLMs. In this course you'll dive into advanced techniques for prompting vision models such as SAM, OWL-ViT, and Stable Diffusion, and learn how to personalize image generation with Dreambooth. Enroll for free:
Benjamin Elkrieff
Technology, Information and Internet
MLOps expert with 6 years of experience helping teams build their ML stack
About us
- Industry
- Technology, Information and Internet
- Company size
- 1 employee
- Type
- Self-Owned
Updates
-
As large language models continue to grow and access to compute becomes ever more competitive, optimization techniques are more important than ever. Speculative decoding is an inference optimization technique that makes educated guesses about future tokens while generating the current token, all within a single forward pass. Learn more in “A Hitchhiker’s Guide to Speculative Decoding” from the PyTorch blog:
A Hitchhiker's Guide to Speculative Decoding
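If you want to try the idea quickly, here is a minimal sketch using the assisted-generation API in Hugging Face transformers; the checkpoints and prompt below are assumptions on my part, and the PyTorch post covers the technique and its implementations in much more depth.
```python
# Minimal sketch of speculative (assisted) decoding with Hugging Face transformers.
# Checkpoints and prompt are assumptions, not taken from the linked post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
target = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to(device)
# A much smaller draft model from the same family proposes candidate tokens;
# the larger target model verifies them in a single forward pass.
draft = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)

inputs = tokenizer("Speculative decoding speeds up inference by", return_tensors="pt").to(device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```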
-
Comet is very excited to announce our new DeepLearning.AI short course: Prompt Engineering for Vision Models! Learn the fundamentals of using natural language, other images, pixel coordinates, segmentation masks, and bounding boxes to prompt leading foundational computer vision models. Enroll today for free:
Andrew Ng on LinkedIn: In Prompt Engineering for Vision Models, taught by Abby Morgan, Jacques...
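As a taste of what pixel-coordinate prompting looks like in code, here is a minimal sketch of point-prompting SAM through the Hugging Face transformers port of the model; the checkpoint, sample image, and point coordinates are assumptions, not material from the course.
```python
# Minimal sketch of prompting SAM with a single (x, y) pixel coordinate.
# Checkpoint, image URL, and the example point are assumptions.
import requests
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base")

# Sample COCO image commonly used in the transformers docs (two cats on a couch).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
input_points = [[[450, 300]]]  # one point prompt: batch of 1 image, 1 point

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize the low-resolution predicted masks back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape)  # candidate segmentation masks for the prompted point
```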
-
Looking forward to the presentation from John Snow Labs at Convergence Conference 2024!
Comet on LinkedIn: #sparknlp #generativeai #mlops #ai #technology #innovation
linkedin.com
-
⏰ Just 1 week to go until Convergence Conference! 👩🚀 Don't miss this opportunity to connect for free with an inspiring community and leading voices in #MachineLearning and #ArtificialIntelligence. 🚀 Dive into 15+ sessions and workshops showcasing real-world #ML and #AI innovations. Plus, access all content on demand for two weeks post-event. 👉 Secure your spot now: #MLOps #LLMOps #Technology #Innovation Comet
Convergence
comet.com
-
A feature pipeline is responsible for taking raw data as input, processing it into features, and storing them in a feature store, from which the training and inference pipelines will access them. In the 4th lesson of the “Build Your LLM Twin” series from Decoding ML, Paul Iusztin demonstrates how to build a feature pipeline using a MongoDB warehouse, the Qdrant vector database, and the Bytewax streaming engine. Read the full article:
Streaming Pipelines for Fine-tuning LLMs and RAG in Real-Time
comet.com
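For a feel of the core transformation, here is a simplified, in-memory sketch of the chunk-embed-store step against Qdrant; the article's actual pipeline streams MongoDB changes through Bytewax, and the embedding model, chunking, and collection name below are assumptions.
```python
# Simplified sketch of a feature pipeline's core: chunk raw text, embed the
# chunks, and upsert them into a Qdrant "feature store" for later retrieval.
# Model, chunk size, and collection name are assumptions.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
client = QdrantClient(":memory:")  # swap for a real Qdrant URL in production

client.create_collection(
    collection_name="llm_twin_features",
    vectors_config=VectorParams(
        size=encoder.get_sentence_embedding_dimension(), distance=Distance.COSINE
    ),
)

def ingest(doc_id: int, raw_text: str, chunk_size: int = 500) -> None:
    """Chunk a raw document, embed each chunk, and upsert it into Qdrant."""
    chunks = [raw_text[i : i + chunk_size] for i in range(0, len(raw_text), chunk_size)]
    vectors = encoder.encode(chunks)
    client.upsert(
        collection_name="llm_twin_features",
        points=[
            PointStruct(id=doc_id * 10_000 + i, vector=v.tolist(), payload={"text": c})
            for i, (c, v) in enumerate(zip(chunks, vectors))
        ],
    )

ingest(1, "Raw article text pulled from the data warehouse goes here...")
```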
-
Ray is an #OpenSource project that makes it simple to scale any compute-intensive #Python workload — from #DeepLearning to production model serving. With a rich set of libraries and integrations built on a flexible distributed execution framework, Ray makes distributed computing easy and accessible and abstracts away the complexity of setting up a distributed training system. And Comet integrates fully with Ray Train and Ray Tune. Check out the Colabs to get started now: Comet + Ray Train Colab: https://lnkd.in/dJYjbNpX Comet + Ray Tune Colab: https://lnkd.in/dMixxGu6
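As a rough starting point before opening the Colabs, here is a minimal Ray Tune sketch with the Comet logger callback attached; the callback's import path has moved between Ray versions and the toy objective is an assumption, so follow the linked Colabs for the exact setup.
```python
# Minimal sketch of a Ray Tune search logging to Comet. The callback import
# path varies across Ray versions, and comet_ml credentials must be configured;
# the objective function and search space are placeholder assumptions.
from ray import tune
from ray.air.integrations.comet import CometLoggerCallback  # location differs by Ray version
from ray.train import RunConfig  # older Ray versions expose this as ray.air.RunConfig

def objective(config):
    # Stand-in for a real training loop: report a score derived from the config.
    score = (config["lr"] - 0.01) ** 2 + config["layers"] * 0.001
    return {"score": score}

tuner = tune.Tuner(
    objective,
    param_space={"lr": tune.loguniform(1e-4, 1e-1), "layers": tune.choice([2, 4, 8])},
    tune_config=tune.TuneConfig(num_samples=10, metric="score", mode="min"),
    run_config=RunConfig(callbacks=[CometLoggerCallback(project_name="ray-tune-demo")]),
)
best = tuner.fit().get_best_result()
print(best.config)
```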
-
💡 Did you know you can use SDXL 1.0 (base + refiner) programmatically for image inpainting? 🎨 What’s image inpainting? 🎭 Inpainting is the process of filling in data within a specific region of an image; sometimes that data is missing, and sometimes it has been intentionally masked out. 👇 Learn more in this full-code tutorial: #MLOps #ComputerVision Stability AI Comet
Image Inpainting for SDXL 1.0 Base Model + Refiner
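For a sense of what the programmatic flow looks like, here is a minimal inpainting sketch using the diffusers AutoPipelineForInpainting helper; the image and mask paths, prompt, and strength value are assumptions, and the tutorial above walks through the full base + refiner workflow.
```python
# Minimal sketch of SDXL inpainting with diffusers. The local image/mask paths,
# prompt, and strength are assumptions; see the linked tutorial for the full flow.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("room.png").resize((1024, 1024))        # image to edit
mask_image = load_image("room_mask.png").resize((1024, 1024))   # white = region to repaint

result = pipe(
    prompt="a cozy armchair by the window",
    image=init_image,
    mask_image=mask_image,
    strength=0.85,  # how strongly to repaint the masked region
).images[0]
result.save("inpainted.png")
```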
-
#MachineLearning competitions are the perfect sandbox for figuring out what works, and what doesn’t, in the #ML landscape. So what were the most popular uses of #LLMs in competitions last year? Interestingly, the most common use of LLMs wasn’t directly related to the competition problem at all: it was code completion. Other notable uses of LLMs included: - Idea generation - Synthetic data generation - Classification Read the full report from ML Contests (sponsored in part by Comet) for more detail: #MLOps #LLMOps
The State of Competitive Machine Learning | ML Contests
mlcontests.com