Our latest course, LLM Apps: Evaluation, is now LIVE! 🎉

In this code-first course, you’ll learn:
• Best practices for evaluation metrics, datasets, and human annotations
• Lessons on building and aligning LLM judges
• Industry expertise from Weights & Biases, Google, and All Hands AI instructors

📚 Course Highlights:
• Evaluation fundamentals & metrics
• Programmatic evaluation implementation
• LLM Judges: design & alignment
• Google Case Study: Imagen, Veo 2, and tool use
• OpenHands Case Study: evaluating agents

Instructors:
Ayush Thakur - AI Engineer, Weights & Biases
Anish Shah - AI Engineer, Weights & Biases
Paige Bailey - AI Developer Relations Lead, Google
Graham Neubig - Co-Founder, All Hands AI

🎓 Start learning now: https://lnkd.in/gCHffA24
Weights & Biases’ Post
I'm excited to share that I'm working on a technical guide with O'Reilly and open-source framework provider deepset on Retrieval Augmented Generation in Production 🚀

The book is scheduled for late 2024; an early draft of Chapter 1 is available now: https://lnkd.in/ekBqFik9

While we've seen many RAG tutorials, it's often unclear how to piece together the key building blocks into LLM-based applications that benefit customers and go beyond POCs.

This book covers:
🔍 How do you integrate Generative AI into industry settings?
💡 How does RAG translate product ideas into reality?
📊 How do you evaluate and optimize LLM-based applications?
⚙️ How do you build scalable systems?
🛡️ How do you prioritize safety in LLM system design?

I’ve learned a lot during the research, writing, and discussions with the awesome folks from Haystack. I hope you’ll find this useful. Feedback is much appreciated!

#AI #MachineLearning #RAG #LLMs
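As a rough illustration of the retrieve-then-generate loop at the heart of RAG (this is my own minimal sketch, not from the book: the documents, bag-of-words scoring, and prompt template are made-up stand-ins; a production system would use learned embeddings and a framework like Haystack):

```python
from collections import Counter
import math
import re

def tokenize(text: str) -> Counter:
    """Bag-of-words term counts; a stand-in for a real embedding model."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = tokenize(query)
    return sorted(docs, key=lambda d: cosine(qv, tokenize(d)), reverse=True)[:k]

# Toy corpus and query, purely illustrative.
docs = [
    "Refund requests are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "To request a refund, open a support ticket.",
]
context = retrieve("how do I get a refund", docs)

# Stuff the retrieved context into the prompt sent to the LLM.
prompt = (
    "Answer using only this context:\n"
    + "\n".join(context)
    + "\n\nQuestion: how do I get a refund?"
)
```

The generation step (sending `prompt` to a model) is omitted; the point is that retrieval quality, not the LLM call, is usually where production RAG systems are won or lost.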
Struggling to stay ahead in the fast-paced world of AI? We’ve got you covered! 🙌 Our LLM bootcamp is your gateway to mastering the tools that are shaping the future. The best part? All you need is the drive to learn, a passion for problem-solving, and a desire to build practical, actionable skills to excel! 💡

Secure your spot today: https://hubs.la/Q02V8Q3V0

What You’ll Learn:
🔹 LLM and generative AI landscape
🔹 Barriers to enterprise adoption of generative AI
🔹 Risks and challenges in building LLM applications
🔹 Embeddings, attention mechanism, and transformer architecture
🔹 A practical introduction to vector databases
🔹 Building LLM applications with LangChain
🔹 Fine-tuning and deploying large language models
🔹 LLM observability and monitoring
🔹 Evaluation of LLM applications
🔹 Guardrails and responsible AI
🔹 Challenges in building RAG applications
🔹 Domain and task-specific LLMs
🔹 Productionizing LLM applications
🔹 Capstone project: building and deploying an LLM application using GitHub and Streamlit

Whether you're diving into AI for the first time or advancing your expertise, this bootcamp covers it all!

#LLMDojo #LLMs #MachineLearning #AI #GenerativeAI
I'm very picky about who I learn LLM engineering from. Here are the top 3 creators I learn from on each platform.

YouTube:
1. Sam Witteveen: latest AI news with practical usage
2. AI Jason: step-by-step guides for advanced LLM systems
3. Dave Ebbelaar: founder of Data Lumina; production RAG tutorials

Twitter:
1. Jason Liu: creator of Instructor. His famous quote "Pydantic is all you need" inspired OpenAI to come out with Structured Outputs.
2. NutLope: of Together AI. Has released multiple AI apps with over 200k users, and the best part is, they're all open source!
3. Alex Albert: Head of Claude Relations, shares the best approaches to building LLM systems.
How do you get the best results from LLMs?

After months of hands-on work, I've found that simpler approaches often outperform complex ones. Let's break down the main strategies, from basic to advanced:

Basic Prompting: Just tell the LLM what you want with clear, straightforward instructions. Quick to set up, and often all you really need. I've seen this work great for a lot of tasks.

Few-Shot Learning: Show the model 2-3 examples of what good output looks like. Like training a new team member by demonstrating the task first. This small extra step can make a big difference in output quality.

RAG (Retrieval Augmented Generation): Give your LLM access to your specific data or documents. Great for when you need accurate, up-to-date responses based on your company's information.

Fine-Tuning: Teaching the model your specific domain knowledge and requirements. It's more work, but sometimes worth it when you need highly specialized behavior from the LLM.

Pretraining: Building your own LLM from scratch. Unless you've got Google-sized compute power and a few million to spare, you might want to stick to the options above! 😄

What I've Learned:
- Start with basic prompting - it's more powerful than most teams expect
- Add few-shot examples when you need better accuracy
- Save fine-tuning for when you really need it
- Don't overlook RAG - feeding relevant context to your model can dramatically improve results

You can solve most use cases with good prompting and RAG. No complex fine-tuning needed. Remember, less is more.

#LLM #Development #AI
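To make the few-shot step concrete, here is a minimal sketch of assembling a chat-style few-shot prompt; the sentiment task, example pairs, and helper name are my own illustrative assumptions, not from the post:

```python
def build_few_shot_messages(task_instruction, examples, user_input):
    """Assemble a chat message list: instruction, worked examples, then the query."""
    messages = [{"role": "system", "content": task_instruction}]
    # Each demonstration is a (user input, ideal assistant output) pair.
    for demo_input, demo_output in examples:
        messages.append({"role": "user", "content": demo_input})
        messages.append({"role": "assistant", "content": demo_output})
    # The real query goes last, so the model completes the pattern.
    messages.append({"role": "user", "content": user_input})
    return messages

# Two demonstrations, as the post suggests (2-3 is usually enough).
examples = [
    ("Sentiment: 'The onboarding flow was painless.'", "positive"),
    ("Sentiment: 'Support never replied to my ticket.'", "negative"),
]
messages = build_few_shot_messages(
    "Classify the sentiment of each message as 'positive' or 'negative'.",
    examples,
    "Sentiment: 'Setup took five minutes and just worked.'",
)
```

The resulting `messages` list can be passed to any chat-completion API; the demonstrations prime the model to answer in the same terse format.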
📖 Hey teachers! ✏️ ⌚️⏳ Wanna get your time back? ⏰ Join our webinar and discover the best AI tool to use alongside your students. From feedback to grading to data-driven next steps, all in under 10 minutes. This tool changed everything about my teaching! https://lnkd.in/gi8sjqmX
🌟 Exciting News for All Aspiring Data Enthusiasts and AI Innovators! 🌟

Google has just released the latest version of their FREE Machine Learning Crash Course, now upgraded with lessons on AI! This is a golden opportunity for anyone looking to dive into the world of AI, whether you're building AI models, leveraging them for business, or just curious about how it all works.

Here’s what’s new and exciting in this version:
✅ Deep Dive into AI: includes lessons on cutting-edge concepts like Large Language Models (LLMs) and AutoML.
✅ Expanded Data Handling: learn best practices for working with data and building Responsible AI systems.
✅ Interactive Learning: bite-sized video explanations and hands-on exercises to make learning engaging and effective.
✅ Skill-Building Challenges: 130+ practice questions to test your understanding and sharpen your skills.
✅ Achievement Badges: track your progress and show off your new knowledge! 🏆

Whether you're a beginner or already in the tech world, this course is your gateway to understanding the future of AI and Machine Learning. 🎓

🔗 Get started today: https://lnkd.in/gMUpPjc

At Bankopedia.pk, we believe in empowering our community with tools and knowledge to stay ahead in the evolving world of technology. Don’t miss this chance to enhance your skills and prepare for the AI-driven future! Let’s learn, grow, and lead together. 🚀
Just finished the course “Google Gemini for Developers” by Lynn Langit! Check it out: https://lnkd.in/eZFZ5uYV #generativeai

Nice to see how Google is making building custom LLM apps accessible with Vertex AI Agent Builder; more info here: https://lnkd.in/eK4Pdzyx
Thanks to MIT for extraordinary professional learning; I now feel like I'm living in the future. It's striking how proficiently our machines off-load code and meet us so capably, mind to machine, answering all of my toddler-style ‘why’ questions as well as my ‘how’s in three seconds, end to end. But my human experience still provides the guideposts of responsible AI/ML.
✨ gemini-exp-1206 released! (and it's FANTASTIC)

Here's the TL;DR:
- Better than o1-preview on coding and mathematics benchmarks, and FREE through Google's AI Studio and their API (albeit with some rate limits)
- 2 million token context window (supports text, image, audio, and video inputs)
- Based on my testing (anecdotal), this model is much better than Claude 3.5 Sonnet - my 'daily driver' - for complex coding tasks, in no small part due to that gargantuan 2 million token context window
- Within Google AI Studio, I would grade the inference speed as 'nothing special, but perfectly usable'

The moat has dried up, and I'm here for it. In my mind, this points to a few things:
1. Wouldn't be surprised if Anthropic releases Claude 3.5 Opus soon in response
2. Gemini 2 on the way?

Maybe we misjudged Google's AI prowess. It seems abundantly clear they've doubled down and are here to win. I haven't been this consistently pumped for Gen AI since GPT-3 released - just a nerd with a bucket of popcorn watching it all unfold in real time.

#genai #ai #gemini #googleai #claude #openai #o1 #llm #machinelearning
Big news! I am thrilled to launch my new course, "LLMs in Production," on Uplimit. In this course, you will play the role of a founding engineer at a GenAI startup building a Text2SQL product.

Here’s what you’ll master:
- Evaluating LLMs for top-notch SQL code generation
- Setting up guardrails to ensure more reliable SQL code execution
- Accelerating performance with semantic caching
- Crafting defensive UX to navigate and communicate app limits
- Implementing monitoring & observability to track your LLM's performance diligently

If this interests you, I have added a link to the course in the comments!
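To give a flavor of the guardrails topic, here is a minimal sketch of a pre-execution check for generated SQL; the rules, regex, and function name are my own illustrative assumptions, not the course's implementation:

```python
import re

# Keywords that indicate a write or DDL statement; a Text2SQL app that
# only answers questions should never execute these.
FORBIDDEN = re.compile(
    r"\b(drop|delete|update|insert|alter|truncate|grant)\b", re.IGNORECASE
)

def is_safe_select(sql: str) -> bool:
    """Allow only a single read-only SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject multiple statements smuggled into one string
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stripped)
```

A guardrail like this runs between the LLM's output and the database, so a hallucinated or injected `DROP TABLE` never reaches execution; real systems typically parse the SQL with a proper parser rather than rely on regexes alone.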
🔥 Ready to take your LLM skills to the next level? Our advanced course, "LLMs in Production," led by Yudhiesh Ravindranath, dives deep into deploying and managing large language models in real-world scenarios. Gain practical skills in performance evaluation, cost efficiency, quality control, UI design, and monitoring. Enroll now: #LLMs #AdvancedCourse #AI
Director Applied AI @ Weights & Biases: Congrats team!