I wanted to give you a heads-up on an exciting new tutorial I’ve got in the works. This video is all about building a full-stack SaaS application that pulls in YouTube comments, processes them with CrewAI Enterprise, and generates actionable content ideas. If you’re interested in building full-stack AI applications, this is for you!
🛠️ Here’s What You’ll Learn
In this tutorial, I’ll guide you through each part of the process of creating this full-stack application. Here’s a breakdown:
1️⃣ Building the Frontend and Backend: We’ll use Next.js to set up the app and deploy it on Vercel. Plus, you’ll get hands-on experience connecting to a Neon Postgres database to manage data.
2️⃣ Integrating with CrewAI Enterprise: Learn how to harness CrewAI’s enterprise features to analyze YouTube comments, filter out casual messages, and focus on meaningful feedback. You’ll see how to create a system that automates data analysis and transforms raw comments into structured, actionable insights.
3️⃣ Generating Video Titles and Descriptions: We’ll configure CrewAI to turn filtered comments into potential video titles and descriptions. You’ll build a workflow that streamlines idea generation and content planning, using CrewAI’s advanced capabilities.
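To make the filtering step in part 2 concrete, here is a toy sketch of it. In the tutorial the real filtering is an LLM-powered CrewAI task; the keyword hints below are purely illustrative stand-ins, not the tutorial's actual logic.

```python
# Toy stand-in for the comment-filtering step. In the tutorial this is
# an LLM-powered CrewAI task; this heuristic just illustrates the shape
# of the input and output.

REQUEST_HINTS = ("can you", "how do", "how to", "please", "could you", "?")

def filter_actionable(comments: list[str]) -> list[str]:
    """Keep comments that look like questions or feature requests."""
    actionable = []
    for comment in comments:
        text = comment.lower()
        if any(hint in text for hint in REQUEST_HINTS):
            actionable.append(comment)
    return actionable

comments = [
    "Thank you, this is awesome!",
    "Can you please share more on how to use flows?",
    "Great video",
    "How do I use RAG with this tool?",
]
print(filter_actionable(comments))
```

The point is the shape of the data flow: a flat list of raw comments in, a shorter list of actionable ones out, which then feeds the title-generation step.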
💡 Why This Tutorial is Worth Your Time
This tutorial doesn’t just cover building one app—it teaches you how to apply the synthesize pattern for data processing, a core skill in AI development. Here’s why this matters:
✅ Real-World Adaptability: The synthesize pattern goes beyond YouTube. After learning it, you can apply it to dozens of other applications where large datasets need to be turned into insights. Imagine using this pattern for customer feedback, product analysis, or trend monitoring—there are endless opportunities to build your own AI-powered apps!
✅ Hands-On Full-Stack Skills: Get practical experience with tools like Next.js, Vercel, and Neon, and learn how to bring everything together into a seamless app.
📅 How to Catch the Release
I’ll be releasing this video on my YouTube channel. Head over there, subscribe, and turn on notifications to be the first to know when it drops! I’ll also have links to my YouTube, my free Skool community, and the Pro community down in the comments below.
Drop any questions you have, and see you soon! 👋
What's up everybody, hope you're having an awesome day. In this quick video I wanted to announce a new video that's coming out on YouTube later this week, and give you a sneak peek of exactly what I'm going to be building and releasing for you guys.

So in a nutshell, I'm going to be creating a new full-stack SaaS application that takes in YouTube comments, uses CrewAI to synthesize all of those comments, and then generates new video suggestions. Here's exactly what's going to go into building this application and everything you're going to learn. First off, we're going to be using Next.js to build the entire application, and we're going to be deploying it to Vercel. For the database, we're going to be using Neon as a Postgres database. And then for all the magic, we're going to be using CrewAI to synthesize all of those comments and extract actionable video suggestions that we should go build.

So let's dive a little deeper into what's actually going to be happening in the core loop. First, for whatever channel we add to our web app, we're going to scrape all the videos and comments for that YouTube channel. On my channel, it would pull down all of my videos, and for each video we're going to grab all of its comments. Then we're going to package all of that information up and send it to CrewAI, and this is where the core magic happens. First, we're going to filter out the non-actionable comments, all the "thank you, this is awesome" messages, and focus specifically on comments like, "Hey, this was awesome, can you please share some more information on how to use the flows?"
Or, "Can you expand a little more on how I could use RAG with this tool?" Those types of comments, the questions, the actionable ones, we're going to pull all of those out. Then, in another task inside of CrewAI, and all of this is going to be happening in CrewAI Enterprise, running in the cloud completely for free, we're going to turn those comments into actionable titles and descriptions so we can start generating ideas. Once we've gone from a comment to a title and description for a new video we could create, we're going to use CrewAI once again to research all the existing videos for that specific title and description. That way we can see what's already out there on YouTube, what's working and what's not, so that whenever we do go create a video, we already know it's going to be a hard-hitting one. And that's exactly what happens in the fourth step: once we've done the research, we get to the final part, where we score that idea from one to ten. Is this a great idea to pursue or not? And all of this runs in CrewAI Enterprise in the cloud.

Then we get back all the generated video ideas with their scores, titles, and descriptions. As a creator, that saves me a bunch of time: I don't have to go research what's trending, what's not, or what people are saying. The AI does all the heavy lifting for me and says, "Hey, here are five great video topics you should probably explore, because these ideas are already in demand from your viewers." So that's the core loop. I'm pumped to be building this, and I hope you guys can see that this would actually be a pretty cool project to build.
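The data that comes back from that core loop could be sketched like this. The field names and the ranking helper are my guesses at the shape of the result, not the tutorial's actual schema; the real scoring happens inside the CrewAI Enterprise run.

```python
# Sketch of the data the app gets back from the CrewAI run and how a
# final "top ideas" list could be assembled. Field names are guesses,
# not the tutorial's actual schema.
from dataclasses import dataclass

@dataclass
class VideoIdea:
    title: str
    description: str
    score: int  # 1-10, assigned by the scoring task in the crew

def top_ideas(ideas: list[VideoIdea], n: int = 5) -> list[VideoIdea]:
    """Return the n highest-scoring ideas, best first."""
    return sorted(ideas, key=lambda i: i.score, reverse=True)[:n]

ideas = [
    VideoIdea("CrewAI Flows Deep Dive", "Answering the top flow questions", 9),
    VideoIdea("RAG with CrewAI", "Using retrieval inside a crew", 8),
    VideoIdea("Channel Q&A", "Misc viewer questions", 4),
]
print([i.title for i in top_ideas(ideas, n=2)])
```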
The main thing I want to talk about, and why I think this video is so important, is that it follows the synthesize pattern. So in this video you're not just going to learn how to go from comments to YouTube video ideas. The core pattern you're going to learn, and be able to apply in a bunch of different niches, is how to synthesize large amounts of data: pass it over to CrewAI, and get actionable insights out of it.

For example, here are a few ways you could take this video and apply it elsewhere. Imagine you're working at a big company and you have a ton of monthly status reports. You could pass those in and pull out actionable insights like, "This project is falling behind, red flag, we need to start working on that." Another example: say you're building a dropshipping app for Facebook, Amazon, or some other platform. You could analyze a ton of products that are selling, along with all their analytics, pass all that raw data over to CrewAI, and set up a crew to pull out actionable insights like, "Based on everything you just passed in, it looks like products A and B are awesome, and you'd do really well if you started selling them."

Hopefully you're starting to see it: these are just two examples of taking large amounts of raw data, passing it over to a crew to churn through everything, and getting back actionable insights you can act on. The biggest thing is we're helping people save a ton of time so they can focus on the high-leverage part, which is acting on those insights. So I'm so pumped.
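The status-report example is the synthesize pattern in miniature: raw records in, a short list of insights out. Here's a toy version where a keyword check stands in for what would really be an LLM-powered crew task; the field names are made up for illustration.

```python
# The synthesize pattern in one function: take raw records, filter to
# the ones that matter, and reduce them to a short list of insights.
# The "behind schedule" keyword check stands in for an LLM task.

def synthesize(reports: list[dict]) -> list[str]:
    """Flag projects whose monthly status report mentions slippage."""
    flags = []
    for report in reports:
        if "behind schedule" in report["status"].lower():
            flags.append(f"Red flag: {report['project']} is falling behind")
    return flags

reports = [
    {"project": "Billing revamp", "status": "On track for Q3"},
    {"project": "Mobile app", "status": "Two sprints behind schedule"},
]
print(synthesize(reports))
```

Swap the input (comments, status reports, product analytics) and the insight you extract, and the same shape covers all the use cases above.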
Let me give you a quick overview of exactly what I'm going to be putting together, just so you can see what we're going to be building in this application. I have a very busy night ahead of me, and a busy Sunday, but I'm going to be cranking through building this out for you guys.

First things first, we're going to be building a pretty straightforward app with a login view, so we can track whoever's logged in and save all the data for that specific user. We're also going to have a video list view. Once a user says inside the app, "Hey, I'd like to listen to these channels," we'll pull down all the videos those creators have made and show them in the video list view. Next, we'll have a video detail view, where we can see all the comments we've scraped. And finally, this is where the magic comes in: the ideas list view. Once we have some new comments, we can click generate, which will package up all the videos and comments and send them over to CrewAI, where it will churn through them and crank out actionable insights, in this case new YouTube videos we could produce. You'll see the title, the score, and a quick description of what you could talk about, and for the research, something like, "Hey, we found four similar videos." Clicking one will open a little modal so you can do a deeper dive into the video idea being suggested to you.

So yeah, I'm so excited to build this for you guys. One thing I did want to mention: everything I just described, the video, the tutorial, and the source code, I'm going to be giving away completely for free on YouTube. Then, for everyone in my AI Developer Accelerator Pro program, I'm going to be making three big changes. Change one: everything I showed you is manual. You're clicking generate, you're clicking scrape. I'll show you guys how to go even deeper and automate everything, so that for anyone who subscribes to our service, at the end of the week we can just say, "Hey, here's what's been happening on YouTube. We've pulled these comments and identified these six videos you should go make." We'll turn this all into an automated process, so every Sunday they open their email inbox and get a nice-looking, well-formatted email, and that email pipeline will also be built using CrewAI. We'll also be adding in Stripe, just so you can see how you'd go about adding a paywall: we'll do one video, maybe five video idea generations, completely for free, but after that, you know, you've got to charge. So that's everything that's going to be happening inside my AI Developer Accelerator program. I've linked all of this down below, but like I said, I'm just so pumped to crank this out for you guys. If you have any questions, please let me know. And yeah, busy week. I've got a caffeinated drink, coffee, and I'm going to crank all this out. Have a great day, and I can't wait to see you later. See ya.
We are truly grateful to be in a generation where top-notch content is at our fingertips, empowering us to upskill ourselves at no cost. Thank you so much for this!
Thanks a lot, Brandon, for all the helpful content you share on YouTube. It's truly amazing! 🎉
If you’re exploring new ideas, I'd love your take on a setup where there’s a central RAG agent that holds all the knowledge, and other agents request this RAG agent for different tasks. I think it could open up some exciting possibilities!
Brandon, You are really doing wonderful community service in building knowledge which helps developers stay relevant to technology dynamics. Thanks a lot!!
Courtesy of n8n, here is an open-source, self-hosted AI starter kit.
This template will bootstrap a fully-featured low-code development environment to build AI applications:
https://lnkd.in/eUujnbBw
You get 4 components:
• A low-code platform with 400+ AI components and integrations
• Ollama, to run your models locally
• A high-performance vector store
• PostgreSQL
If you haven't used n8n before, they offer a visual workflow where you can build AI applications by connecting different native components, APIs, and AI agents. n8n is sponsoring this post.
They have hundreds of templates. For example, here is an autonomous AI crawler that will navigate a website and download any social media profile links:
https://lnkd.in/eFPxu8Tk
Using a starter kit like this one has several benefits:
• You can build applications using local models
• You won't have to pay (because you'll host the model)
• The process is straightforward
• The starter templates are of huge help
• The integrations will let you build almost anything
Anything you need to start is right in the repository.
Just released: Atomic Agents v0.2.1 🎉
I'm excited to share the latest update to our open-source library for AI agent development.
We've put a lot of thought into this release, focusing on making the codebase more intuitive and developer-friendly.
What's new:
• Even more streamlined naming conventions for easier understanding and better maintainability.
• Removed redundant components to simplify the architecture (Keep It Simple, Stupid!)
• Added full test coverage for improved reliability (Yes, you read that correctly, 100% test coverage)
• Included a cool Google Mesop example
• Boosted code quality with better linting using Black & Flake8
If you're into AI development or just curious about building intelligent agents, I'd love for you to check it out: https://lnkd.in/ePeB4K7J or simply install it through pypi by using "pip install atomic-agents"
As always, we really appreciate any feedback or contributions from the community. Your input helps make this project even better!
#OpenSource #AI #SoftwareDevelopment #AgenticAI
API + AI = Upgraded developer life quality?
And that's possible because of...
A new product from Treblle.
Meet Alfred, your new AI assistant created to:
- Make API integrations 10x faster.
- Enhance developer experience by doing the heavy lifting for you.
Setup and integration are done in two steps under 60 seconds:
- Install SDK
- Add 2 lines from the docs (HTML + JS)
By adding Alfred, you'll get benefits like:
- API discovery:
Just ask Alfred what you're looking for, and he'll provide an answer in seconds
- API integration:
Generate code and models in seconds instead of spending hours or days
- API adaptation:
Easily navigate through all your APIs, enhancing adoption and developer experience.
I've played around and was pretty surprised at how AI can help us with our everyday jobs.
(And not taking our jobs!)
So, don't wait any longer.
Be like Batman.
Get your Alfred here: https://shorturl.at/ukdfM
Let me know in the comments what you think! ✍
I’ve been watching the explosion of AI in developer tools lately, and I’m convinced it’s not _all_ hype. Something genuinely new is happening. Companies are rushing to add AI capabilities everywhere—some of it’s messy experimentation, some of it’s already delivering value, and the rest is still shaking out. But the direction is clear: AI is becoming part of the fabric of how folks build software.
Take code generation - GitHub Copilot, Vercel’s V0, Cursor, Codeium’s Windsurf, AWS Q, Sourcegraph Cody - these tools all help write code for you. They’re getting so good, it’s starting to feel commoditized. If one tool doesn’t impress you this month, another one will pop up soon that’s even smarter. It’s wild how fast things are moving.
Beyond codegen, AI is creeping into other parts of the dev stack. Observability tools like Datadog are using AI to pinpoint issues. Documentation tools like Swimm and Docify are auto-generating and maintaining docs. Security and analysis platforms - Snyk, Checkmarx, Semgrep - are trying to detect vulnerabilities and risky patterns with fewer false alarms. Even feature flags and data analytics tools (LaunchDarkly, PopSQL, Hex) are dabbling in AI-powered insights.
But what’s truly interesting is what hasn’t happened yet. We haven’t seen as much AI in the heavy ops side: builds, deployments, hosting. These are the gritty, performance-critical workflows that devs rely on every day. They’re deterministic, stable, and predictable—introducing “intelligence” feels risky. Yet I suspect that over time, AI will break into these areas too. Imagine a system that predicts test failures before running them, reorders build tasks for faster feedback, or orchestrates safer, smarter deployments. That’s where things get really exciting.
https://lnkd.in/eQCDW4Fu
📢 Today, I’m excited to share that I’ve launched KitchenAI on Product Hunt!
KitchenAI is an open-source LLMOps tool I've built to solve a frustrating problem for AI dev teams.
Over the past year of building AI-enabled SaaS applications, I kept hitting the same wall. Going from a Jupyter notebook full of AI RAG techniques to something usable in my app was a nightmare.
Here's the problem:
- Notebooks are great for testing ideas, but they’re not meant for building applications around them.
- I had to manually dissect notebooks, build a proof-of-concept API server, integrate it into my app, and pray it worked.
- The feedback loop was *painfully* long—and most of the time, I canned the project because it didn’t quite fit.
This frustration comes from a gap in roles:
1. Data Scientists/AI Devs want notebooks to experiment with methods and techniques—but it's not their main focus to also create an API for other applications to use.
2. App Developers just want simple APIs to test and integrate quickly to see if it actually enhances their app.
This is where KitchenAI comes in. KitchenAI bridges this gap by transforming your AI Jupyter notebooks into a production-ready API server in minutes.
But why??
- Shorter Development Cycles
Test, iterate, and deploy AI techniques faster and cut the feedback loop in half.
- Vendor and Framework Agnostic
Use the libraries you’re comfortable with, no lock-ins.
- Plugin Architecture
Extend functionality with plugins for evaluation frameworks, observability, prompt management, and more.
- Open Source and Local-First
Built on trusted technologies like Django, so you stay in control—no 3rd-party dependencies required.
- Docker-Ready
Share your API server as a lightweight container for easy collaboration.
We’ve released KitchenAI as an Apache-licensed open-source tool, so anyone can use it.
❗ Up next: a managed cloud version with deeper integrations, metrics, analytics, and workflows for teams with more complex needs. One short-term goal is to go straight from Colab to a KitchenAI cloud-hosted API so development can be absolutely seamless.
I’d love your feedback, and if you find it interesting, your support with a like or comment on Product Hunt would mean a lot!
Check it out here: https://lnkd.in/ePjnXw4U
Thanks for your support and for helping spread the word! 🙏
📢 The AI Monitor - Your Source for the Latest AI Tech updates! 🚀
Welcome back to another edition of The AI Monitor, where we bring you the most exciting developments in the world of AI technology. In this issue, we'll highlight two intriguing GitHub projects - NanmiCoder/MediaCrawler and slint-ui/slint. 💻✨
1️⃣ NanmiCoder/MediaCrawler: Harness the Power of Web Scraping! 🕷️
Looking to gather valuable insights from social media platforms? NanmiCoder/MediaCrawler is your go-to tool! Whether you're interested in scraping comments from popular platforms like Red Book, Douyin, and Bilibili, or gathering data from Weibo and more, this open-source project has got you covered. Stay ahead of the game and uncover hidden trends and patterns that can give you a competitive edge. 📈🔎
2️⃣ slint-ui/slint: Declarative GUI Toolkit for All Your Development Needs! 🖥️
Building native user interfaces has never been easier! With slint-ui/slint, you can create beautiful and seamless UIs for your Rust, C++, or JavaScript applications. This declarative GUI toolkit takes the hassle out of UI development, allowing you to focus on creating engaging experiences for your users. Whether you're a seasoned developer or just starting, slint is here to simplify and streamline your UI development process. 🎨✨
That's it for this edition of The AI Monitor. Stay tuned for more exciting AI news and updates in our upcoming newsletters. Be sure to follow our blog and subscribe to our newsletter to never miss an update!
Click the link to learn more and unleash the power of web scraping and create seamless UIs:
https://lnkd.in/e6EvKwt6
Until next time, stay curious and keep innovating! 🚀🤖
-The LangLabs Team
🚀 Curious about Langchain but not sure what it does? Let’s simplify it!
Langchain is a powerhouse framework that’s changing the way we build with Large Language Models (LLMs). Whether it’s chatbots, smart apps, or AI workflows, Langchain has the tools to make it happen—easily and efficiently.
🛠️ What’s Langchain all about? Langchain gives developers the ability to build smarter AI applications by managing memory, handling complex workflows, and connecting to various data sources.
Think about it: you want to create a chatbot that remembers past conversations, pulls real-time data, and gives smart, engaging responses. Sounds like a huge task, right? Langchain makes it simple!
Why Langchain is a game changer:
Pre-built Components: Memory tools and data source connections right out of the box.
Integrates Seamlessly: Want to pull from APIs, query a database, or crunch some numbers? Langchain connects your LLMs with ease.
🔑 What makes Langchain powerful:
Memory Management: No more forgotten conversations—Langchain helps your AI remember what users said earlier and respond accordingly.
Use External Tools: Whether it’s accessing databases, APIs, or doing calculations, Langchain brings versatility to your LLM projects.
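To make the memory idea above concrete, here is a toy, hand-rolled buffer. Langchain ships its own memory classes, so this is not Langchain code; it just shows the concept: prior turns are stored and prepended to each new prompt so the model can "remember" them.

```python
# Toy illustration of conversational memory (not Langchain's own API):
# store prior turns and prepend them to each new prompt.

class ConversationMemory:
    def __init__(self):
        self.turns: list[tuple[str, str]] = []

    def add(self, user: str, ai: str) -> None:
        """Record one completed user/AI exchange."""
        self.turns.append((user, ai))

    def build_prompt(self, new_message: str) -> str:
        """Prepend the stored history to the new user message."""
        history = "\n".join(f"User: {u}\nAI: {a}" for u, a in self.turns)
        if history:
            return f"{history}\nUser: {new_message}"
        return f"User: {new_message}"

memory = ConversationMemory()
memory.add("My name is Sam.", "Nice to meet you, Sam!")
print(memory.build_prompt("What's my name?"))
```

Langchain's memory components do this bookkeeping (and much more, like summarizing long histories) for you, which is exactly the "no more forgotten conversations" point above.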
What would YOU build with Langchain? Let’s talk about it! 👇
#LLM #AI #Langchain #AIInnovation #ConversationalAI #MachineLearning
Getting Started: Langchain is open-source and has great documentation.
You can start building right away, even if you’re just experimenting with LLMs.
Check it out here: Langchain GitHub
I’ve stumbled across Bee Agent Framework, and it’s been an impressive experience. If you’re exploring agentic AI workflows, this framework deserves your attention.
What struck me the most is how intuitive and modular it is. It's perfect for building intelligent agents that can handle complex tasks, and it's almost production-ready. The workflows feel natural, and the architecture is clean. Best of all, it's open-source, so there's plenty of room to explore, adapt, and contribute to its community. It's OpenAI-compatible, or bring your own model.
Highly recommend giving it a try.
npm install bee-agent-framework to get started
Would love to hear your thoughts if you’ve used it! Let’s compare notes. 🐝
https://lnkd.in/eTQH7Zir
#AI #AgentFramework #BeeAgentFramework #AIWorkflows #OpenSource #SmartAgents #AgenticAI
🌟 Introducing PersonaGPT: Unlock the Power of Personality-Driven AI Development!
Are you looking for a robust starting point to build interactive, personalized AI applications?
🚀 Meet PersonaGPT, an open-source template designed to make personality-driven AI development effortless. Whether you're a beginner exploring AI or a seasoned developer, this project provides the tools you need to create engaging, insightful applications.
What is PersonaGPT?
PersonaGPT is a template repository packed with powerful features, including:
🎯 A 4-Step Self-Discovery Test to engage users
🤖 Personalized analysis powered by OpenAI's GPT-4
🌏 Multi-language support (English/Korean) for global accessibility
📱 Responsive design for seamless user experiences across devices
💾 Result image download for easy sharing
📊 Data analytics integration with AWS DynamoDB
Tech Stack Highlights
Frontend: React, Material-UI, TypeScript, i18next
Backend: AWS Lambda, OpenAI API, DynamoDB
Infrastructure: Vercel + AWS
This stack ensures flexibility, scalability, and ease of deployment, so you can focus on building impactful features.
Why PersonaGPT?
📌 Save time: Get started with a ready-to-use structure for AI projects.
🛠️ Customizable: Refine prompts and adjust the application to match your unique goals.
🌟 Impactful design: Leverage modern UI/UX and localization for a truly global application.
Get Started Today!
Check out the repository, explore the documentation, and start building:
👉 GitHub: https://lnkd.in/giSRr6UH
Let's build something amazing together! 💡
#AI #OpenSource #PersonaGPT #AIDevelopment
Here's how you can save 1000s of dollars in building AI copilots:
It can take from weeks to months to build copilots from scratch.
CopilotKit simplifies this.
CopilotKit on GitHub: https://lnkd.in/dkKkM_rT.
It is a low-code open-source toolkit primarily dedicated to building AI copilots.
It provides all functionalities that facilitate AI integration like:
- Collecting data from the app to send to the model.
- Managing any third-party interactions an app may involve.
- Performing actions to display the response received from the model.
- Making updates to the application’s backend/frontend from the model response.
Recently, CopilotKit v1.0 was released with many upgrades. Here are a few key updates that will make it much easier to build copilots.
1) Generative UI
- AI copilots typically demand some visual engagement.
- For instance, when a user interacts, it should dynamically generate visual components. This includes charts, interactive widgets, etc., that are specific to the user's context.
- CopilotKit v1.0 comes with these capabilities, and a demo is shown below:
2) Rebuilt with GraphQL
- Earlier versions of CopilotKit utilized simple LLM REST API calls. In the latest release, the communication protocol has been redone with GraphQL.
- With GraphQL, the copilot engine can handle typed and dedicated input fields. Moreover, it also helps in returning various copilot-specific typed and dedicated output fields.
3) Improved React Hooks
CopilotKit has various react hooks for providing app-state-related information to the Copilot engine. These details can then be sent to the AI model. There are a few updates here:
- useCopilotAction hook to invoke actions by AI based on the context of a chat.
- useCopilotChatSuggestions hook dynamically generates chat suggestions based on the app state.
Other than these three cool upgrades, CopilotKit v1.0 also comes with a managed Copilot Cloud (currently in beta).
I summarized these updates in more detail in yesterday's issue of Daily Dose of Data Science. The newsletter is linked in the comments.
Thanks to CopilotKit for collaborating with me today.
👉 Over to you: What are some other challenges of building AI copilots?