A Beginner’s Guide to Building a Full-Stack Agent 🤖 We love this tutorial by Mervin Praison ✅, which gives a step-by-step overview of how to build the core components of an agent into a simple application, using local models (Ollama) and Chainlit by Literal AI. Define the tools, plug them into an agent reasoning loop, and wrap everything in a simple user interface! https://lnkd.in/gvUm8wta
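The tools → reasoning loop → UI pipeline can be sketched in a few lines. This is a hypothetical, self-contained illustration, not the tutorial's actual code: the model call is stubbed out (a real version would call Ollama, and the UI layer would be Chainlit), and the `get_weather` tool and the `Action:`/`Final Answer:` text protocol are made up for the example.

```python
# Minimal agent-loop sketch: tools + a reasoning loop.
# All names here are hypothetical; a real build would swap fake_llm
# for an Ollama call and wrap agent_loop in a Chainlit handler.

def get_weather(city: str) -> str:
    """Example tool: a canned weather lookup."""
    return f"It is sunny in {city}."

TOOLS = {"get_weather": get_weather}

def fake_llm(prompt: str) -> str:
    """Stand-in for a local model call. Returns either a tool
    invocation or a final answer, in a toy text protocol."""
    if "Observation:" not in prompt:
        return "Action: get_weather[Paris]"
    return "Final Answer: It is sunny in Paris."

def agent_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool_name[arg]" and call the matching tool.
        name, _, arg = reply.removeprefix("Action:").strip().partition("[")
        observation = TOOLS[name](arg.rstrip("]"))
        prompt += f"\nObservation: {observation}"
    return "Gave up."

print(agent_loop("What's the weather in Paris?"))
# → It is sunny in Paris.
```

The key design point is that the loop, not the model, owns control flow: the model only emits text, and the loop decides whether that text is a tool call or a final answer.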
One of my favourite GenAI gurus on YouTube. LlamaIndex is very fast and good for RAG, but we need more language parsers in LlamaIndex's CodeSplitter (currently limited to Python, C++, JS, etc.) for RAG over code, so it can identify the code hierarchy and the areas impacted by a change request.
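To illustrate what hierarchy-aware code splitting means, here is a minimal sketch using only Python's standard-library `ast` module. This is not LlamaIndex's actual CodeSplitter (which relies on per-language tree-sitter grammars, hence the limited language support mentioned above); it just shows the idea of chunking source by syntactic units rather than raw lines.

```python
import ast

def split_by_top_level_defs(source: str) -> list[tuple[str, str]]:
    """Split Python source into (name, snippet) chunks, one per
    top-level function/class. Syntax-aware chunks like these let a
    RAG index map hits back to a position in the code hierarchy."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno are 1-based, inclusive.
            snippet = "\n".join(lines[node.lineno - 1 : node.end_lineno])
            chunks.append((node.name, snippet))
    return chunks

code = (
    "def add(a, b):\n"
    "    return a + b\n"
    "\n"
    "class Greeter:\n"
    "    def hi(self):\n"
    "        return 'hi'\n"
)
print([name for name, _ in split_by_top_level_defs(code)])
# → ['add', 'Greeter']
```

Extending this to C++, JS, and other languages is exactly where a shared grammar layer (like tree-sitter) comes in, since each language needs its own parser.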
Can we customize a ReAct agent to enforce tool usage? From what I have seen, even with an enterprise knowledge base the agent may answer from its general understanding and choose not to use the internal data at all. That is not the intended behaviour; correct tool usage should be the foremost task.
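One common way around this is to take the retrieval decision away from the agent entirely: always run the knowledge-base tool first and feed its output into the prompt (some hosted APIs expose a similar switch, e.g. a "required" tool-choice setting). The sketch below is hypothetical, with a stubbed model and a made-up `search_kb` tool, purely to show the pattern.

```python
# Hypothetical sketch: force retrieval to run before the model answers,
# instead of letting a ReAct agent decide whether to call the tool.

def search_kb(query: str) -> str:
    """Stand-in for an enterprise knowledge-base lookup."""
    return f"[KB] policy doc matching '{query}'"

def stub_llm(prompt: str) -> str:
    """Stand-in for the model; reports whether KB context was given."""
    grounded = "KB context" if "[KB]" in prompt else "general knowledge"
    return f"Answer grounded in: {grounded}"

def answer_with_forced_retrieval(question: str) -> str:
    context = search_kb(question)  # always retrieve first, no opt-out
    prompt = f"Context: {context}\nQuestion: {question}"
    return stub_llm(prompt)

print(answer_with_forced_retrieval("vacation policy"))
# → Answer grounded in: KB context
```

The trade-off: forced retrieval guarantees the internal data is consulted, but wastes a lookup on questions that genuinely don't need it, so some teams gate it with a lightweight router instead.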
I have been following him for a long time, and all of his content and explanations are golden. He even made his own agentic library. Worth going through his videos.
Our leader is awesome 🔥🔥 Mervin Praison ✅ expecting more AI content ✅
Love this
Professor emeritus at Youngstown State University
Hi, you create excellent content. I have used Ollama. It works as advertised, but on a regular PC it's really slow: on my vanilla PC, the Ollama + Llama 3 combo took 7 minutes to produce an answer. I don't think I can run an agentic RAG locally with reasonable speed. You who use Ollama should also mention your hardware specs: CPU/GPU, RAM/VRAM, disk, etc. I use Google Colab Pro exclusively to get access to an A100 GPU if needed. Since Colab is a cloud-based IDE, running Ollama there requires tunneling, which isn't worth the effort. For now I will stay with the cloud, with its access to different GPUs, instead of buying one to run Ollama locally.