Shubham Saurabh’s Post

Founder & CEO | Real-time Website Performance Monitoring 🚀 | Increasing Social Ads Conversion Rates by bypassing in-app browsers | Auditzy | Jamsfy | InApp Redirect

Fine-tuning a large language model (LLM) is incredibly enjoyable, especially when you're refining it to deflect negative prompts and fall back on default, learned answers instead. Amusingly, these models often inadvertently reveal that they're not supposed to discuss certain topics! 😅

I've started exploring the AI space, and the deeper I dive, the more clearly I can see that AI built on custom data is the way forward for businesses. RAG (Retrieval-Augmented Generation) will certainly help. But what about fine-tuning?

Imagine the next 5 years: prompt engineering could become a major focus for many professionals. The real challenge lies in fine-tuning these models, even with their self-supervised training capabilities. Ensuring that your model doesn't respond to irrelevant queries is tough, since prompts can vary widely. This is where smart negative prompt engineering and thorough verification come in, potentially opening up significant opportunities for engineers in this emerging field.

Thoughts are welcome; this post is my personal POV. #ailearningdays #llms #finetuning
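To make the "don't answer irrelevant queries" point concrete, here is a minimal sketch of what refusal-style fine-tuning data can look like. It uses the OpenAI-style chat JSONL convention; the "Acme" billing assistant, the questions, and the wording are all illustrative assumptions, not a production recipe.

```python
import json

# Sketch: teaching a model to decline out-of-scope queries via fine-tuning
# examples. "Acme" and all content below are hypothetical placeholders.

# An in-scope example: the assistant answers a billing question normally.
in_scope = {
    "messages": [
        {"role": "system", "content": "You are a support assistant for Acme's billing product only."},
        {"role": "user", "content": "How do I download last month's invoice?"},
        {"role": "assistant", "content": "Go to Billing > Invoices and click Download next to the month you need."},
    ]
}

# A refusal example: same system prompt, off-topic question, polite decline.
out_of_scope = {
    "messages": [
        {"role": "system", "content": "You are a support assistant for Acme's billing product only."},
        {"role": "user", "content": "Write me a poem about the ocean."},
        {"role": "assistant", "content": "I can only help with Acme billing questions. Is there anything billing-related I can do for you?"},
    ]
}

# Write a small JSONL training file. In practice you would want many varied
# refusal examples, precisely because off-topic prompts can vary so widely.
with open("train.jsonl", "w") as f:
    for example in (in_scope, out_of_scope):
        f.write(json.dumps(example) + "\n")
```

The design point is the pairing: the same system prompt appears with both answerable and off-topic queries, so the model learns a boundary rather than a blanket refusal. Verifying that boundary against a held-out set of adversarial prompts is the "thorough verification" step.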

VIKASH 👋

I help your business succeed | CEO @ SparxIT | Entrepreneur

5mo

You have explained it aptly. Training an AI model on the data available is one thing, but fine-tuning is another world altogether, and an incredibly interesting one. From my experience with the chatbot product we are building, fine-tuning and better prompting have been key to enhancing our chatbots' performance, and they will only become more crucial in the future. In fact, they already are.
