Finally got some time to try Llama 3.1 on my machine, and it is way better than the previous models. Other models also work fine, but this one is ahead of them. Let's see what OpenAI comes up with in GPT-5. My take on OpenAI: the "Open" is just a name, since they run a closed business model. Just like FRESH Tomato Ketchup, where FRESH is the brand name, not a property of the ketchup: it says fresh but isn't fresh. Like ketchup, choose your AI wisely. #GenAI #MockingAI
Anuj Kumar’s Post
More Relevant Posts
Yesterday, Anthropic released its Data Analysis tool, and it's another big step in the right direction: a better, cleaner version of what OpenAI released in the past. In this video, I upload 109 Typeform responses from our recent AI meetups worldwide and ask Claude to analyse the data and visualise it in a way that best communicates what's going on. It proceeds to:
- Read the file
- Understand what it's about
- Pick the most interesting data to talk about
- Choose the most compelling way to visualise it
- Build a dashboard to show it
Doing this used to take hours, but now it takes seconds. Tell me again how Generative AI is just hype? :) P.S. I love how Anthropic is starting to clearly show they are the product-led counterpart to OpenAI's research-led approach. When they release something, it's polished, it works, and it's beautiful, compared to OpenAI's cutting-edge but often rough-around-the-edges approach. #PracticalAI #FutureOfWork
Here’s a tip: if OpenAI’s new model #o1 is too slow for you, you can use gpt-4o for the first few steps and then finish up with o1. For example: I like to create high-quality blog articles with AI, which requires tons of processing and 20-30 steps. I use 4o for those steps, then use o1 in the final step to act like an editor at The New York Times and optimize the article for publishing. This gives the best of both worlds.
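A minimal sketch of this two-model workflow, assuming the official `openai` Python client (v1.x). The model names are real, but the prompts and helper functions are illustrative, not the author's exact setup.

```python
# Sketch of the "draft with gpt-4o, edit with o1" pipeline described above.
# Prompts and function names are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_article(topic: str) -> str:
    """Use the cheaper, faster gpt-4o for the bulk drafting steps."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": f"Write a detailed draft blog article about: {topic}"},
        ],
    )
    return response.choices[0].message.content


def edit_article(draft: str) -> str:
    """Use o1 only for the final, reasoning-heavy editing pass."""
    response = client.chat.completions.create(
        model="o1-preview",
        messages=[
            {
                "role": "user",
                "content": "Act like a senior newspaper editor. Rewrite and optimize "
                           "this article for publishing:\n\n" + draft,
            },
        ],
    )
    return response.choices[0].message.content


article = edit_article(draft_article("How reasoning models change AI workflows"))
print(article)
```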
So OpenAI released their next model, o1, yesterday. At first glance it looks super impressive 🤯 But before you conclude anything, always remember not to judge any AI system by cherry-picked examples. The real picture of how good it is will emerge only over the next one to six months, once the system has been subjected to a large number of evaluations over a wide variety of tasks and datasets. #ProTip: never judge an AI system by a handful of examples. Always create evaluation sets rigorously and then benchmark the system on them. That will give you the true picture. What do you think of o1? Comment below👇 ---------- Follow Gradient Advisors for many more such tips
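For what that advice looks like in practice, here is a hedged sketch of an evaluation loop using the official `openai` Python client: run the model over a fixed eval set and score it, rather than judging from a few examples. The eval items and the exact-match scoring below are placeholders; real evaluations would use larger, task-specific sets and more robust grading.

```python
# Illustrative "build an eval set, then benchmark" loop. Eval data is a placeholder.
from openai import OpenAI

client = OpenAI()

# A hand-built evaluation set: (prompt, expected answer) pairs.
EVAL_SET = [
    ("What is 17 * 24? Answer with the number only.", "408"),
    ("How many times does the letter 'r' appear in 'strawberry'? Answer with the number only.", "3"),
]


def ask(model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


def benchmark(model: str) -> float:
    """Return exact-match accuracy of `model` over the evaluation set."""
    correct = sum(1 for prompt, expected in EVAL_SET if ask(model, prompt) == expected)
    return correct / len(EVAL_SET)


for model in ["gpt-4o", "o1-preview"]:
    print(model, benchmark(model))
```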
🎵 On the Seventh Day of OpenAI, they gave to us: GPT-4 Turbo APIs! 🚀✨ OpenAI’s GPT-4 Turbo API now includes improved performance and reduced costs, making it more accessible for developers and businesses to scale AI solutions efficiently. These updates are a game-changer for anyone looking to innovate and grow with AI. How will you use GPT-4 Turbo’s capabilities to transform your workflows or products? https://lnkd.in/gHJ9nTgb
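For anyone who hasn't called it yet, here is a minimal, hedged example of hitting the GPT-4 Turbo model through the API with the official `openai` Python client; the prompt is purely illustrative.

```python
# Minimal GPT-4 Turbo API call; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Summarize this quarter's support tickets by theme."}],
)
print(response.choices[0].message.content)
```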
BREAKING: OpenAI released GPT-4o. And it talks, listens and sees. All at once. Imagine having a conversation with an AI that not only understands everything you say, but also everything you show. Well, maybe not everything. But that's the promise of GPT-4o, the new all-in-one version of GPT-4. And it's fast! It does all of this at a speed that rivals human conversation. We're talking response times as low as 232 milliseconds. This will help make the interactions feel more authentic and less like you're talking to a machine. Which, of course, you are. As usual, they're showcasing a couple of impressive demos. I think that by now we should've learned to take such videos with a grain of salt. Nevertheless, whether one likes it or not, the AI in the video below definitely sounds pretty human-like. And if you only care about the existing core features: the new model is not only faster but also more cost-effective. ↓ Liked this post? Follow the link under my name and never miss a paper highlight again 💡
If your content feed has a lot of AI news, then you'll already be reading about the new models from OpenAI, o1-preview (and o1-mini). I thought I'd quickly test them this morning... They are designed to be better at reasoning by considering multiple approaches before answering. This takes longer (and is more expensive) but offers better results on reasoning tasks. Trivial example below from my morning test!
Today, I’m excited to share some insights on the groundbreaking release of DeepSeek’s R1-Lite-Preview, a potential king-slayer in the AI landscape that is taking direct aim at giants like OpenAI! What sets R1-Lite-Preview apart? 🤔
1️⃣ Superior Performance: surpassing benchmarks with 52.5% accuracy on AIME and 91.6% on MATH, setting a new standard for LLMs.
2️⃣ Transparent Reasoning: unlike traditional models that operate in a "black box," users can now see the logic behind every answer, building trust and empowering users.
3️⃣ Open Source Commitment: by embracing an open-source framework, DeepSeek democratizes high-performance AI, fostering innovation and collaboration across the globe.
The most interesting thing about R1-Lite-Preview is that it is produced by DeepSeek, a Chinese platform that is almost on par with OpenAI, Anthropic, and other industry heavyweights who gobble up [comparatively] obscene amounts of money to build their proprietary platforms. As AI continues to evolve, I think it's safe to say that R1-Lite-Preview, and indeed DeepSeek, may be a dark horse in the race for AI supremacy. I'll leave a link in the comments if you want to try it out yourself! #AI #DeepLearning #OpenSource #R1LitePreview #DigitalTransformation #Innovation #Leadership
Two days ago, I set out to find a way to build a recommendation system with OpenAI. Most of the material I found made it more complicated than it needs to be, but after a crash course on "Embeddings", I think I have figured it out: Embeddings + a Vector DB is the solution. #AI #Vector #Embeddings
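Here is a rough sketch of the Embeddings + Vector DB idea, assuming the official `openai` Python client: embed catalog items once, embed the user's interest, and recommend the nearest items. Plain NumPy cosine similarity stands in for a real vector database, and the catalog, embedding model choice, and top-k cutoff are all illustrative.

```python
# Embeddings-based recommendation sketch; NumPy stands in for a vector DB.
import numpy as np
from openai import OpenAI

client = OpenAI()
EMBEDDING_MODEL = "text-embedding-3-small"  # illustrative choice

CATALOG = [
    "Wireless noise-cancelling headphones",
    "Mechanical keyboard with RGB lighting",
    "Ergonomic office chair",
    "4K ultrawide monitor",
]


def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model=EMBEDDING_MODEL, input=texts)
    return np.array([item.embedding for item in response.data])


# Embed the catalog once; in production these vectors would live in a vector DB.
catalog_vectors = embed(CATALOG)


def recommend(user_interest: str, k: int = 2) -> list[str]:
    """Return the k catalog items most similar to the user's interest."""
    query = embed([user_interest])[0]
    # Cosine similarity between the query and every catalog item.
    scores = catalog_vectors @ query / (
        np.linalg.norm(catalog_vectors, axis=1) * np.linalg.norm(query)
    )
    top = np.argsort(scores)[::-1][:k]
    return [CATALOG[i] for i in top]


print(recommend("I'm setting up a home office for programming"))
```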
Your business should never be at the mercy of a single AI provider's decisions. This is where companies like SambaNova Systems become invaluable. Our approach is different. We empower our customers with complete autonomy over their AI technology, from the models they use to the hardware they run on. With SambaNova, you're not just using AI - you're owning it, on a large scale!
OpenAI is pulling the plug on GPT-3.5 next week 🔌 Got some thoughts on this... Sure, GPT-4o mini is the shiny new toy: cheaper and "better". But what if your app and prompts were built around those quirky GPT-3.5 behaviors? I'm surprised there isn't a GPT-3.5 LTS (long-term support) version. That would at least give AI teams time to work on migration and let researchers continue to reproduce results from all the papers that used GPT-3.5. The takeaway? Building your business on someone else's terms is risky. Third-party AI APIs are easier and cheaper to get started with, but you might want to rethink them for mission-critical stuff.
If you're interested in self-hosting open-source LLMs for high-throughput, production-grade use cases, check out OpenLLM:
1. Open-source framework for serving any LLM as an OpenAI-compatible endpoint
2. Fast scaling and scale-to-zero capability via BentoCloud
3. Optionally use custom security models for input/output classification and filtering
4. Multi-LLM gateway with customizable routing algorithm
Links in comments 👇
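To illustrate what an "OpenAI-compatible endpoint" buys you in practice, here is a hedged sketch: the standard `openai` Python client pointed at a self-hosted server instead of api.openai.com. The base URL, port, and model name below are assumptions for illustration; check the OpenLLM docs for the actual defaults of your deployment.

```python
# Same client, different base_url: that's the point of OpenAI-compatible serving.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # hypothetical self-hosted endpoint
    api_key="not-needed-for-local",       # many local servers ignore the key
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # whichever model the server loaded
    messages=[{"role": "user", "content": "Give me one reason to self-host an LLM."}],
)
print(response.choices[0].message.content)
```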