What does Grok think of GPT-4o?

The response to GPT-4o varies among users, with some expressing positive impressions and others noting issues or limitations. Here's a summary of the feedback from the X posts:

1. @StefanJuang mentions GPT-4o being fast and free to use, but speculates that it might be a smaller model trained on synthetic data generated by GPT-4, which could affect its performance.
2. @krishnanrohit highlights GPT-4o's speed, noting it's less lazy but also prone to making things up.
3. @maceskridge acknowledges GPT-4o's improvements but points out that it still has issues with hallucination and sophistry, which can be problematic for deployment.
4. @realahart finds no significant change between GPT-4o and GPT-4, suggesting that the improvements may not be immediately apparent.
5. @Barnacules appreciates GPT-4o's lower API cost and faster response speed but notes that its coding answers still fall short of expectations.
6. @nielsgeo expresses disappointment, feeling that GPT-4o is less intelligent than GPT-4.
7. @mawntee clarifies that GPT-4o is a newer version of GPT-4-Turbo, designed for quicker and more natural responses, but standard-issue GPT-4 might still be better for tasks requiring deeper comprehension.

Overall, the community has mixed feelings about GPT-4o: some appreciate its speed and cost-effectiveness, while others are concerned about its accuracy and depth of understanding compared to its predecessor.
Venkat Ramana’s Post
More Relevant Posts
StackRAG Agent: Improving Developer Answers with Retrieval-Augmented Generation

📘 What is this paper about?
This paper introduces StackRAG, a novel tool that combines the power of Large Language Models (LLMs) with the rich knowledge base of Stack Overflow to provide accurate, relevant, and useful answers to developers' queries.

🤖 Key Points:
- Combines LLMs like GPT with Stack Overflow data to enhance answer reliability.
- Utilizes a multi-agent Retrieval-Augmented Generation (RAG) approach.
- Aims to address the limitations of LLMs, such as outdated information and hallucination issues.

🚀 Why is this a breakthrough?
- Integrates the extensive and up-to-date knowledge from Stack Overflow with advanced LLM capabilities.
- Provides more grounded and accurate answers, improving the efficiency of the software development process.
- Addresses critical issues in using LLMs alone, such as reliability and relevance.

🔬 Key Findings:
- 📈 Improved Accuracy: StackRAG offers more accurate answers compared to standalone LLMs.
- 🌍 Up-to-date Information: By leveraging Stack Overflow, it provides the latest solutions and discussions.
- 📊 Increased Reliability: Reduces the occurrence of hallucinations, ensuring more dependable responses.

🔍 Implications for the Future:
- 🔧 Enhanced Developer Tools: Potential to revolutionize developer assistance tools by integrating community knowledge with AI.
- 🧩 Better Resource Utilization: Combines the strengths of community knowledge and AI for efficient problem-solving.
- 🛠 Continuous Improvement: Can evolve with ongoing contributions from the developer community, ensuring relevance and accuracy.

💡 Takeaways:
- 🌐 Combines AI with Community Knowledge: StackRAG exemplifies the synergy between AI and community-driven platforms like Stack Overflow.
- 🚗 Enhances Productivity: Provides developers with quick, accurate, and reliable answers, boosting productivity.
- 📈 Future Potential: Demonstrates the potential of combining AI and community knowledge for applications beyond software development.

🎯 Conclusion:
- 🔄 StackRAG bridges the gap between AI and community knowledge, offering a robust solution for developer queries.
- 🌟 It sets the stage for future advancements in AI-assisted tools, ensuring they remain relevant and reliable.

For more details, you can explore the tool here: StackRAG Tool
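The paper's code is not reproduced in this post, but the core idea (retrieve relevant Stack Overflow content, then have the LLM answer grounded in it) can be sketched roughly as follows. This is an illustrative outline only: the retriever, model choice, and prompt wording are assumptions, not the StackRAG implementation.

```python
# Minimal, illustrative RAG sketch in the spirit of StackRAG (not the authors' code).
# The retrieval step and prompt wording are assumptions for demonstration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_stackoverflow(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical retriever: a real system would query the Stack Exchange API
    or a vector index built over Stack Overflow posts."""
    raise NotImplementedError("plug in your own retrieval backend")

def answer_with_context(query: str) -> str:
    # 1. Retrieve community answers relevant to the developer's question.
    snippets = search_stackoverflow(query)
    context = "\n\n".join(snippets)
    # 2. Ask the LLM to answer grounded in the retrieved snippets.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer the developer question using ONLY the provided "
                        "Stack Overflow excerpts. Say so if they are insufficient."},
            {"role": "user", "content": f"Question: {query}\n\nExcerpts:\n{context}"},
        ],
    )
    return response.choices[0].message.content
```

Grounding the generation step in retrieved excerpts is what gives RAG systems like this their claimed reduction in hallucinations: the model is constrained to community-vetted content rather than its parametric memory alone.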
🔥 Introducing GPT-4o + LlamaParse 🔥

GPT-4o is the state-of-the-art model for multimodal understanding, which means it also has state-of-the-art document parsing capabilities. LlamaParse is the platform for LLM-powered parsing: it uses LLMs to extract content from documents of any file type in a performant, reliable fashion, offering state-of-the-art response quality for advanced document RAG.

We're excited to offer GPT-4o as an explicit option in LlamaParse, which will use GPT-4o to extract each page into markdown instead of using our default parsers/models.

Why:
- GPT-4o is very good at parsing very complex documents into well-formatted markdown; it often outperforms our default approaches.
- This means it can turn documents with very complex tables/charts into clean, indexable data for your RAG pipeline: higher response quality, lower hallucinations 📈

Tradeoffs / Caveats ⚠️:
- It's expensive 💵: due to the cost of inference, using GPT-4o is currently $0.60 USD per page (the default LlamaParse mode is $0.003 per page). This cost can spike quickly, so beware!
- You can specify your own OpenAI key, in which case the marginal cost goes down to 0.3c per page.
- This is a beta feature. Given the cost and latency, use it with caution!

If you want to give this a shot, sign up for an account and check out our UI: https://lnkd.in/gbkxQAQd
Notebook: https://lnkd.in/grwUVr-G
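For readers who want to try the option programmatically, here is a minimal sketch of what the call might look like. The keyword arguments gpt4o_mode and gpt4o_api_key reflect the beta API described in this announcement and may have been renamed since; treat them as assumptions and check the current LlamaParse docs.

```python
# Hedged sketch of LlamaParse with the GPT-4o option described above.
# Parameter names (gpt4o_mode, gpt4o_api_key) come from the beta announcement
# and may differ in current releases; verify against the docs before use.
from llama_parse import LlamaParse

parser = LlamaParse(
    api_key="llx-...",        # your LlamaCloud API key
    result_type="markdown",   # parse each page into markdown
    gpt4o_mode=True,          # use GPT-4o instead of the default parsers (beta)
    gpt4o_api_key="sk-...",   # optional: bring your own OpenAI key to lower the per-page cost
)

documents = parser.load_data("complex_report.pdf")  # hypothetical input file
print(documents[0].text[:500])  # first 500 characters of the parsed markdown
```

The returned documents carry the extracted markdown in their .text attribute, ready to be chunked and indexed in a downstream RAG pipeline.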
Good week for open source AI code generation capabilities. Personally excited to try fine-tuning a few different small models on the released code-feedback dataset. Additional links in the comments below.

First: an open dataset for code interpreter functionality (basically OpenAI's code execution / data analysis mode in ChatGPT): https://lnkd.in/gzRaYTDu

Second: a function calling model from Fireworks AI, compatible with OpenAI's function calling and usable as a drop-in replacement (sketched below): https://lnkd.in/g-H5Aqsd
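As a rough illustration of what "drop-in replacement" means in practice: the standard OpenAI Python client pointed at an OpenAI-compatible endpoint. The base_url and model identifier below are assumptions based on the linked announcement, not verified values; check the provider's documentation before relying on them.

```python
# Illustrative drop-in sketch: OpenAI-style function calling served by a
# third-party, OpenAI-compatible endpoint. base_url and model id are assumed.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # OpenAI-compatible endpoint (assumed)
    api_key="YOUR_FIREWORKS_API_KEY",
)

# A toy tool definition in the standard OpenAI "tools" format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v1",  # example model id (assumed)
    messages=[{"role": "user", "content": "What's the weather in Toronto?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)  # structured tool call, OpenAI-style
```

Because only the base_url, api_key, and model name change, existing code written against OpenAI's function calling should work largely unmodified.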
This is a glimpse into the future of document/PDF parsing. GPT-4o is really good at understanding the content of a document and parsing it into a structured form. I think that in a few years we will stop using custom-made parsers for documents and will simply feed them into an LLM / multimodal model and get perfect results. No more OCR, no more custom-made parsers... And it already works pretty well today, although it is a bit pricey and slow. With this release of LlamaParse we let you try the future of document parsing, and you should try it! It handles complex charts and tables quite well. We are of course still working hard on our 'traditional' parser and will continue to improve it so you can use it until large models catch up in cost and efficiency!
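To make the "just feed the page to a multimodal model" idea concrete, here is a bare-bones sketch of sending a single rendered page image to GPT-4o and asking for markdown; it is the raw version of what a managed parser wraps with batching, retries, and table-specific handling. The prompt and file name are illustrative only.

```python
# Bare-bones sketch: one page image in, markdown out. Illustrative prompt and
# file name; a production parser adds batching, retries, and post-processing.
import base64
from openai import OpenAI

client = OpenAI()

with open("page_1.png", "rb") as f:  # one rendered page of the document
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe this page into clean markdown. "
                     "Preserve tables as markdown tables; do not invent content."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```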
Want to automate the writing of a quick earnings note after a morning earnings release but before the earnings call and market open? According to its documentation, the latest OpenAI model, 'gpt-4-0125-preview', has an improved ability to extract structured data from text. At a practical level this means GPT can now consistently read data in a table in a logical way after some fiddling around (fine-tuning and chunking, in LLM terms). I generated fifteen Q4/23 Shopify ($SHOP-NYSE) earnings notes by having GPT read a model's Income Statement, Balance Sheet and Cash Flow Statement, both historical and forecasted figures, and compare them against the earnings and guidance from Shopify's release, much as an earnings note would summarize the release versus expectations. The 15 notes took OpenAI's GPT-4 model about 10 minutes to generate in total and cost under US$4 in credits. Note that my 2024 SHOP estimates were well under consensus. While hallucinations remain a bit of an issue in generating these notes, parallel function calling in this GPT-4 model has opened up its ability to do more than you may have previously thought it could. Generative AI software is developing at such a pace that any generalizations or early conclusions you may have formed about its abilities will likely have to change within a few quarters. Have you budgeted accordingly? Let's get in touch: robin.manson-hing@perspectec.com.
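The post does not show the underlying calls, but the structured-extraction step it describes can be sketched with OpenAI function calling, which forces the model to return figures as JSON. The schema, field names, and input file below are illustrative assumptions, not the author's actual workflow.

```python
# Stripped-down sketch of structured extraction from an earnings release via
# function calling. Schema, field names, and file are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

extract_tool = {
    "type": "function",
    "function": {
        "name": "record_line_items",
        "description": "Record key figures from an earnings release",
        "parameters": {
            "type": "object",
            "properties": {
                "revenue_musd": {"type": "number"},
                "gross_profit_musd": {"type": "number"},
                "free_cash_flow_musd": {"type": "number"},
            },
            "required": ["revenue_musd", "gross_profit_musd", "free_cash_flow_musd"],
        },
    },
}

release_text = open("shop_q4_release.txt").read()  # hypothetical plain-text release

response = client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=[{"role": "user",
               "content": "Extract the reported figures from this release:\n" + release_text}],
    tools=[extract_tool],
    tool_choice={"type": "function", "function": {"name": "record_line_items"}},
)

figures = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(figures)  # compare these against your model's forecast to draft the note
```

Forcing the tool call (tool_choice) is what makes the output reliably machine-readable; the comparison against forecasts and the note drafting would then be separate prompts or plain Python.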