Over the last couple of weeks, you may have seen us launching MinusX on YC, Hacker News, Product Hunt, and a whole lot of other places. Now that there's a bit of calm (read: only a handful of raging fires to deal with), here's some more context on the problem we're working on, our take on the solution surface, and why we're excited to be building MinusX. Getting, analyzing, and communicating accurate data is a struggle in orgs of all shapes and sizes. AI-native solutions that require massive upfront costs and re-learning workflows are a non-starter. We think smart AI agents should just work in the tools you already use and love. MinusX is exactly that. It just works. Isn't that what a smart colleague would do? Read more below, and try us out! https://lnkd.in/gdFcCGxF
-
Let's talk about Devin. This one will be a bit of a hot take, sorry in advance. As usually happens whenever I visit Chip Huyen's Discord, there was some interesting chatter around Devin, specifically a YCombinator post about a recent video (linked in the comments #linkedinstyle) that pokes a lot of well-thought-out holes in the "astounding" Devin demo. It got me thinking, then I got to digging, and then I got to being flabbergasted by the reaction to this whole thing. Devin is a cool thing to have built, but the claims that it is revolutionary, incredible, or magical are completely overstated.

1) Devin is not coming for your job; Devin can't even help its creators make a website. (While this is mostly just a joke, I'll link a "funny" video about this in the comments as well.)
2) Devin isn't the first AI Software Engineer; people have been hacking on projects like Devin since Baby AGI was released. Devin is, however, the best-marketed AI Software Engineer. Devin is just GPT-4 wearing a very cool hat, and though it looks super rad, that's all it is.

I want to be clear that I don't hold any ill will toward Devin or the incredible team at Cognition that created the demo. The tool, I'm sure, will be cool. The demo, undeniably, was a masterclass in marketing. The team is, without question, filled with talented people. This field is advancing at a rate that's hard to describe, but we sometimes need to remember it's still in its infancy, and we're still at risk of being blown away without checking who, or what, is behind the curtain.
-
New Post: How Maven’s AI-run ‘serendipity network’ can make social media interesting again - https://lnkd.in/g4y7dpBG - Everything in society can feel geared toward optimization, whether that's standardized testing or artificial intelligence algorithms. We're taught to know what outcome we want to achieve, and to find the path toward getting there. Kenneth Stanley, a former OpenAI researcher and co-founder of a new social media platform called Maven, has been preaching for years… - #news #business #world - Download: Stupid Simple CMS - https://lnkd.in/g4y9XFgR - or download at SourceForge - https://lnkd.in/gNqB7dnp
How Maven’s AI-run ‘serendipity network’ can make social media interesting again
shipwr3ck.com
-
OOOOH THE AI SHADE!! (If you want to skip the gossip and go straight to the updates, scroll down.) These Midjourney weekly OFFICE HOURS UPDATES with the founder are WILD AF! If you didn't know, each week at 3pm ET, Midjourney's founder David Holz does kind of a "state of the union" update. It's a great way to keep up with changes and innovations.

On today's call, David didn't mince words as he called out the folks at BlackForest (creators of Flux.1) for trying to scrape the MJ servers last year to steal images (I'm assuming they wanted them for training data). If you recall the great MJ outage, apparently the attempted thievery is what caused MJ servers to crash for a few hours. David stated that the image thieves were working with Stable Diffusion at the time (he previously said this based on the SD email address connected to the wayward account). David went on to say that "most of the SD team has left to go to the BlackForest team and work on the Flux.1 project." David's thoughts on the Flux.1 open-source platform: "Bring on more deep fakes"... (OMG... this is hilarious and shocking.)

WHY AM I SHARING AI GOSSIP? I think it's important to understand the landscape of AI. It's the wild, wild west and highly competitive. It's important to know who to hitch your AI wagon to. Also, understanding the ethical moves of AI companies is important to me. So far, MJ has moved with the most integrity, while OpenAI (ChatGPT's parent company) and Stable Diffusion (the open-source image model behind Leonardo and Night Cafe) have both shown some shady money-grubbing tendencies. Ok, that's all I have to say about that.

NOW FOR THE UPDATES:
--> The Alpha website will get some updates this week.
--> You only need to generate 10 images on Discord to access the Alpha site.
--> 6.2 will be released soon.
--> The legacy Midjourney.com will go away once Alpha is open to everyone!!

That's it for now! Let's go! Ok, that's it for me, catch you on the flipside --Jai
-
This is a long but good read. Zitron really researched this. Technology is a fascinating thing. To me anyway, always has been. There's always a danger of people working in Tech focusing on the 'Thing'. Be that a deployment, migration, problem, KPI or whatever. What I've learned over the past 5+ years is that it's essential to balance Technology and Business. Sure, we talk about it all the time, but when talk turns to costs and budgets, feet still shuffle more often than not. "But what is the total cost?" "But how does that add to revenue or remove cost?" (That's not a slight at anyone; we hire Techs to do the Tech things and Leaders to focus on the rest.)

I am unashamedly challenging about GenAI. Primarily now because of sustainability and energy. There's too much outside of Tech stacked against it. Face it: the power grid is not going to be miraculously fixed and upgraded in any country anytime soon. And I will always put Society's needs before a cool toy. The legal side I understand intellectually, and it offends the older me, but ohhhh, back in the day I downloaded so much from Napster. So, a bit hypocritical. But the main driver has always been that the cost/benefit equation just doesn't work. You can add environment, energy and legal to that if you want. But even without them, it doesn't work.

"I recognize reading this you might dismiss me as a cynic, or a pessimist, or as someone rooting for the end, and I have taken great pains to explain my hypotheses here in detail without much opinion or editorializing. If you disagree with me, tell me how I’m wrong — explain to me what I’ve missed, show me the holes in my logic or my math."

Exactly. Explain in detail how it will work, without the 'hopes and prayers' that the technology will dramatically improve all of a sudden, or that the costs will dramatically reduce. https://lnkd.in/eyNgSmSp
How Does OpenAI Survive?
wheresyoured.at
-
Understanding Reddit as a data play (both for pre-training and inference) in the AI industry is the right perspective. In fact, Reddit's vast, candid, and topic-organized content corpus has become a key revenue driver, generating $81.6 million in licensing revenue from AI companies like OpenAI. Its distinct features, including karma points and diverse subreddits, make it ideal for AI training, fueling investor confidence and positioning Reddit within the booming AI ecosystem. How?

1. Data Licensing Growth: Reddit's decision to charge for its vast 19-year text corpus has turned data licensing to AI companies into a major revenue stream, contributing to its first quarterly profit as a public company.
2. Appeal for AI Training: Reddit's topic-organized, pseudonymous, and candid user-generated content is ideal for training AI models, offering high-quality, conversational data.
3. Revenue Spike: Licensing revenue rose to $81.6 million in the first nine months of 2024, up from $12.3 million a year earlier, showcasing high-margin growth potential.
4. Investor Confidence: Diversification beyond advertising and alignment with the AI boom have doubled Reddit's stock in the past three months.
5. Unique Features: Voting systems, karma points, and over 100,000 diverse subreddits make Reddit distinct from other platforms, offering more granular quality signals for AI models.
6. Competitive Edge: Unlike rivals like Meta and X, Reddit isn't building its own AI but stands out by selling its data in a market where supply is finite.
7. AI Tool Development: Reddit is testing its own AI-powered search tool using models from OpenAI and Google, further leveraging its data for user-focused innovations.
8. AI Data Scarcity: With global demand for high-quality data rising, Reddit's large, text-heavy content corpus positions it as a valuable partner for AI companies.

https://lnkd.in/dcBSDSWh
How Years of Reddit Posts Have Made the Company an AI Darling
wsj.com
-
Attention is not all you need. Neither are evals. This is because AI systems are in and of the world: they are socio-technical systems. Purely technical approaches like attention and evals can't address complex opportunities and risks over long time scales and across cultures related to human-AI configuration, perpetuation of sociological biases, supply chains, data privacy, information security, intellectual property, human displacement, content homogenization and more. We'll need socio-technical approaches that embrace the complexity of real-world AI interactions to drive the burgeoning science of AI measurement forward. This is why I'm so excited about Humane Intelligence's first in a series of algorithmic bias bounties. https://lnkd.in/eTCGjVB9 Bias bounties mix methods from information security (bug bounties) with quantitative bias testing and structured human feedback to assess, document, and improve troublesome performance and sociological bias issues in AI systems. Please consider participating in the bounty. There are prizes! P.S. To learn a bit about the history of bias bounties, check out the inaugural Twitter algorithmic bias bounty: https://lnkd.in/eV9apPmn ... from the halcyon days of actual Twitter. (RIP) cc: Dr. Rumman Chowdhury, Theodora Skeadas #AI #infosec #bugbounty
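To make the "quantitative bias testing" leg concrete, here is a minimal sketch in Python (pandas) of the kind of disparity check a bias bounty submission might automate. The group labels, the "flagged" column, and the data are hypothetical placeholders, not part of the Humane Intelligence challenge itself.

# Minimal sketch of a quantitative bias check of the kind a bias bounty
# might pair with structured human feedback. All data and column names
# here are hypothetical placeholders.
import pandas as pd

# One row per evaluated prompt: which demographic group the prompt
# references, and whether the model's response was flagged as problematic.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [0,    1,   0,   0,   1,   1,   0,   1],
})

# Flagged-response rate per group.
rates = results.groupby("group")["flagged"].mean()

# A simple disparity metric: the gap between the worst- and best-treated groups.
disparity = rates.max() - rates.min()

print(rates)
print(f"Flagged-rate disparity across groups: {disparity:.2f}")

A submission would compute something like this over the real challenge data, then pair the numbers with documented examples and human judgments.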
-
We at HumaneIntelligence had an incredible event yesterday evening, partnering with All Tech Is Human, where I proudly serve as an adviser, to organize a hackathon around our recently launched algorithmic bias bounty program, the first of ten. With the support of Google.org, we are building themed programs that aim to foster community and professionalize the practice of algorithmic assessment.

To kick off this dynamic event (which had 500 in-person sign-ups!), Dr. Rumman Chowdhury moderated a fireside chat with Jiahao Chen. Then, I moderated a panel on AI governance, crowdsourcing, and community-based responses with Charvi Rastogi, Chloe Myshel Burke, Dr. Sasha Luccioni, and Swapneel Mehta. We discussed questions including:
1️⃣ What aspects of algorithmic assessment are least well understood
2️⃣ How socio-technical algorithmic assessments differ from technical ones
3️⃣ The climate impact of AI models
4️⃣ The role of civil society and academia in the ecosystem of model evaluation
5️⃣ Challenges and rewards in translating the work of internal and external auditors and assessors to policy, legal, and compliance functions
6️⃣ Advice to those starting out in algorithmic auditing

Then, hackathon participants delved into this first bias bounty challenge. It involves creating a probability estimation model that determines whether the prompt provided to a language model will elicit an outcome that demonstrates factuality, bias, or misdirection, inspired by the data from the DEF CON Generative AI Red Teaming Challenge. This challenge closes on Monday, July 15. All are welcome to participate!

For those just getting started, we have three short videos with Kristi Arbogast to help you on your journey (links in the comments below):
1️⃣ In the first tutorial video, "Downloading the Dataset," Kristi covers where to access the challenge information, how to download the datasets, and how to import the data into a Jupyter notebook. By the end, you'll be ready to inspect the data and start your AI auditing journey!
2️⃣ In the second tutorial video, "Inspecting the Data," Kristi covers how to inspect the data through various functions, how to filter by multiple criteria, and how to easily view the full conversation text in Hugging Face. By the end, you'll know how to conduct basic analysis to prepare for the bias bounty challenge.
3️⃣ In the third tutorial video, "Creating Categories," Kristi covers how to use spaCy, a Python library, to analyze the text from our filtered DataFrames, and then provides step-by-step code on how to extract the Named Entities from each conversation.

#ArtificialIntelligence #GenerativeAI #Hackathon #BiasBounty #AlgorithmicBiasBounty #RedTeaming Dr. Rumman Chowdhury, Jonathan Jin, Sara Kingsley, Tajia Harlem, Maria di Fonzo, Nicole O., Ayşegül Güzel, Manish Kumar, Suriya Krishna.B.S, Fariza Rashid, Ryan Nichols, David Ryan Polgar, Rebekah Tweed, Sandra Khalil, Elisa Fox, and Josh Chapdelaine
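To give a flavor of that third step, here is a minimal sketch in Python of running spaCy's named-entity recognition over conversation text held in a pandas DataFrame. The tiny inline DataFrame and its "conversation" column are hypothetical stand-ins for the actual challenge dataset, which the tutorial videos walk you through downloading.

# Minimal sketch of the named-entity step described in the third tutorial:
# run spaCy over conversation text held in a pandas DataFrame. The inline
# DataFrame and its "conversation" column are placeholders for the real
# challenge data.
import pandas as pd
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

df = pd.DataFrame({
    "conversation": [
        "The user asked the model about the 2020 Tokyo Olympics.",
        "A prompt about Marie Curie and the University of Paris.",
    ]
})

def extract_entities(text: str) -> list[tuple[str, str]]:
    """Return (entity text, entity label) pairs for one conversation."""
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

df["entities"] = df["conversation"].apply(extract_entities)
print(df[["conversation", "entities"]])

From here, the extracted entities can be grouped into categories and used as features for the probability estimation model the challenge asks for.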
-
Great list of AI use cases at AI-forward tech companies. Those seeking to apply AI in their own business should first learn from similar use cases: why those companies used AI, what value AI delivered, and how it was done.
17 new engineering articles from Big Tech worth reading to improve your ML system design:
1. Uber: Optimizing LLM training - https://lnkd.in/g5Qr6_eY
2. Netflix: Recommending for long-term satisfaction - https://lnkd.in/g7FJg5yK
3. Linkedin: RecSys - https://lnkd.in/gnBnGvTd
4. Discord: Rapid GenAI development - https://lnkd.in/gNb6cEVA
5. Pinterest: Ad ranking - https://lnkd.in/g7njuVn8
6. Instacart: Fraud detection - https://lnkd.in/gfK-NEas
7. GoDaddy: Classify support tickets w/ LLMs - https://lnkd.in/g7BYKfi8
8. Gitlab: LLM-powered features - https://lnkd.in/ghVFmSBx
9. Goldman Sachs: NLP to improve PRs - https://lnkd.in/gnYgkZiw
10. Target: Recommender system - https://lnkd.in/gZPzWrw9
11. Ebay: Developer productivity w/ LLMs - https://lnkd.in/gqAE9rqF
12. Replit: Fine-tuning LLMs for code repairs - https://lnkd.in/gti_PSfx
13. Linkedin: Suggesting new connections - https://lnkd.in/g7bj3-aY
14. Canva: Detect related groups of objects - https://lnkd.in/gaAuRYW6
15. Yelp: Detect inappropriate video content - https://lnkd.in/gZN9vR_p
16. Nvidia: Detect software vulnerabilities - https://lnkd.in/gsdFTBzk
17. Grammarly: Detect delicate text - https://lnkd.in/gsVdM89k
...
I personally enjoyed learning about Discord's framework on "Developing rapidly with GenAI".
...
If you find these helpful...
👍 React
♻️ Share
💬 Comment
So more people can learn.
#machinelearning #systemdesign
-
Exciting developments in the area of Artificial Intelligence (AI)! 🤖 Leveraging AI to enhance customer support is a game-changer, yet integrating it into product features can pose significant challenges, especially around data collection and security. With the advent of models like OpenAI's ChatGPT and Google's Gemma, Large Language Models (LLMs) are taking remarkable strides in addressing customer issues.

In my recent exploration, I've delved into various techniques, and one that stands out is www.llamaindex.ai. This tool enables developers to augment pretrained models with their own data, yielding promising results. Here's how llamaindex.ai addresses key concerns for developers:
1️⃣ Cost-effective Training: Training LLMs from scratch can be prohibitively expensive. llamaindex.ai offers a cost-effective alternative by letting developers leverage existing models and augment them with their data.
2️⃣ Continuous Learning: Keeping LLMs updated with the latest information is challenging due to training costs. With llamaindex.ai, updates are integrated seamlessly, ensuring that models stay up to date without incurring significant expense.
3️⃣ Enhanced Observability: Traditional fine-tuning approaches lack observability, making it unclear how LLMs arrive at their answers. llamaindex.ai provides visibility into the retrieved documents, offering insight into the decision-making process.

To further enhance text generation accuracy and relevance, developers can employ the Context Augmentation pattern, also known as Retrieval Augmented Generation (RAG). This approach involves:
1️⃣ Information Retrieval: Retrieve relevant data from various sources.
2️⃣ Context Enrichment: Incorporate the retrieved information into the question context.
3️⃣ LLM Interaction: Prompt the LLM to generate responses based on the enriched context.

By leveraging RAG, developers can overcome the limitations of traditional fine-tuning approaches:
1️⃣ Cost-effectiveness: RAG eliminates the need for extensive training, reducing costs.
2️⃣ Real-time Updates: Data retrieval occurs on demand, ensuring that responses are always current.
3️⃣ Improved Observability: Developers gain insight into the retrieved documents, enhancing transparency and understanding.

Embracing tools like llamaindex.ai and methodologies like RAG empowers developers to harness the full potential of AI in addressing customer needs while mitigating the associated challenges. Let's continue to innovate and revolutionize customer support with AI! 💡 #AI #CustomerSupport #Innovation #LLMs
LlamaIndex, Data Framework for LLM Applications
llamaindex.ai
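For concreteness, here is a minimal sketch in Python of the three RAG steps above using LlamaIndex. The "data/" directory and the sample question are placeholders, the import paths assume a recent llama-index release (older versions differ), and an LLM backend (by default OpenAI, configured via OPENAI_API_KEY) is assumed to be available.

# Minimal sketch of the RAG pattern described above using LlamaIndex.
# The "data" directory and the question are placeholders; imports assume
# a recent llama-index release, and the default OpenAI LLM requires
# OPENAI_API_KEY to be set.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# 1) Information Retrieval source: load and index your own documents.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# 2) Context Enrichment + 3) LLM Interaction: the query engine retrieves
# the most relevant chunks, adds them to the prompt, and asks the LLM.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("How do I reset my account password?")

print(response)

# The retrieved source chunks provide the observability mentioned above.
for source in response.source_nodes:
    print(source.score, source.node.metadata)

Because retrieval happens at query time, swapping or adding documents updates answers immediately, with no retraining, which is the cost and freshness argument made above.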
-
From Anaconda's latest report: To fully capitalize on the value of open-source AI/ML tools, organizations should embrace innovation and continuous learning, and invest in ongoing training and upskilling of teams to ensure they have the expertise needed to maximize the potential of open-source AI tools. Anaconda can help! Check out our full catalog of educational resources: https://lnkd.in/eFkYMi4X
Announcing Anaconda's new report: The State of Enterprise Open-Source AI 🤖 🚀
Open-source AI is transforming enterprises with unmatched flexibility and innovation—but it also comes with challenges. This latest report, developed in partnership with ETR (Enterprise Technology Research), dives deep into how organizations are navigating the opportunities and risks of open-source AI adoption. Here are some of the key findings:
🔒 50% of open-source AI security vulnerabilities were deemed very or extremely significant—highlighting the need for trusted tools.
💸 Organizations using open-source solutions see 28% lower costs, driving both savings and innovation.
📚 56% of respondents struggle with governing and vetting open-source libraries, signaling a need for stronger processes.
📈 78% expect to see ROI from their AI investments within 18 months, proving that open-source is more than just a cost-effective solution—it's a competitive advantage.
Are you ready to stay ahead in the evolving AI landscape?
📥 Read our blog to learn more and download the full report here 🔗 https://lnkd.in/gBSBfxc2
Many thanks to all involved in developing this report, including Alison Smith, Heather H., Vanesa A., Krissy Ford, Kent Pribbernow, Kodie Dower and more! #AI #opensource #datascience #Python