We want to give a huge shout-out to Jet Wu, a senior at CMU, who continued his PennApps 2024 project to build LlamaSim! LlamaSim is a multi-LLM framework that simulates human behavior at scale. Jet's project uses Cerebras Systems Inference to create hundreds of LLM agents that simulate the thinking of different demographics. Awesome work, Jet! You can check out his hack at mlh.link/LlamaSim and on GitHub at https://lnkd.in/eB9yKASF 👏
Major League Hacking’s Post
-
Check out this interesting blog, which explores the intricacies of constructing an efficient face recognition system, highlighting critical aspects such as data quality, algorithm selection, and real-world application challenges. It stresses the importance of high-resolution datasets and diverse image collections for training robust models that stay accurate across varied demographics. The discussion of algorithmic frameworks, which illustrates the trade-off between computational efficiency and recognition precision, is particularly interesting. Read the blog and share your thoughts on face recognition and the technology behind it - looking forward to an interesting conversation in the comments. #facerecognition #technology #Cyient
Building an Efficient Face Recognition System
cyient.com
-
Over the past six weeks, we've seen a decline in AI usage by US businesses, as measured by the U.S. Census Bureau's invaluable Business Trends & Outlook Survey (BTOS). If you're working at a business that's currently using AI or planning to do so soon, you're part of a small minority! About 4.2% of US businesses said they'd recently used AI, down from 5.4% six weeks earlier but up from 3.7% last fall. We just don't have much history with this survey, so this is probably just noise. One thing that hasn't budged much since collection began is the share of US businesses planning to use AI in the next six months: it's holding steady right around 6.5%.
-
Here’s a simple idea for AI. What if you created a simple mobile survey platform that “learned” from each respondent? Founder Rasto Ivanic’s GroupSolver platform does just that, but the consequences and applications go far beyond first expectations. When every respondent can assess others’ inputs, you have the world’s most robust focus group - faster, cheaper, and easier, answered at each respondent’s convenience. Build enough context, and you discover that this body of knowledge might equip you to simulate respondents' answers to questions you hadn’t asked. With demographics and emerging taxonomies, synthetic personas begin to emerge. Simply gathering many human answers to one question enables you to model answers to the next. Stay tuned - the possibilities are endless. Don C. Kelly Costello Dscout Alin Vana Todd McCullough Bracken Darrell Carmine Di Sibio Dalhousie University Deloitte Insights Accenture Adobe Experience Manager Implementation Alvarez & Marsal IDEO IBM Hyatt Hotels Development EAME Andrew Gritzbaugh Michael Broley Myra BrandingBusiness OTTO Brand Lab David Kohler Karalee Close Jennifer LaPlante Dan Shaw Sobey School of Business at Saint Mary's University Julia Knox Kevin Stoddart, MBA Bill Gusmano Michael White Alison Kay Brandi Dixon Kim Norton Alyssa Mayo Criswell Lappin Jeffrey Saviano
Do you still need humans to answer your surveys, or can Gen AI do it all for you? The short answer: not yet, but it does have a place in market research today. Not too long ago, we shared insights from a #GenZ study completed by an all-human panel. We wanted to see whether we would get similar (or better) responses from a #GenAI synthetic panel resembling the demographics of this audience. We found that the synthetic respondents did decently well on close-ended questions. Where they mainly fell short, however, was in responding to open-ended questions - they simply missed the nuance and depth that humans provide. So what does this mean? AI cannot quite replace human panels at the moment, but its efficiencies can prove useful in cases such as testing surveys. Take a look at what we found in detail: https://lnkd.in/eW4YjtAk #marketresearch #AI #data #surveys
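One simple way to make "did decently well on close-ended questions" concrete is to compare the answer distributions of the two panels directly. Here is a toy sketch of that idea using total variation distance; the function names and sample answers are invented for illustration and are not taken from the study:

```python
from collections import Counter

def answer_distribution(answers):
    """Share of respondents choosing each answer option."""
    counts = Counter(answers)
    total = len(answers)
    return {option: c / total for option, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two answer distributions:
    0 means identical choice patterns, 1 means completely different."""
    options = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0) - q.get(o, 0)) for o in options)

# Toy close-ended responses from a human panel and a synthetic panel
human = ["A", "A", "B", "C", "A", "B", "A", "C", "B", "A"]
synthetic = ["A", "A", "A", "B", "A", "B", "A", "C", "A", "A"]

dist = total_variation(answer_distribution(human),
                       answer_distribution(synthetic))
print(dist)  # 0.2 - fairly close, echoing the "decently well" finding
```

A low distance on close-ended items, combined with a qualitative read of the open-ended answers, is one way to structure the kind of human-vs.-synthetic comparison the post describes.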
-
How have technologies changed the way we look at customer research? Big data has given us more information to work with, allowing us to see trends, causal relationships, and patterns that develop among similar groups and demographics. In addition, AI has made it possible to streamline the analysis of massive data pools and take the workload off your team. The result is a new approach to research that's not only more accurate but also more efficient. We want to start a discussion - what are your thoughts on technologies like big data and AI in research? How do they help? What other technologies impact the global research sector? Share your thoughts. #research #technology #AI #bigdata
-
🚀 𝗗𝗮𝘆 𝟱 𝗼𝗳 #30DaysOfFLCode 𝘄𝗶𝘁𝗵 OpenMined: 𝗦𝘆𝗳𝘁 𝗮𝗻𝗱 𝗙𝗹𝗼𝘄𝗲𝗿 𝗧𝘂𝘁𝗼𝗿𝗶𝗮𝗹 🌻 Today, I wanted to get my hands dirty with some code!
1️⃣ Started off with the blog by OpenMined on 𝗦𝘁𝘂𝗱𝘆𝗶𝗻𝗴 𝗛𝗲𝗮𝗿𝘁 𝗗𝗶𝘀𝗲𝗮𝘀𝗲𝘀 𝘄𝗶𝘁𝗵 𝗣𝘆𝗦𝘆𝗳𝘁: https://lnkd.in/gFyCFh5K and the GitHub repo: https://lnkd.in/gyH5SpFR The tutorial works with the UCI Heart Disease Dataset, and the repo contains 6 notebooks for understanding FL in depth. Unfortunately, I was not able to complete the tutorial due to some errors 😌 but I'm on it...
2️⃣ So I went on to try the tutorial notebook by Flower Labs. Flower Labs provides a cool tutorial on the CIFAR-10 dataset, where we first load the data, train a centralized CNN in PyTorch, and finally run the same model in a federated setting with Flower. Great tutorial 😍😍 𝗖𝗵𝗲𝗰𝗸 𝘁𝗵𝗲 𝘁𝘂𝘁𝗼𝗿𝗶𝗮𝗹 𝗯𝗲𝗹𝗼𝘄: Get started with Flower: https://lnkd.in/gK74rp8a I will be posting a running Kaggle notebook soon.
Also continued reading the paper on the future of digital health with federated learning. If you’re curious about FL, follow my journey for daily updates, tutorials, and tips. Let’s learn together and push the boundaries of privacy-preserving AI! Check out the #30DaysOfFLCode webpage to join the challenge: https://lnkd.in/dF42iJf2 #FederatedLearning #MachineLearning #AI #PrivacyPreservingAI #LearningInPublic #30DaysOfFLCode #digitalhealth #Openmined
Federated Learning for Heart Disease Study with PySyft
openmined.github.io
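The federated setting in tutorials like these typically rests on federated averaging (FedAvg), where a server combines model updates from clients, weighting each client by the size of its local dataset. A minimal NumPy sketch of just the aggregation step - the function name and toy weights are illustrative, not taken from the Flower or PySyft tutorials:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model weights, weighting each client
    by the size of its local dataset (the FedAvg rule)."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two toy "clients", each holding a single layer of weights
client_a = [np.array([1.0, 1.0])]   # 25 local samples
client_b = [np.array([3.0, 3.0])]   # 75 local samples
global_weights = fed_avg([client_a, client_b], client_sizes=[25, 75])
print(global_weights[0])  # pulled toward client_b: [2.5 2.5]
```

In a real Flower run the framework handles this server-side aggregation for you; the sketch just shows why clients with more data have more influence on the global model.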
-
ICYMI: we compared survey results from a real, human Gen-Z panel with an AI synthetic panel created to mirror Gen-Zers. What we found about the quality of the synthetic responses is not to be overlooked 👀 Check it out at https://lnkd.in/eW4YjtAk, and if you would like the full detailed report, comment down below!
-
What would happen if we let #GenAI - a.k.a. synthetic respondents - take one of our surveys? How would their answers compare to those of humans? We were curious, so we did just that. Here is what we found:
1. Synthetic respondents answer simple questions reasonably well... but they can be tricked
2. They struggle with open-ended questions beyond the most obvious answers
3. The depth and breadth of their answers are lacking in general
So, is the concept of synthetic respondents useless? No, we don't believe so. But it is best to think of data from synthetic respondents as the output of a trained model simulation rather than a way of discovering new, uncovered consumer truths. The technology will surely evolve and improve with time, and that will allow its use cases to expand. For now, however, it is best used for survey testing, querying existing data, or hypothesis building rather than relied on to gain consumer insights and make business decisions. Read more in our blog: https://lnkd.in/eW4YjtAk - and if you would like to go into more depth, let us know and we will be happy to share our data with you!
-
Can AI replace humans as survey respondents? And what are the implications for market research agencies, brands, panel providers, and others? Our recent case study delves into the role of synthetic vs. human respondents in surveys - read the findings here: https://lnkd.in/eW4YjtAk #marketresearch #surveys #GenAI