AnswerRocket's Co-Founders Michael Finley and Pete Reilly shared how they tackle two critical challenges in leveraging AI for high-integrity business insights: #DataSecurity and #Hallucinations.
🔒 On Data Security
Pete emphasized, “We provide information traceable all the way to the SQL query, making the insights auditable and fully reliable.” AnswerRocket ensures that data flowing to language models isn’t stored or used outside its intended purpose, preventing leaks and protecting sensitive information.
🎯 Preventing Hallucinations
Mike explained, “We pose questions with all the supporting facts so the model doesn’t need to hallucinate.” Additionally, AnswerRocket grades AI responses, validating every number and its source before presenting it to users. “Demonstration is not value creation. What’s hard is making enterprise solutions of high integrity,” he added.
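To make that concrete, here is a minimal Python sketch of the idea Mike describes: pose the question with all supporting facts attached, then grade the reply by checking every number against those facts. The function names, example facts, and matching rule are simplifications invented for illustration, not AnswerRocket's actual implementation.

```python
import re

def build_grounded_prompt(question: str, facts: dict) -> str:
    """Pose the question with every supporting fact spelled out, so the model
    only has to phrase a conversational answer rather than invent numbers."""
    fact_lines = "\n".join(f"- {name}: {value}" for name, value in facts.items())
    return (
        "Answer the question using only the facts listed below. "
        "Do not introduce any other numbers.\n"
        f"Facts:\n{fact_lines}\n\n"
        f"Question: {question}"
    )

def grade_reply(reply: str, facts: dict) -> bool:
    """Grade the reply 'like a good teacher': every number the model cites must
    match one of the provided facts, or the reply is rejected and never shown
    to the user. (A real grader would also check each number is used correctly,
    not just that it appears.)"""
    allowed = {f"{value:g}" for value in facts.values() if isinstance(value, (int, float))}
    cited = re.findall(r"\d+(?:\.\d+)?", reply)
    return all(number in allowed for number in cited)

# Hypothetical facts computed upstream (e.g. by a SQL query), never guessed by the model.
facts = {"revenue this quarter ($M)": 42.7, "revenue last quarter ($M)": 39.1, "growth (%)": 9.2}
prompt = build_grounded_prompt("How did revenue change this quarter?", facts)
reply = "Revenue grew 9.2% to 42.7M, up from 39.1M last quarter."
assert grade_reply(reply, facts)  # every cited number traces back to a provided fact
```

The point of the design is that the model only ever phrases numbers it was handed; anything it invents fails the grade and never reaches the user.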
AnswerRocket combines security, transparency, and accuracy to create #EnterpriseReady AI solutions that empower leaders to make informed decisions with confidence. Learn more about how we have eliminated data leaks and hallucinations at the link in the comments below!
So I would start simply by saying that the idea of keeping data secure and providing answers that are of high integrity is table stakes for an enterprise provider: making sure that users who should not have access to data don't have access to it, and that the data is never leaked out. That's table stakes for any software at the enterprise, and it doesn't change with the advent of AI technology. So AnswerRocket is very focused on ensuring that the data flowing from the database to the models, whether it's the OpenAI models or other models of our own, does not result in anything being trained or saved so that it could be used by some third party, leaked out, or taken advantage of in any way other than its intended purpose. That's a core part of what we offer.

The flip side of that is, as you mentioned, many of these models are at this point famous for producing hallucinations: when you under-specify what you ask the model and don't give it enough information, it fills in the blanks. That's what it does. It's generative; the G in generative is what makes it want to fill in those blanks. AnswerRocket takes two steps to ensure that doesn't happen. First, when we pose a question to the language model, we ensure that the facts supporting that question are all present. It doesn't need to hallucinate any facts, because we only give it questions we already have the factual-level answers for, so that it can make a conversational reply. The second thing we do is, when we get that conversational reply, like a good teacher, we grade it. We go through checking every number: What is the source of that number? Is it one of the numbers that was provided? Is it used in the correct way? If so, we allow it to flow through; if not, we never show it to the user.

A demonstration is not value creation. A lot of companies that just learned about this tech are out there demonstrating some cool stuff, and it's really easy to make amazing demonstrations with these language models. What's really hard is to make enterprise solutions that are of high integrity, that meet all of the regulatory compliance requirements, and that provide value by building on what your knowledge workers are doing and helping them do an even better job. That's very much in the DNA of AnswerRocket, and it runs 100% throughout all the work we do with language models.

A lot of the fear you hear people voice, about prompts leaking data and so on, is coming from ChatGPT. If you go read the terms and conditions of ChatGPT, it says: we're going to use your information, we're going to use it to train the model, and it's out there. So that's where you see a lot of companies really lock down ChatGPT, and based on those terms and conditions, that makes sense. But when you look at the terms and conditions of, say, the OpenAI API, it does not use your data to train the model, the data is not widely available even to people inside the company, it's removed after 30 days, and so on. Those terms are much more restrictive and much more along the lines of what I think a large enterprise is going to expect. You can go to another level still: a lot of our customers already do a lot of business with, say, Microsoft.
And so Microsoft can also host that model inside the same environment where you're hosting all your other corporate data, so it really has that same level of security. If you already trust, say, Microsoft to host your corporate enterprise data, then trusting them to host the OpenAI model is really on that same level, and what we're seeing is that large enterprises are getting comfortable with that.

In terms of hallucinations, as Mike said, it's really about how we use it. We analyze the data and we produce the facts, and there are settings in these large language models that tell them how creative to get or not. You say: don't get creative, I just want the facts; give me a good business story about what is happening. And then we also provide information to the user that tells them exactly where that information came from, traceable all the way down to the database, all the way down to the SQL query, so that it's completely auditable in terms of where the data came from and can be trusted.
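As an illustration of those last two points, turning the model's creativity down and keeping the answer traceable to its SQL, here is a sketch using the OpenAI Python SDK's chat completions call with `temperature=0`. The model name, the SQL, the `run_query` helper, and the response payload are assumptions made up for the example; this is not AnswerRocket's code.

```python
from openai import OpenAI  # official OpenAI Python SDK

def run_query(sql: str) -> list[dict]:
    """Hypothetical stand-in for executing the SQL against the warehouse;
    in practice the facts come from the real database, never from the model."""
    return [{"region": "East", "sales": 1_250_000}, {"region": "West", "sales": 980_000}]

# The query that produced the facts is kept alongside the answer so it stays auditable.
sql = "SELECT region, SUM(sales) AS sales FROM orders WHERE quarter = 'Q3' GROUP BY region"
facts = run_query(sql)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model name
    temperature=0,    # "don't get creative, I just want the facts"
    messages=[
        {"role": "system", "content": "Narrate only the facts provided. Do not invent numbers."},
        {"role": "user", "content": f"Facts: {facts}\nWrite a short business summary of Q3 sales by region."},
    ],
)

answer = {
    "narrative": response.choices[0].message.content,
    "source_sql": sql,  # traceable all the way down to the SQL query
}
```

Keeping the generated narrative and its source query together is what makes the result auditable: a reader can re-run the SQL and confirm every figure.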
Get more insight on how we leverage the power of ChatGPT with our augmented analytics platform to securely get accurate insights from your data! https://bit.ly/3ALavTm