Stardog’s Post

In their new blog post, Stardog founders Kendall Clark and Evren Sirin introduce Safety RAG (SRAG): Improving AI Safety by Extending AI's Data Reach. We have had an overwhelming response to how we achieved #AI that is safe, secure, and hallucination-free, and Kendall and Evren break down how organizations can improve #AI safety by extending AI's data reach. The basic idea of SRAG is to complete GenAI's reach into enterprise data by unifying enterprise databases into a knowledge graph, then using that knowledge graph, together with semantic parsing, to ground #LLM outputs, thereby eliminating hallucinations. SRAG is fundamentally premised on bootstrapping a #knowledgegraph from databases and then using it to filter the hallucinations that occur when an LLM extracts knowledge from documents; a rough sketch of that grounding step follows below. Enjoy the blog here: https://lnkd.in/eXrButHs Learn more about Voicebox at Stardog.AI. #dataandai #safeai #srag #ai #knowledgegraphs #voicebox #stardogai
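
To make the grounding idea concrete, here is a minimal Python sketch of the filter step described above, using rdflib. It assumes a knowledge graph has already been bootstrapped from enterprise databases; the function name, the URNs, and the file enterprise_kg.ttl are hypothetical placeholders for illustration, not Stardog's actual API.

```python
# Minimal sketch: filter LLM-extracted triples against a bootstrapped
# knowledge graph. Anything the graph cannot confirm is treated as a
# potential hallucination and rejected rather than shown to the user.
from rdflib import Graph, URIRef

def ground_llm_output(candidate_triples, kg: Graph):
    """Split LLM-extracted (subject, predicate, object) strings into
    triples the knowledge graph verifies and triples it rejects."""
    grounded, rejected = [], []
    for s, p, o in candidate_triples:
        triple = (URIRef(s), URIRef(p), URIRef(o))
        # rdflib supports direct triple membership tests on a Graph.
        (grounded if triple in kg else rejected).append(triple)
    return grounded, rejected

# Load the knowledge graph bootstrapped from enterprise databases
# (hypothetical Turtle export for this sketch).
kg = Graph()
kg.parse("enterprise_kg.ttl", format="turtle")

# Facts an LLM extracted from documents: only those the KG can
# confirm survive the filter.
candidates = [
    ("urn:acct:42", "urn:rel:ownedBy", "urn:cust:7"),
    ("urn:acct:42", "urn:rel:ownedBy", "urn:cust:99"),
]
kept, dropped = ground_llm_output(candidates, kg)
```

The design point is that the knowledge graph, not the LLM, is the source of truth: extraction proposes, the graph disposes.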

Safety RAG: Improving AI Safety by Extending AI's Data Reach | Stardog

stardog.com

Todd N.

CISO, Cyber Security Expert | Board Member | Entrepreneur | MyCredibility

5mo

There is no such thing as hallucinations in AI. AI systems are not alive, are not persons, and do not have this ability; a "hallucination" is simply a bad or wrong answer.
