AESTIMA

Technology, Information and Media

Blockchain-enabled Web 3.0 integrated solution for autistic individuals and their caregivers

About us

Creation of domain-agnostic bots empowered by LLMs and a pre-designed set of data management tools to facilitate seamless integration with the user's knowledge base and search trajectory. www.aestima.io

Industry
Technology, Information and Media
Company size
2-10 employees
Headquarters
Riga
Type
Privately Held
Founded
2023

Updates

  • We are building an LLM-augmented tool designed by researchers for researchers. Aestima Superbot is a response to issues frequently faced by any researcher who works with texts as a primary type of data: a fragmented research workflow, the lack of nuance in existing search engines, the inconsistent reliability and accuracy of digital search tools, tedious data extraction and storage, difficulties in making use of collaborative user-generated content, and the pressure to stay competitive with more tech-savvy researchers.

    Aestima Superbot is a domain-agnostic bot empowered by OpenAI models and a pre-designed set of data management tools, built to integrate seamlessly with a user's knowledge base and to fit different research trajectories. It is a step toward making AI-powered evidence analysis and synthesis more accessible and available. We are looking forward to your critical view on the subject. Any advice is welcome!

  • I have spent a couple of days testing Aestima Bot and preparing a report that assesses expert opinions on the current valuation of Apple Inc. stock (AAPL). It took me about 4 hours to digest 43 sources of various types, including YouTube videos, web pages, and articles. The research workflow included: bulk source upload, a source relevance check, data extraction, data summarization, a hallucination check, and final synthesis. Once our tech gurus automate this workflow, we will be able to produce reports like this in practically no time, for any topic. You can do it too. The report follows.
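
    For readers who think in code, the six-step workflow above can be pictured as a simple linear pipeline. The sketch below is only an illustration of the manual process described in the post, not Aestima's automation; every function name and data shape is a placeholder invented for the example.

```python
# Illustrative pipeline skeleton for the workflow described above.
# All names and data shapes are placeholders, not Aestima's actual automation.

def bulk_upload(paths_or_urls):
    """Step 1: collect the raw sources (YouTube links, web pages, articles)."""
    return [{"source": p, "text": ""} for p in paths_or_urls]

def check_relevance(sources, topic):
    """Step 2: keep only the sources that actually address the topic."""
    return [s for s in sources if topic.lower() in (s["text"] + s["source"]).lower()]

def extract_data(sources):
    """Step 3: pull the claims, figures, and quotes out of each source."""
    return [{"source": s["source"], "claims": []} for s in sources]

def summarize(extracts):
    """Step 4: produce one short summary per source."""
    return [{"source": e["source"], "summary": ""} for e in extracts]

def check_hallucinations(summaries, sources):
    """Step 5: verify each summary against the text of its source."""
    return summaries

def synthesize(summaries, topic):
    """Step 6: merge the per-source summaries into the final report."""
    return {"topic": topic, "sources_used": len(summaries), "body": "..."}

def build_report(paths_or_urls, topic):
    sources = bulk_upload(paths_or_urls)
    sources = check_relevance(sources, topic)
    extracts = extract_data(sources)
    summaries = summarize(extracts)
    summaries = check_hallucinations(summaries, sources)
    return synthesize(summaries, topic)
```

    In practice each stub would be backed by an LLM call or a scraper; the point is only that the six manual steps line up into one linear pipeline, which is what makes automating them feasible.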

  • Aestima Superbot employs several techniques to improve alignment and reduce LLM hallucinations:

    1. Aestima Superbot supports several types of advanced prompting scenarios grounded in scientific research: ZeroShot, ReAct, DialogWithTools, and RCI with human critique. These scenarios aim to prevent LLM hallucinations, first by using external sources hand-picked by the user and second by forcing the LLM to apply its internal reasoning capabilities. Please see the video on prompting scenarios for more information.

    2. Several of Aestima Superbot's advanced prompting scenarios are augmented with an explicit demonstration of the LLM's reasoning steps behind the answer. This addresses the black-box nature of LLM-based tools and helps users understand the decision-making process of the LLM.

    3. By offering integration with a user-curated Zotero knowledge database, Aestima Superbot makes sure that the sources you use are relevant precisely to your field of research. This integration serves both alignment with user queries and the reduction of hallucinations.

    4. Aestima Superbot implements our hand-crafted tools based on the LangChain framework. Among other purposes, these tools handle the input context. By allocating parts of the context between the LLM and the tools, Aestima Superbot optimizes requests to the LLM so that the context contains only refined and relevant information. Refinement is achieved by sequential application of the LLM inside the tool and/or by embedding and chunk similarity search.

    5. By restricting chunk size to 1,500 tokens, Aestima Superbot limits the basic data unit processed by the LLM. A 1,500-token chunk gives the LLM enough context while decreasing information noise. Please see the video on chunks for more information.

    6. Aestima Superbot provides references for answers to all user queries where a source has been used (subject to the chosen scenario). You can always see the chunk of source text used during text generation to make sure that no hallucinations have occurred.

    7. Aestima Superbot offers a built-in, LLM-augmented fact-checking service that performs a reversed sequential check, identifying hallucinations by cross-referencing the output text with the reference text.

    8. Aestima Superbot automatically performs statistical validation of the LLM's output by calculating text similarity statistics. Please see the video on metrics and statistics for more information.

    None of these approaches is perfect on its own, but by applying some or all of them diligently and thoughtfully, users can greatly improve the alignment of the LLM with their queries and minimize the risk of hallucinations.
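
    To make points 4, 5, 7, and 8 above more concrete, here is a minimal, self-contained sketch of the underlying mechanics: splitting a source into chunks of at most 1,500 tokens, keeping only the chunks most similar to the query as LLM context, and then flagging answer sentences that are poorly supported by those chunks. This is not Aestima Superbot's code; the embedding model, the function names, and the 0.75 threshold are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: not Aestima Superbot's implementation.
# Assumptions: OpenAI embeddings ("text-embedding-3-small"), cosine similarity,
# a 1500-token chunk limit, and an arbitrary 0.75 support threshold.
import numpy as np
import tiktoken
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
enc = tiktoken.get_encoding("cl100k_base")

def chunk_text(text: str, max_tokens: int = 1500) -> list[str]:
    """Split a source into chunks of at most `max_tokens` tokens (point 5)."""
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens]) for i in range(0, len(tokens), max_tokens)]

def embed(texts: list[str]) -> np.ndarray:
    """Embed a list of texts and return unit-normalised vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def refine_context(query: str, chunks: list[str], k: int = 4) -> list[str]:
    """Keep only the k chunks most similar to the query (point 4)."""
    q_vec = embed([query])[0]
    chunk_vecs = embed(chunks)
    scores = chunk_vecs @ q_vec  # cosine similarity (vectors are unit length)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def flag_unsupported(answer: str, context: list[str], threshold: float = 0.75) -> list[str]:
    """Crude reverse check (points 7-8): flag answer sentences whose best
    similarity against any retained chunk falls below the threshold."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    sent_vecs = embed(sentences)
    ctx_vecs = embed(context)
    support = (sent_vecs @ ctx_vecs.T).max(axis=1)
    return [s for s, score in zip(sentences, support) if score < threshold]
```

    A production pipeline would split on proper sentence boundaries, use the LLM itself for the final cross-referencing pass, and report the similarity statistics rather than applying a hard cut-off, but the flow is the same: refine the context before the LLM sees it, then check the answer against the very chunks it cites.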

  • LLM hallucinations

    A couple of days ago I met with some old-time colleagues who complained about the LLMs' truly annoying tendency to hallucinate. In a follow-up post, I will unveil what we offer within Aestima Bot to take care of this headache.

  • AESTIMA reposted this

    We are inviting researchers who have a bit of time and a lot of aspiration for the practical application of Large Language Models to test our product. It's still an early-stage beta version, but it works! We offer user support, 200 USD in token credits, and an opportunity for further collaboration!
