The path to intelligent asset management

For over 75 years, a global asset management company has helped its clients achieve their financial goals. Millions of customers—both individuals and institutions—from over 150 countries trust the firm with their investments.

The company prides itself on its track record of developing innovative products and tools. It also aims to empower its employees to always do the right thing for clients.

Individuals within the company’s sales and distribution team – known as market leaders – had recently been experiencing information overload.

While these market leaders had access to copious amounts of structured data – abundant in a highly regulated industry like asset management – this data was held in a range of siloed solutions, including SQL databases, many of which were underutilized because of their complexity. 

Unable to quickly access the information they needed, market leaders spent too much time sifting through this data, which delayed their responses to clients. 


Fractal’s team, which brings together cross-functional expertise spanning engineering, design, and data science, set to work on building a versatile GenAI-powered virtual assistant that was not only easy to access but also promised to support better decision-making for the company’s market leaders and, in turn, improve both efficiency and productivity.

To address the team’s skepticism about GenAI, Fractal developed a proof of concept (POC) for a solution called Fluent. Fluent is a secure, versatile, intelligent assistant trained on the company’s data and expertise. It answers questions about the company’s funds and performance, performs document and web searches, and assists with tasks.

The POC, which took around three weeks, was a success, and the Fractal team kickstarted development of both Fluent and a GOE chatbot, using the Django REST framework for the backend web application programming interface (API) and the ReactJS framework for the frontend user interface. To provide a vital comprehension layer for human-computer interaction and to accommodate growing datasets over time, Fractal used a range of GenAI models and services from Microsoft, including Azure OpenAI large language models (LLMs), Azure AI Document Intelligence, and Azure AI Search.
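The article does not go into implementation detail, but the components named above fit the familiar retrieval-augmented generation pattern. The sketch below shows one minimal way such an endpoint could be wired together with Django REST framework, Azure AI Search, and Azure OpenAI; the index name, deployment name, environment variables, and prompt are illustrative assumptions, not details of Fluent itself.

```python
# A minimal retrieval-augmented answering endpoint, sketched with the stack
# named above (Django REST framework + Azure AI Search + Azure OpenAI).
# Names such as "funds-index" and "gpt-4o" are assumptions for illustration.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI
from rest_framework.response import Response
from rest_framework.views import APIView

# Retrieval layer: Azure AI Search over indexed fund documents.
search_client = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="funds-index",                      # assumed index name
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)

# Comprehension layer: an Azure OpenAI chat deployment.
llm = AzureOpenAI(
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
)


class AskView(APIView):
    """POST {"question": "..."} -> grounded answer plus the sources used."""

    def post(self, request):
        question = request.data["question"]

        # 1. Retrieve the top matching passages for the question.
        hits = search_client.search(search_text=question, top=5)
        passages = [doc["content"] for doc in hits]

        # 2. Ask the LLM to answer using only the retrieved context.
        completion = llm.chat.completions.create(
            model="gpt-4o",                        # assumed deployment name
            messages=[
                {"role": "system",
                 "content": "Answer using only the context below.\n\n"
                            + "\n---\n".join(passages)},
                {"role": "user", "content": question},
            ],
        )
        return Response({
            "answer": completion.choices[0].message.content,
            "sources": passages,
        })
```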

It took Fractal around six months to design and deploy the first phase of the solution. With continuous engagement, the alpha stage of implementation took around three months, and the beta stage took another three months.

The solution is already proving effective. Search recall scores, which measure how reliably Azure AI Search returns the information source relevant to a question, are already reaching 90-95%, meaning that in most cases the service surfaces the exact source needed to answer it. Meanwhile, the answering rate – the proportion of questions the LLM is able to answer – is around 90%.
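For readers who want to track these kinds of measurements on their own system, the snippet below shows one straightforward way to compute a search recall score and an answering rate from an evaluation log. The record fields are assumptions for illustration and are not taken from the Fluent evaluation.

```python
# A back-of-the-envelope sketch of the two quality metrics quoted above,
# computed from a hypothetical evaluation log.
from typing import TypedDict


class EvalRecord(TypedDict):
    retrieved_ids: list[str]   # document ids returned by the search service
    relevant_id: str           # the source known to contain the answer
    answered: bool             # did the LLM produce an answer (not a refusal)?


def search_recall(records: list[EvalRecord]) -> float:
    """Share of questions where the correct source appears in the results."""
    hits = sum(r["relevant_id"] in r["retrieved_ids"] for r in records)
    return hits / len(records)


def answering_rate(records: list[EvalRecord]) -> float:
    """Share of questions for which the LLM produced an answer."""
    return sum(r["answered"] for r in records) / len(records)
```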

The next steps include reducing the application’s latency to ensure a great user experience, achieved through faster services, optimized logic flows, and the removal of unnecessary features. Following this, the client expects engagement with the application to grow over time, bringing simplified processes, improved access to information, and a reduced cognitive load for existing day-to-day tasks.

