Finovate Fall Recap

By: Tyler Brown

SEPTEMBER 17, 2024

Finovate Fall last week signaled which topics were top of mind in fintech and which companies had the ambition and traction to present. Artificial intelligence (AI) was wildly popular at the expo and in the general session — we counted eight general-session presentations focused squarely on AI and generative AI. Unlike in recent years, Banking-as-a-Service (BaaS) and embedded finance were more often framed as risks than opportunities, and conversations about crypto were sparse and brief.

Some things we noticed from the Finovate expo, demos, and sessions:

  • Applications of AI were everywhere. Nearly a whole day of content was dedicated to AI. Companies demoed applications in natural language queries and instructions, document ingestion and information structuring, natural language reporting, and conversational banking. Notably, nearly all of these applications were for internal use, with the option to be trained on internal data sets.
  • The contraction in BaaS was clear. On-stage attention focused on third-party risk management and the impact of consent orders. A BaaS platform demoed, but no booth stood out with an embedded-finance solution. One vendor offered automation for risk and compliance specific to advertising and support conversations, including monitoring for brand and fintech partners.
  • Fraud prevention and BSA/AML solutions were themes. Solutions included AI-driven ID and document verification, fraud detection, and scam detection. Another use case was biometrics, including voice-based authentication and deepfake detection; tools often could be configured for risk profile, tolerance, and other needs.

  • Personalization and individualization got some attention. Customer segmentation, predictive analytics, and personalized experiences were present in different solutions. One vendor advertised AI-driven personalization to optimize communications and individualize website content. Another took personalized experiences down market with no-code tools.
  • Data management was a clear need. Solutions for ingestion, centralization, observation, and deployment of data were present. The theme was financial institutions’ use of different data sources, including extracting unstructured data, running analytics on sources of information controlled by or plugged into the FI, and making that data usable across the business.
  • A presentation featured CCG Catalyst research. The Open Banking track keynote on Wednesday featured insights from our recent report, “US Open Banking 2024.” The presentation addressed the philosophy, technology, and policy of open banking; key tenets of the proposed rule; regulatory scope and risks to FIs; and a framework for the industry’s next steps.

CCG Catalyst will be at Money 20/20! Stay tuned for the recap.



Data Strategy in the Age of AI

SEPTEMBER 19, 2024

By: Tyler Brown

Artificial Intelligence

Using data effectively is a core business priority in banking, according to a sample of C-level bankers CCG Catalyst surveyed in 2023. When asked to choose their top three priorities for the next five years, 50% of respondents picked “leveraging data analytics to improve operations and opportunities.” With legacy infrastructure, years of technology debt, and divisions within the organization itself, that’s much easier said than done. Without a single source of truth, financial institutions (FIs) face redundant, missing, or conflicting data. That data fragmentation can hamper an institution’s day-to-day operations and long-term strategy.


Fragmented data is a sales and customer retention problem. Different pieces of data, like demographics, product ownership, spending behavior, or financial goals, are isolated from one another, preventing a holistic view. As a piece in the ABA Banking Journal notes, data may fail to line up with what another department sees, siloed teams may lack immediate access to data sets that would help them do their jobs, and data provided by different vendors may not be compatible in the first place, forcing manual work and producing suspect results.

In the age of artificial intelligence (AI), efficiency is the tip of the iceberg for data management. AI models need consistent, correct, and up-to-date data to function well; fragmented data must first be sourced, formatted, and merged in ways that make it useful. Tens or hundreds of applications within different business units for different use cases will make it hard to do that. To effectively use AI models in the long run across the bank’s functions, data must be centralized and synced in real time.

To reach this end, FIs need a data strategy — establishing what data they will collect, how they will manage it, and how they will use it — driven by business strategy and supported by the right technology. That takes technical chops within the FI and a vendor-assessment process that acknowledges the role that data will play in the FI’s long-term success. It isn’t just about internal data sources controlled by the FI. External data sets are also a crucial source of business intelligence and analysis.

As we’ve written, there are a few crucial things to know about the data an FI holds before it implements a data strategy. Knowing what data you have, where it comes from, and its quality are early steps:

  • Data storage: Where does customer data go once it’s produced? How do you centralize that data when it’s siloed between systems?
  • Data ingestion and aggregation: What zero- and first-party data do you have, what third-party data do you have access to, and how do you extract and import it?
  • Preparation: How is your data formatted when it’s extracted? What needs to be done to clean and standardize it in ways that make it usable for analytics?
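The centralization and preparation steps above can be sketched in code. The following is a minimal, hypothetical illustration — all system names, field names, and records are invented for this example, and a real FI would use a proper data platform rather than in-memory dictionaries — showing how records from two siloed systems might be normalized, merged on a shared key, and checked for conflicting values:

```python
# Hypothetical sketch: consolidating customer records from two siloed
# systems into a single view, with basic cleaning and conflict flagging.
# All source names, fields, and records are invented for illustration.

# Records from a hypothetical core banking system
core_records = [
    {"customer_id": "C001", "name": "JANE DOE", "email": "jane@example.com"},
]

# Records from a hypothetical CRM, keyed the same way but formatted differently
crm_records = [
    {"customer_id": "C001", "name": "Jane Doe", "email": "JANE@EXAMPLE.COM",
     "segment": "small business"},
]

def normalize(record):
    """Standardize casing so records from different systems can be compared."""
    out = dict(record)
    if "name" in out:
        out["name"] = out["name"].title()
    if "email" in out:
        out["email"] = out["email"].lower()
    return out

def merge_sources(*sources):
    """Centralize records by customer_id, flagging conflicting field values."""
    merged = {}
    for source in sources:
        for raw in source:
            rec = normalize(raw)
            cid = rec["customer_id"]
            existing = merged.setdefault(
                cid, {"customer_id": cid, "_conflicts": []}
            )
            for key, value in rec.items():
                if key == "customer_id":
                    continue
                if key in existing and existing[key] != value:
                    existing["_conflicts"].append(key)  # surface for review
                else:
                    existing[key] = value
    return merged

customers = merge_sources(core_records, crm_records)
print(customers["C001"]["name"])     # casing normalized across both systems
print(customers["C001"]["segment"])  # field contributed by the CRM silo
```

The point of the sketch is the pattern, not the plumbing: extract from each silo, standardize formats before comparison, merge on a stable key, and surface conflicts for human review rather than silently overwriting them.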

Modern infrastructure is also fundamental, and better data governance is another step — sharpening the people, procedures, systems, and metrics that help an organization control and have visibility into its data.

An FI’s data is a vital competitive advantage — particularly what it knows about its customers. But data is of value only when it’s usable, and the volume of data required to train and feed AI-driven systems makes that imperative even more important.
