Architecting Seamless Data Integration
Author: Mike Fratto, Product Manager
In modern software design, system-to-system integrations are increasingly the norm for achieving mission-critical goals. This is especially true in today’s government environment, where systems connect both within a single agency and across agencies to meet business priorities. In such a landscape, it is imperative for software teams to be deliberate about architecting their systems to handle the incoming and outgoing flow of data.
That is the case on one of the programs Amivero serves. At a high level, the program supports a case management system used by federal agents to process applications from the public.
Sparked by the agency’s desire to disincentivize paper processing, a need arose for a team to tackle the system’s digital intake – that is, its receipt of case application information via JSON payloads. Amiverians serve in key positions on that intake team and have had to make key decisions along the way.
Our approach was centered on the principle of “separation of concerns,” a fundamental software design concept advocating that each system have a narrowly defined set of responsibilities. For the case management system, interfacing with external systems to receive application payloads is a distinct function, separate from its other operational responsibilities. To manage this effectively, our team decided to develop a dedicated intake application. By building an intake app that lives on its own, the team simplified the work of every other component that makes up the case management system.
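To illustrate the idea, here is a minimal sketch of what a narrowly scoped intake boundary might look like. The field names and types are hypothetical – the article does not publish the real schema – but the point is that the intake app’s only job is to accept a raw payload and hand off a parsed application, with no case-management logic leaking in.

```python
from dataclasses import dataclass


@dataclass
class Application:
    """The intake app's hand-off type; downstream systems never see raw JSON."""
    applicant_name: str
    case_type: str


def receive_payload(raw: dict) -> Application:
    """Intake's single responsibility: parse the payload and hand it off.

    Validation, case creation, and adjudication all live elsewhere.
    (Field names here are illustrative, not the program's real schema.)
    """
    return Application(
        applicant_name=raw["name"],
        case_type=raw["type"],
    )


app = receive_payload({"name": "Jane Doe", "type": "renewal"})
```

Because the rest of the case management system only ever sees `Application` objects, changes to how payloads arrive stay contained inside the intake app.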
After a year of working with one upstream partner that sources application data, a new challenge arose when business priorities dictated that the intake app start receiving payloads from a second upstream data source. The crux of the problem was that the second source system had a very different payload structure than the existing partner. The team first tried to negotiate with the new partner to adjust its payload to match the structure the intake app already received. That plan ran into trouble because the second partner was already sourcing data to other systems that conformed to its payload structure. Inquiries into whether the second source system could customize or fork its logic to accommodate the existing payload structure proved unrealistic as well. The line was drawn: the intake app would need to handle two different incoming payload structures.
We implemented a solution in which the intake system normalizes incoming data into a common internal format, maintaining consistency throughout the subsequent intake processes. This adaptation not only preserved the integrity of the data flow but also improved overall system efficiency.
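The normalization step can be sketched roughly as follows. The source identifiers and field names below are invented for illustration – the real schemas are not published in the article – but the pattern is the same: each upstream structure is mapped onto one common format at the boundary, so everything downstream handles a single shape.

```python
# Hypothetical payload shapes for two upstream partners; the real
# program's schemas are not public, so these names are illustrative.
def normalize(payload: dict, source: str) -> dict:
    """Map either upstream payload structure onto one internal format."""
    if source == "partner_a":
        # Partner A sends flat, camelCase fields.
        return {
            "applicant": payload["applicantName"],
            "filed_on": payload["submissionDate"],
        }
    if source == "partner_b":
        # Partner B nests the same information differently.
        return {
            "applicant": payload["person"]["fullName"],
            "filed_on": payload["meta"]["filedOn"],
        }
    raise ValueError(f"unknown source system: {source}")


a = normalize({"applicantName": "Jane Doe", "submissionDate": "2024-01-15"}, "partner_a")
b = normalize(
    {"person": {"fullName": "Jane Doe"}, "meta": {"filedOn": "2024-01-15"}},
    "partner_b",
)
```

After this step, `a` and `b` are identical dictionaries, which is exactly why the rest of the intake pipeline never needs to know which partner a filing came from.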
The intake system’s design is inherently event-driven, allowing for asynchronous, background processes that manage everything from data validation to final case creation. This approach ensures that while most of the intake processes are automated, critical checkpoints are in place for human oversight. For instance, when data anomalies are detected, users can intervene through a custom-built dashboard, correct the discrepancies, and resume the automated processes. With that architecture in place supporting two different incoming payload structures, one additional keystone is monitoring the health of production. A set of notifications and tracking tools gives the product team the ability to visualize the flow of filings and intervene when necessary.
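The interplay between automated processing and human checkpoints might be sketched as below. This toy version uses an in-memory queue where a production system would likely use a message broker, and the validation rule and queue names are assumptions, not the program’s actual implementation.

```python
import queue

# Incoming filing events; a real event-driven system would use a
# message broker rather than an in-memory queue.
events: "queue.Queue[dict]" = queue.Queue()

created_cases: list[dict] = []   # filings that passed validation
review_queue: list[dict] = []    # anomalies parked for dashboard review


def validate(filing: dict) -> bool:
    """Toy validation rule: a filing must name an applicant."""
    return bool(filing.get("applicant"))


def create_case(filing: dict) -> None:
    """Final step of the automated pipeline."""
    created_cases.append(filing)


def process_events() -> None:
    """Drain the event queue: automate the happy path, but park
    anomalies for a human to fix via the dashboard and resume."""
    while not events.empty():
        filing = events.get()
        if validate(filing):
            create_case(filing)
        else:
            review_queue.append(filing)


events.put({"applicant": "Jane Doe", "filed_on": "2024-01-15"})
events.put({"applicant": "", "filed_on": "2024-01-16"})  # anomaly
process_events()
```

After the run, one filing becomes a case automatically and the anomalous one waits in the review queue, mirroring the checkpoint-and-resume flow the dashboard supports.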
This blend of automation and human interaction underscores the importance of our approach – a commitment to a human-centered, data-driven solution.
-------------------------
Please share your thoughts, experiences, or questions in the comments below. We'd really like to hear from you. If you’re interested in learning more, email us anytime at info@amivero.com.