Release 5

This page is part of the FHIR Specification (v5.0.0: R5 - STU). This is the current published version in its permanent home (it will always be available at this URL). For a full list of available versions, see the Directory of published versions.

FHIR Infrastructure Work Group | Maturity Level: N/A | Standards Status: Informative

FHIR offers numerous architectural approaches for sharing data between systems. Each approach has pros and cons. The most appropriate approach depends on the circumstances under which data is exchanged. This page provides an overview of each approach, with pros, cons and a decision tree to help guide choices about which approach best suits a specific set of circumstances.

The recommendations expressed here reflect the general beliefs of the community based on experience to date. Even so, there are other considerations that impact implementation choices, including legacy infrastructure, the existing capabilities of communication partners, etc. Following the guidance here is not required to claim FHIR conformance, but adhering to these recommendations is likely to result in lower long-term implementation costs, more interoperability partners, etc.

Not all the options listed here were defined in early versions of FHIR, and it is likely that new exchange alternatives will continue to be identified by the FHIR community over time. As these approaches become more broadly used and standardized, and as experience accumulates within the community, this documentation will continue to be updated. Feedback on the advice offered here is welcome - simply use the Propose a Change button at the bottom of any FHIR specification page.

This page focuses on data access mechanisms that are either defined within the core FHIR specification or are otherwise implementable using the interfaces defined by the FHIR specification without requiring any additional standardization. Additional access mechanisms may be defined in other FHIR implementation guides or within other specifications. There is, for example, a community effort outside the HL7 standardization process to allow a relational query interface to access FHIR information (see SQL-on-FHIR).

To encourage consistency across implementation guides, each guide should identify the path(s) taken when navigating the tree to support its choices. This will also provide a basis for justifying situations where approaches vary.

Note: Implementation guides SHOULD provide an explanation of the considerations (whether drawn from this document or otherwise) that drove the selection of the interoperability approach chosen for the guide and the use cases it is trying to solve.

The guidance provided in this section is focused on the choice of technical exchange mechanism. It does not cover higher-level design decisions, such as what sorts of exchanges should occur, what parties they should occur between, what points in the workflow the exchange should happen in, etc. These decisions should be made prior to trying to apply the guidance here.

This section focuses on the technical alternatives that enable exchange between data sources and data consumers. All such exchanges, and the decisions about the mechanisms used, will also need to take into account the creation of and adherence to regulations, contracts, business arrangements and other conventions and ensure that the information shared is 'appropriate' to the use-case. Implementers and designers may want to consult legal counsel as part of their design process.

This section does not address expectations around security or authentication (except where specific exchange mechanisms have specific tradeoffs around what security or authentication mechanisms are possible). Designs will need to ensure that appropriate security mechanisms are in place for whatever data is shared. (Refer to the security checklist for additional considerations in this space.)

This section is also intended to have international scope and the terminology used here is used in its traditional English usage, not necessarily as it might be defined in particular jurisdictions. For example, the term 'sensitive data' in some jurisdictions has a specific legal meaning. When used here, it simply means that the data may need some additional level of protection, whether for privacy, business or other reasons.

NOTE: This document focuses on exchanging information using FHIR but, in theory, the principles described here would apply equally well to equivalent exchange mechanisms using other standards. For example, FHIR messaging is analogous to HL7 V2 messaging, and FHIR documents are analogous to HL7 CDA documents. If using other standards, the set of architectural approaches to exchange is likely to be more limited than those available for FHIR.

This section is broken into several components, described in the sections that follow.

The guidance provided here focuses on specific architectural trade-offs between different FHIR exchange approaches and provides guidance on what approaches are best used in particular circumstances, all other things being equal. However, the reality is that it is rare for "all other things" to be equal. Designers will need to take into account other factors, including:

  • What approach(es) fall within the reasonable technical capabilities and development capacity of the systems that will be participating?
  • What legacy solutions already exist and how easy will it be to migrate those solutions to the specified approach?
  • Are there regulations, business policies, or privacy and/or security concerns that mandate or prohibit particular approaches?
  • What impact does the proposed solution have on performance/response time?
  • Will the exchange mechanism allow appropriate constraint on data retrieved (whether enforced by data source, data consumer or both)?
  • etc.

The following table provides an overview of several different approaches to data exchange. Each approach includes a short description and a link to a page providing more detail about the approach, including an interaction diagram demonstrating the flow pattern between the different systems. Each approach also includes a rating that reflects the degree of community support for the approach and the long-term reusability associated with implementing the approach. Both are ranked as either 'low', 'moderate' or 'high'.

  1. Re-use indicates the amount of reusability that is likely to be achieved from implementing a specific interoperability solution, where:
    1. 'High' indicates that the approach is likely to be applicable to a broad set of use-cases with minimal/no redevelopment and little/no negotiation required to establish a functional interoperable interface.
    2. 'Moderate' indicates that the approach has a reasonable chance of utility for at least some other use-cases, though additional development and/or negotiation or configuration may be required.
    3. 'Low' indicates that the solution is generally use-case specific and support for additional use-cases using the same mechanism is almost certain to require additional development. Negotiation will be required to define what data will be sent and how it will be requested. Note: 'negotiation' means that technical discussion and agreement will be required but does not necessarily imply a need for contractual agreement.
  2. Adoption indicates whether or not systems that have implemented FHIR for data exchange are likely to support the approach, where:
    1. 'High' indicates that many/most systems that support data exchange will support the approach.
    2. 'Moderate' indicates that there are a reasonable number of implementations of the capability, but support is not widespread.
    3. 'Low' indicates that few systems other than some of the reference implementations are known to have production support for the mechanism.

    Rankings are approximate and might not hold in all environments. Also, in environments where FHIR adoption is low, even a 'high' ranking might not result in much support in that implementation space.

Assertions around the degree of re-usability and community adoption are not country-specific and are not based on formal measures, but rather reflect an informal, subjective evaluation based on what has been observed in the areas of implementation, questions, global and regional IGs, and feedback from implementers. There are no formal measures of these characteristics available (as yet). There may be variability in terms of adoption within specific subsets of the FHIR community, and adoption patterns may change over time. These considerations are intended to inform, but not constrain, the design decisions of individual implementation guides.

As a rule, use of less re-usable and/or less adopted communication mechanisms will require more negotiation to achieve interoperability, and therefore the costs associated with the solution are likely to be greater. That does not mean that architectural circumstances will not make these approaches necessary (and justify the increased cost), merely that if there is a choice between equally viable architectural alternatives, consideration should generally be given to the one with the lowest overall long-term cost to the interoperability community. Reducing overall cost to industry (payers, providers, patients and care-givers, etc.) is expected to be beneficial to most, if not all, participants in the long term.

The table below is ordered first by those that maximize long-term re-use and then by those that are most supported in real production systems. All things being equal, those appearing earlier in the table are likely to be better choices. However, all things aren't equal, so consider the ratings in the table in parallel with the decision tree found following the table, i.e. a 'High/High' rating does not imply that the solution will be the best option for a particular use-case. However, if after evaluating the options based on the decision tree, there are multiple possible options, their respective adoption and re-usability ratings may help to choose between them.

The size of the list of approaches (and the size of the decision tree diagram following) may seem overwhelming. Remember that there is no need to support all these approaches or even most of them. All of them are appropriate for use for some use cases. However, systems (and implementation guides) only need to worry about the approaches relevant for their own needs. Hopefully, this document will provide the necessary overview and guidance to allow the selection of the approach(es) that will best fit your implementation needs.

Approach Re-use Adoption Description Example
read High High The data consumer uses the FHIR read operation to retrieve the current record of a resource with a known 'id'. A payer has received a prior authorization request that includes a reference to a Practitioner resource that the payer does not have a local copy of. The payer executes a read to retrieve the referenced record.
RESTful search High High The data consumer uses the FHIR search mechanism to describe the desired data (using _include and _revinclude to pull in related resources), and the data source returns the requested information if available. A SMART app is guiding the completion of a questionnaire. It executes a query for recent patient medications to provide options for the user when filling out the form.
batch search High High The data consumer sends a FHIR batch request to the data source containing multiple search, _filter, query and/or operation requests. All the requests are executed, and the responses are returned in a batch response. A payer system evaluating a claim needs specific information about specific lab results and medications, with different time periods relevant for different lab results. To save time, rather than executing separate searches for the medications and each category of lab information, it packages all the queries into a batch and transmits them in a single operation.
Polling High High The data consumer queries the data source at regular intervals checking to see if there is new data that matches a specific set of criteria. An imaging center has submitted a prior authorization request and the initial response was 'pended'. The data consumer is now checking back on a regular basis to see if a decision has been made, and if so, what the answer is.
REST create High High The data source POSTs a single resource instance to the RESTful endpoint of the data consumer. A payer places a new Insurance Plan in a central registry available to EHRs.
REST update High High The data source PUTs a single resource instance to the RESTful endpoint of the data consumer. An EHR updates demographic information about a specific Practitioner on a shared registry.
Batch Bundle High High The data source creates a 'batch' Bundle requesting the creation and/or updating of various resources and posts it to the RESTful endpoint of the data consumer. An EHR has a collection of updates and new providers they want to communicate to a registry. Each provider record is independent of the others.
Transaction Bundle High High The data source creates a 'transaction' Bundle requesting the creation and/or updating of various resources and posts it to the RESTful endpoint of the data consumer. A payer wishes to post a Practitioner and all of their associated PractitionerRole instances to a registry. The PractitionerRoles point to the Practitioner. The payer does not want any of the records to be created unless they all are.
vread High Moderate The data consumer uses the FHIR vread operation to retrieve a specific version of a resource with a known 'id'. A payer has received an Encounter containing a version-specific reference to a Condition as the 'reason for admission'. The payer retrieves that specific version because they need to understand "what was known/believed at the time of admission" and the current Condition record may have subsequently evolved.
history High Moderate The data consumer uses the FHIR history operation to retrieve a list of all (or a filtered subset) of the versions of a resource with a known 'id'. A payer is synchronizing their local practitioner registry with a centralized registry. They regularly query to retrieve all updates that have occurred in the last hour.
CDS Hooks High Moderate The data source makes a CDS Hooks request to the data consumer who is acting as a CDS Hooks Service. The service then returns 0..* cards containing decision support content. A provider is creating a new lab order. The EHR uses CDS Hooks to check whether prior authorization might be required for any of the ordered tests.
CQL search High Low The data consumer invokes an operation specifying CQL to be invoked on the data source that will select and return a set of desired data and receives back a search Bundle. An EHR retrieves CQL that describes information to include when submitting a prior authorization. The EHR executes the CQL and, after review for accuracy, includes the package of information as part of the prior authorization request.
_filter search High Moderate The data consumer uses the FHIR search, _filter or GraphQL mechanisms to describe the desired data, and the data source returns the requested information - if available. A payer is querying observations from an EHR. Minimum necessary rules mean that to filter to only receive the information to which they are entitled, the query must include complex logic of nested 'and' and 'or' clauses. _filter allows the desired data to be retrieved in a single call rather than multiple consecutive calls.
Subscription High Moderate The data consumer configures a subscription on the data source describing the type of data of interest and the events that should cause notification. When the described event type(s) are triggered on the data source system, the data source pushes a notification to the data consumer either containing the desired data or prompting the data consumer to query for the desired data. A payer has submitted a request to a provider for data in support of a claim. The payer creates a subscription on the provider's EHR that will notify the payer when the submitted Task has been updated - either with new status information or a link to the requested data.
Task High Moderate The data consumer creates a Task either on the data source's system or a system monitored by the data source requesting the sharing of data. The data source then updates the Task with agreement to perform, progress status, and eventually a link to the requested data. The data consumer monitors the Task by subscription or polling. A payer needs the clinical data that 'supports' an emergency surgery. The payer does not know where the relevant supporting information is stored or what form it might take (lab results, radiology reports, PDFs, etc.). To avoid looking at irrelevant data, the payer submits a Task to the EHR asking a clinician or administrator to locate and return the relevant documentation.
GraphQL search High Low The data consumer uses GraphQL to filter data from a resource instance, a RESTful query or an operation outcome, including information from related resources and selecting the specific elements desired and optionally flattening structures. A web-based member-facing application needs to provide a summary of recent claims. It uses a GraphQL API to retrieve only needed elements organized in a manner tuned to the web page's layout, minimizing complexity in the design of the web client.
REST patch High Low The data source uses PATCH to revise the information in a single resource instance via the RESTful endpoint of the data consumer. A payer maintains a List resource containing the patient's current problem list (as understood from the payer perspective). (They have many other Condition resources they maintain for other reasons, so the List is necessary to provide a filtered 'problem list' view.) A patient submits a request to adjust one of the items on the List. The request comes as a differential rather than a complete replacement of the entire list.
SPARQL search High Low The data consumer uses a local SPARQL engine to manipulate triples accessed by hitting a data source's RDF endpoint. A payer has a knowledge repository that, together with various ontologies, allows the payer to reason about patients who would likely benefit from (and have better outcomes/long-term costs) if they were on a new treatment. The payer uses SPARQL to access the EHR's clinical data to identify such patients and make recommendations to the patient's capitated care provider. (Note: This is an example of how this data exchange mechanism could theoretically be used. It is not intended to define how any particular IG will define data access.)
FHIR Document Moderate Moderate The data source assembles a collection of FHIR resources into a human-readable document and transmits it to the data consumer. A member has shifted their coverage to a new payer. The new payer wants to retrieve a list of active treatments from the old payer. The old payer provides a 'transition of coverage' document that organizes relevant aspects of the patient's current claims, treatments, and other information in a contextual way to help the new payer maintain continuity of care.
Collection Bundle Moderate Moderate The data source assembles a collection of related resources into a 'collection' Bundle and creates or updates it on the data consumer's batch endpoint. A payer has a collection of information that has been provided as part of an X12 claims submission. Internally, they wish to store the information as FHIR, but they want to retain the information in a 'package' representing all the information that came in a single submission.
CommunicationRequest Moderate Moderate The data consumer creates a formal CommunicationRequest order and uses one of the workflow communication patterns to ask the data source to fulfill the order. The results are then returned using one of the push mechanisms. A provider creates a formal order authorizing (on a patient's behalf) the disclosure of clinical information from a third party to a payer.
Query search Low Low The data consumer invokes a custom query operation on the data source and receives back a search Bundle. A group of payers wish to expose their claims data to patient applications, but their back ends do not support a full RESTful query interface. Only certain parameters can be used and only in specific combinations. The group defines a standard _query operation that allows retrieval of a collection of information related to a claim, with only a limited set of parameters.
FHIR Retrieval Operation Low Moderate The data consumer invokes a custom operation on the data source requesting information by parameters on the URL and/or in the body and the response to the operation (synchronous or asynchronous) contains the requested data. A payer wishes to receive a summary of average inpatient stay durations for a particular type of service. It invokes the operation and the EHR calculates the current average for the past year and returns the result.
FHIR 'Process' Operation Low Moderate The data source invokes a custom operation on the data consumer passing information it wishes to convey in the body of the operation or as references, asking the consumer to 'process' the information. Processing might or might not result in some or all of the information being stored or otherwise consumed and acted upon. An EHR wishes to submit a prior authorization via FHIR, but regulations require that the payer receive the information over X12. The EHR invokes an operation on an intermediary, passing the relevant prior authorization resources; the intermediary takes care of the X12 conversion.
FHIR Messaging Queries Low Moderate The data consumer sends a FHIR message to the data source requesting information and the data source responds (synchronously or asynchronously) with a message containing the requested data. A payer is interacting with an EHR financial system that uses a v2 interface. Data is converted from v2 into a FHIR message to ensure the data flows to the payer in a familiar form.
FHIR Messaging Notifications Low Moderate The data source sends a FHIR message to the data consumer providing information related to some sort of event that has occurred. An EHR notifies a payer that a patient has been admitted and provides additional information, such as patient condition, etc.

With so many possible approaches to exchanging data, implementers need guidance on how to choose the appropriate approach for a given use-case. In practice, the choice will be influenced by existing infrastructure, architectural preference, and other considerations. However, the following diagram and associated descriptions aim to provide "best practice" guidance about what approach to use in which circumstances. Adhering to the recommendations here will help reduce long-term implementation effort and cost across the full data sharing community. It will also help increase the likelihood that FHIR solutions that are designed independently will land on the same architectural approach. In all cases, if you're uncertain of the best architectural approach for a given exchange, raise your questions on the chat.fhir.org implementer's forum. The community will often be able to provide additional considerations, guidance about what solutions already exist in the space, and nudge you toward the solution that is most likely to successfully meet your requirements and integrate with other solutions.

The following diagram provides a decision tree to help guide the selection of a data exchange approach. Each decision branch includes a hyperlink to a section further below that provides a detailed description of the considerations in making the choice for the decision point. Each "exchange option" will link to a web page that provides a detailed walkthrough of the interoperability pattern and further guidance on its use - as well as pointing to the relevant portions of the FHIR spec that describe how to implement the approach. While it is possible to use the decision tree while only paying attention to the branches 'relevant' for your specific use-case, readers are encouraged to familiarize themselves with all the content so they can be sure they're not excluding options that may be relevant to their requirements.

(Figure: FHIR exchange decision tree - a flowchart walking through the decision points described in the sections below, from push vs. pull and direct vs. routed connections through to the specific exchange options.)

The first consideration is whether the desired data should be 'pulled' or 'pushed'. In the data consumer-initiated 'pull' scenario, the event that initiates the exchange of data occurs in the data consumer system. This might be a user clicking a button, or some sort of internal event that triggers a need for data. The data consumer system is then responsible for determining what data to retrieve and when it should be returned, as well as for initiating the flow by communicating with the data source.

In the data source-initiated (or 'push') exchange, the event that determines the need for data to flow occurs in the system that owns the data. This might be driven by human action, the creation or change of the data to be shared, or some other system event that makes it necessary to make the data consumer aware of a specific set of data.

Notes:

  • Data source-initiated data flows may include expectations for behavior on the part of the receiver, while data consumer-initiated flows generally don't impose any such expectations - the consumer decides what to do with the data they have asked for.
  • Some data consumers may be reluctant/unable to initiate data transfer for legal/policy reasons - i.e. it is deemed acceptable to be pushed data, but not to pull it. Typically, this stems from right of access/permissions concerns where only the data source is deemed to have authority as to what information should be shared and when.

If the data consumer will initiate, the next choice is whether there is a direct connection between source and consumer. If the trigger for exchange will be driven by the data source, the next decision is whether the need to share the data is configurable by the data consumer.

Return to diagram

The next question is whether the data consumer and data source can talk to each other directly. In some interoperability environments the two systems might not have any means of knowing the network address of the other and all communication must happen through an intermediary. In some cases, the communication can happen via a network proxy where a request is automatically forwarded to a secondary location (potentially determined by the content of the request). In some environments, there may even be multiple levels of proxying, though this can limit the ability of the system to perform in a synchronous manner. From the perspective of this decision tree, proxied connections are still treated as a 'direct' connection. Both direct and proxied communications proceed through the tree to the next decision point of whether human intervention might be required.

However, in other environments, information exchanges are 'routed'. The initiator of a communication specifies a logical identifier of the desired target system and passes the communication into a network of intermediaries which then route the communication so that it eventually reaches the appropriate network address. In this routed form of exchange, the only option for exchange is query-based messaging.

NOTE: There is work underway as part of the U.S. FAST initiative to define an additional layer on top of FHIR RESTful interfaces that will support indirect delivery over RESTful interfaces. More about the project can be found here. This may reduce the need for message-based approaches in the future.

Return to diagram

When a data consumer asks for information from a data source, the next question is whether the request is purely at a system-to-system level where the request will be exclusively processed by software or whether there is the potential for humans to be involved in the preparation of the response. Human involvement means that the request must arrive in a form that can be persisted until a human can look at it and with sufficient contextual and descriptive information for a human to understand what is needed. Human involvement in responding to a request for data might occur for many reasons:

  • The data might not exist in queryable electronic form. It might only exist on paper, in someone's head or in some non-integrated third-party system.
  • The data might not exist in a predictable location. For example, different systems might store the relevant information in different resources, using different codes or categories, etc. This means that the data consumer would not know the appropriate system-to-system query to use. However, a human being at that specific location/organization would be aware of the organizational conventions and have a much better idea of how to find the data. (Obviously, this need is minimized as standards impose tighter conventions for data representation and coding. However, gaining consensus on and getting implementation of those standards can take considerable time.)
  • The data source might not have the ability to filter the available data down to what is needed in a situation where it would be impractical or inappropriate to share all the candidate data. A human who knows the system, on the other hand, might have access to local tools/knowledge that would allow the appropriate record(s) to be found and/or filtered.
  • The description of the data to be returned and/or the description of the context that justifies the sharing of the data might not be amenable to expression in a computable form, therefore requiring interpretation/evaluation by a person familiar with the data source to 'execute' the request. For example, "Please provide all data that supports the decision to order surgery X" would not necessarily be something that could be resolved merely by traversing links in the data.
  • There may be a need to organize, synthesize, and/or filter the data in a way that requires human cognition. For standardized frequent queries, automation is often possible, but in some cases, automating all the possibilities is impractical.
  • There may be a lack of trust between the data consumer and the data source, such that the data source wishes to have human review of any data provided to ensure that the sharing is appropriate, necessary redaction is applied, appropriate approvals are in order, etc.
  • There may be differences in regulatory expectations around data that is queried directly vs. information that is 'pushed' in response to a human-mediated request, such that the latter is more practical/cost-effective overall. (Though with this argument, care should be taken that the distribution of costs is equitable, rather than simply being offloaded from the data consumer to the data source.)

Allowance for human intervention does not necessarily mean there will always be human intervention. It is possible that some data sources will have an ability to handle certain requests in an automated fashion while others will be delegated to humans. It is also possible that a data source will evolve to be capable of handling certain requests automatically that it could not in the past. The key thing is that the interface is designed to allow for human intervention.

Interfaces that allow for human intervention are intrinsically asynchronous, though in some cases the asynchronous response may come quickly. In general, automated interfaces are preferred to those allowing for human intervention. First, if human intervention is required, that has a significant cost due to the expense of human time. Second, human-intervention mechanisms involve more technical overhead (more layers, more data structures) which creates higher implementation and testing costs. Finally, the lack of support for synchronous access means the solution is not a good fit for certain use-cases, which may force multiple parallel access mechanisms (one for synchronous with no human intervention and one asynchronous with human intervention).

If the exchange will potentially require human intervention, the next decision is whether or not formal authorization is required. If the exchange will be fully automated, the next decision is whether CDS Hooks is a candidate.

Return to diagram

The CDS Hooks specification should be used if the driver for the exchange is a user action within the data consumer where the user is visually interacting with that system and the desired results will be clinical decision support guidance from the data source about that current action. The specific requests and responses are not expressed as FHIR, but the data describing the current action and additional contextual information generally will be. CDS Hooks also allows the data source to access contextual information from the data consumer to provide information necessary for the decision support logic. This internal data retrieval by the decision support engine would use RESTful search or possibly one of the search alternatives beneath it.

If hooks are to be used, the CDS Hooks spec provides details on the flow. The hook service (data source) may take on the role of data consumer and use automated means of retrieving data to support the decision support it provides. If CDS Hooks is not a good fit, the next question to consider is whether the data to be retrieved is pre-existing.
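
As a rough illustration of the flow (the service id, hook, identifiers and endpoints below are invented for the example, and the exact context fields depend on the hook being used), the calling system POSTs a JSON request to the service's endpoint:

  POST https://cds.example.org/cds-services/prior-auth-check
  Content-Type: application/json

  {
    "hook": "order-select",
    "hookInstance": "d1577c69-dfbe-44ad-ba6d-3e05e953b2ea",
    "fhirServer": "https://ehr.example.org/fhir",
    "context": {
      "userId": "Practitioner/abc",
      "patientId": "123",
      "selections": ["ServiceRequest/draft-lab-order"],
      "draftOrders": {
        "resourceType": "Bundle",
        "type": "collection",
        "entry": [{"resource": {"resourceType": "ServiceRequest", "id": "draft-lab-order",
                   "status": "draft", "intent": "order", "subject": {"reference": "Patient/123"}}}]
      }
    }
  }

The response is a JSON object containing zero or more cards (for example, a suggestion that prior authorization will be required), and the service may use the supplied FHIR server details to retrieve additional context via RESTful search.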

Return to diagram

When requesting data to come back as resources, there are two possibilities - the data consumer is seeking data that is already available as existing records in the data source's system; or the data consumer is asking the data source to generate net new records - though potentially based on existing information. For example, retrieving previously recorded blood pressures for a patient would be "existing records", while asking for the average blood pressure for the last 24 hours would typically involve generating a record. (It is unlikely the system would already have a stored record for the average as of 'now'.) The first case is a type of query and can generally be handled by relatively 'standard' mechanisms in FHIR. The latter will require a custom operation or message and will typically mean writing code specifically to generate the desired data.

If searching for pre-existing data, the next decision point is whether the search mechanism will be returning resources. Otherwise, the choice is whether messaging is appropriate.

Return to diagram

In FHIR, data is represented using resources and, in most cases, when data is exchanged, it will be transferred as collections of those resources. Generally, the complete resource is transmitted, and the same data is shared with all systems. This approach of sharing the full resource ensures that the contextual meaning of all the data elements is retained and provides a consistent framework for validating, parsing, and consuming the shared data. However, in some cases, information may be filtered out of the resources for reasons of state or federal privacy regulations, an individual’s own privacy preferences, or more efficient communication. In other cases, the use-case may call for returning an arbitrary collection of elements of interest, potentially joining content across multiple resources, with no need/desire to retain context. This latter is less interoperable - it will not be possible for downstream systems to consume the data as FHIR, nor for it to be persisted in a FHIR-based repository without treating it as a Binary. In general, the search mechanisms that return resources are more widely supported and will require less negotiation/modification of systems than those that return independent elements, so if either approach could work, it is best to take the 'return resources' branch in the decision tree.

If the data to be returned should be a resource or collection of resources, the next decision point is whether the retrieval is only a single identified resource. Otherwise, the next consideration is the alternate query mechanisms (GraphQL, CQL or SPARQL).

Return to diagram

In some cases, the retrieval process is straightforward - the data consumer knows the id of the resource it needs, and it only needs that specific resource. FHIR has specific mechanisms for doing this that are widely supported and very efficient to execute. The id might be known because the data consumer has previously retrieved a copy of the resource and is only looking for the most recent changes. Alternatively, the data consumer might be retrieving a resource that was referenced by another resource it has a copy of.

Obviously, if the data consumer needs more than one resource - even if it just wants a resource with a known id as well as certain related resources - these special mechanisms will not work. They also will not work if the id of the resource on the target server is not known.

If only a single resource with known id needs to be retrieved, the next consideration is whether the retrieval is the current version. Otherwise, the next approach to evaluate is Resource history.

Return to diagram

There are two main mechanisms in FHIR for requesting data where a human being is (or might potentially be) involved in fulfilling the request: CommunicationRequest and Task.

CommunicationRequest is used to represent proposals, plans and formal authorizations (orders) for data to be exchanged. A CommunicationRequest would be appropriate for use when there is a need for a formal order, along the same lines as a prescription, lab order, referral, etc., but in this case indicating that certain information must flow to a data consumer. CommunicationRequest, like other Request resources, cannot ask for action on its own. Instead, it must leverage one of the FHIR workflow communication patterns to seek fulfillment of the specified request.

Task is used to explicitly ask for an action to be performed. Sometimes it is used together with a Request resource; however, it can also be used on its own to ask for execution of a simple action - such as requesting someone fill out a form or return a specified piece of information. Because Task can be used to ask for an action to be performed and can, itself, track acceptance or rejection of the request, progress status of the request and eventually be updated to point to the 'output' of the request, it saves on overhead compared with using CommunicationRequest, which (for any of the afore-mentioned functions) would need to be paired with Task. As such, Task is generally preferred when there is not a need for a formal authorization and when Task's capabilities to describe the data requested are sufficient relative to those available on CommunicationRequest. Even when authorization is needed, Task may still be relevant, as the authorization may span multiple exchanges covering a broad set of data, while Task allows initiation of a single transfer of very specific information.

Note that an authorization (e.g. CommunicationRequest) by itself is not sufficient for information to flow, and in some cases, a single authorization may be associated with numerous flows over an extended period of time. The trigger for actual exchange will be some form of workflow initiation mechanism. Frequently, that will be Task, meaning that CommunicationRequest may end up being used in conjunction with Task.

Details on using Task can be found here. Details on using CommunicationRequest can be found here. Note that, regardless of whether Task or CommunicationRequest is selected, the asynchronous response providing the requested data (or some of the requested data, or an outright refusal to deliver the requested data) will be handled as a Data source-initiated (Push) event, and the architectural approach for that delivery will also need to be selected. (Generally, the mechanism will be one of the Configured by consumer options, as the data consumer is the system that truly initiated the transfer and therefore should decide what happens to the data. However, if an operation or message is used to transmit the Task or CommunicationRequest, then the response would come back as a message or an operation response.)
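
As a rough sketch of the Task-based pattern (the ids, code text and endpoints here are invented for illustration), the data consumer might POST a Task such as the following to the data source and then monitor it via polling or subscription:

  POST https://provider.example.org/fhir/Task
  Content-Type: application/fhir+json

  {
    "resourceType": "Task",
    "status": "requested",
    "intent": "order",
    "code": {"text": "Provide documentation supporting claim ABC123"},
    "description": "Please locate and attach the clinical documentation supporting the emergency surgery billed on claim ABC123",
    "for": {"reference": "Patient/123"},
    "authoredOn": "2023-03-01T10:00:00Z",
    "requester": {"reference": "Organization/payer"},
    "owner": {"reference": "Organization/provider"}
  }

When the data source has acted on the request, it updates Task.status (e.g. to 'completed') and adds a Task.output entry referencing the resources or documents produced.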

Return to diagram

When retrieving a single resource by a known identifier, there are two options. The read operation retrieves the current version of the resource. The Version-specific read (vread) operation returns a specifically identified version of the resource (which might be the current version or could be a historical snapshot of the resource). The latter approach obviously involves knowing the specific version to be retrieved - generally because of a version-specific reference. It also means that the data source must support accessing historical versions. (Many systems, especially legacy systems, do not.)

Details about the data exchange ramifications of the 'read' operation are here. Details about 'vread' are here.
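
For example (the resource type, id and version below are illustrative):

  read (current version):
    GET [base]/Practitioner/example

  vread (a specific version - here version 2):
    GET [base]/Practitioner/example/_history/2

Each returns the single matching resource instance rather than a Bundle.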

Return to diagram

Unlike version-specific read, the history operation returns a Bundle containing a collection of versions - either all of them, or those since a specified date. History can be reported for a single resource id, for a specific resource type, or for all resources on an entire server. The first is useful if the data consumer needs to know what has happened with a specific resource over time. The other two allow the data consumer to fully synchronize its local store with the changes that have occurred on the data source - either for a single resource type or for all resources.

Details about the data flow for performing history are here. If history is appropriate, the next question is whether to use synchronous or asynchronous search. The latter may be necessary for large volumes. If history is not appropriate, the next option to consider is whether an ad-hoc query is needed.
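
For illustration, the three scopes of history (with an optional _since filter to restrict results to recent changes) look like this:

  single resource:  GET [base]/Practitioner/example/_history?_since=2023-03-01T00:00:00Z
  resource type:    GET [base]/Practitioner/_history?_since=2023-03-01T00:00:00Z
  whole server:     GET [base]/_history?_since=2023-03-01T00:00:00Z

Each returns a Bundle of type 'history' containing the matching resource versions.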

Return to diagram

In an ad-hoc query, the data source allows the data consumer to combine the search parameters or use a search expression language to filter the set of data they get back. The data source may have a few rules about certain minimum query expectations or disallowed parameter combinations, but in general the data consumer is free to construct their queries as needed to support their use-case. The benefit of this approach is that the same query interface can be used by data consumers to support a wide variety of use-cases. There is significantly less need to modify existing interfaces or stand up new interfaces when a new use-case arises.

However, ad-hoc query means that the data source must have a security model that allows arbitrary queries against data. That does not mean they must allow all data consumers to query whatever they like. However, it does mean that the data source must be able to evaluate a given ad-hoc query and determine whether it is "allowed" for that data consumer and if not, either reject the query or add additional filters to make it acceptable prior to execution. Also, because ad-hoc queries are use-case independent, the data source must make access control decisions without knowing the 'purpose' for which the data is being retrieved. (Though in some cases, the authorization layer might allow capturing an overall reason for whatever actions are taken within a given authorized session.)

If the data source does not have the security or technical abilities to perform ad-hoc queries, it might still be worth the investment to develop them, given that the cost can be defrayed over multiple use-cases. When designing the interoperability solution, the long-term costs of building a generic, re-usable query mechanism should be weighed against those of building and maintaining multiple purpose-specific mechanisms. A focus on short-term costs may result in larger overall costs.

An additional consideration is that ad-hoc queries can create performance concerns for data sources. Allowing arbitrary search expressions means that it is possible that some queries will perform poorly or will overly tax the data source, potentially impacting performance of other services. And obviously, running ad-hoc queries means that the data source must have a data access layer that can execute ad-hoc queries. Systems that provide a facade on top of a non-FHIR system, that integrate data from numerous systems, or that have limited control over the indexing of their data source might not be appropriate for ad-hoc query interfaces.

A final consideration is whether the data desired can reasonably be described by querying on specific properties of the relevant information. In some cases, the relevant information may require a complex site-specific algorithm or even human intervention to identify the record(s) desired, meaning that ad-hoc query would not meet the requirement.

If ad-hoc queries are appropriate, the next decision point is whether RESTful search is a good fit. If not, then the _query mechanism should be evaluated.

Return to diagram

The most widely implemented search mechanism in FHIR is the RESTful search mechanism. It defines mechanisms for retrieving resources along with related resources, filtering what resources are returned, ordering the result-set, and allowing data to be returned in individual pages. It is a generic mechanism that is use-case independent. As such, once a data source can respond to a query, it may (if it wishes) provide that data to any data consumer, regardless of the purpose for which the information is needed.

Details on performing exchange using REST search are found here. If REST search is appropriate, the next question is whether to support synchronous or asynchronous search. Otherwise, the next option to evaluate is batch search.
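
As a simple illustration (the patient id, parameters and paging choices are arbitrary), a data consumer might retrieve a patient's active medications along with the referenced Medication resources in a single call:

  GET [base]/MedicationRequest?patient=123&status=active&_include=MedicationRequest:medication&_sort=-authoredon&_count=20

The result is a 'searchset' Bundle containing the matching MedicationRequest resources, the included Medication resources, and paging links if more results are available.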

Return to diagram

In some cases, an individual RESTful query is inappropriate because resources need to be retrieved from multiple resource endpoints and the search criteria for each endpoint must be different, so a generic search against the base endpoint would be inappropriate. In other cases, even when hitting the same endpoint, different subsets of resources need to be retrieved using distinct combinations of search parameters. While _filter could potentially be used to specify various bracketed clauses to express all the constraints in a single filter, it may be easier (and more widely supported) to simply submit several independent queries. These can be done as independent calls; however, it can be more efficient to submit all of them at once. This can be done using a batch Bundle to invoke multiple queries simultaneously; the server returns a corresponding batch-response Bundle containing the results of each search. Note that if performing all queries simultaneously, the search parameters of the queries must be independent. It is not possible to base the parameters of one search on the results of a previous search. If this sort of dependency is needed, the searches will have to be invoked independently, with the second search launched only after the results of the first are back.

If batch searching is appropriate, details on the process can be found here. As well, the design needs to consider which resource retrieval mechanisms will be used to populate the batch. Finally, a decision is necessary about whether the batch search should be synchronous or asynchronous. If batch is not appropriate, the next architectural approach to consider is using _filter.
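
A minimal sketch of such a batch (the patient id, codes and date ranges are invented for the example) is a Bundle of type 'batch' POSTed to the server's base endpoint:

  POST [base]
  Content-Type: application/fhir+json

  {
    "resourceType": "Bundle",
    "type": "batch",
    "entry": [
      {"request": {"method": "GET", "url": "MedicationRequest?patient=123&status=active"}},
      {"request": {"method": "GET", "url": "Observation?patient=123&code=http://loinc.org|4548-4&date=ge2022-09-01"}},
      {"request": {"method": "GET", "url": "Observation?patient=123&code=http://loinc.org|2160-0&date=ge2023-01-01"}}
    ]
  }

The server responds with a 'batch-response' Bundle whose entries contain, in order, the individual searchset Bundles (or error responses) for each request.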

Return to diagram

The _filter search mechanism provides additional capabilities beyond what can be expressed using basic REST search. Specifically, it provides full nested logical expression capabilities (and/or/not/parentheses) and allows arbitrary filtering of chained parameters (e.g. Observations for patients where the legal name starts with 'Bob'). It also supports a slightly broader set of comparison operations (e.g. "ends with"). _filter can be used together with base REST search capabilities, meaning that _include, _revinclude, _sort, etc. remain available. This mechanism is treated as 'separate' from basic REST search because support for _filter tends to be less common than for other REST search parameters. Unlike other REST search parameters (for which support is defined on a per-parameter basis), if _filter is supported, the expectation is that the full _filter expression language is available and supported.

If _filter searching is appropriate, details on the process can be found here. As with any other search mechanism, implementers must also determine whether to invoke the request as synchronous or asynchronous. If _filter is not a good fit, the next architectural approach to consider is using CQL.
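
For example (codes and dates are invented, and the expression is shown unencoded for readability - in practice the _filter value would be URL-encoded), a nested and/or expression might look like:

  GET [base]/Observation?patient=123&_filter=(code eq http://loinc.org|8480-6 or code eq http://loinc.org|8462-4) and date ge 2024-01-01

This returns a standard searchset Bundle, just as an equivalent basic REST search would.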

Return to diagram

CQL stands for Clinical Quality Language. It is a high-level language that supports manipulation of healthcare data. CQL is a full-blown Turing-complete programming language that allows filtering and generation of data structures. It is not tightly bound to FHIR, though it does rely on FHIR data types. It includes the ability to execute sub-queries, define modules and functions, import external libraries, etc. However, it is a language specifically designed to manipulate data structures and includes rich terminology support. It builds on the FHIRPath language to allow easy navigation, filtering, and execution of mathematical and collection-based operations on hierarchical and linked data structures. Because it is a 'complete' programming language and has an internal capacity to execute sub-queries, it allows the definition of arbitrarily complex data searches.

The primary downside of CQL is its complexity. It is closer to using Java to retrieve data than to using SQL, though it is far more tuned for data manipulation than Java is. Because of its complexity, it is not widely supported in the FHIR implementation space, though it is supported by some of the FHIR reference implementations. Also, as a programming language, it can raise the risk of performance issues for systems that choose to allow execution of arbitrary CQL against their repositories. It is also much harder to analyze the 'safety' of a specific ad-hoc query in terms of whether it violates access permissions associated with a user's privileges.

At present, the only way to invoke a CQL search is by using operations. This can be done either by passing the CQL directly (the cpg-cql operation) or by referencing a library that contains the relevant CQL (the cpg-library-evaluate operation). The latter is more useful when the CQL is complex.

CQL appears in both the 'full resource' and 'individual data elements' sides of this decision tree because it can return results in either form. If CQL is appropriate, discussion on its use can be found here. The CQL operations can be invoked using either a synchronous or asynchronous approach. If CQL is not appropriate, the next option to explore is _query if returning resources, or SPARQL if returning individual data elements.
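
As a very rough sketch of invoking CQL via operation (the operation endpoint and parameter names shown here are illustrative assumptions - consult the relevant CPG implementation guide operation definitions for the actual interface):

  POST [base]/$cql
  Content-Type: application/fhir+json

  {
    "resourceType": "Parameters",
    "parameter": [
      {"name": "subject", "valueString": "Patient/123"},
      {"name": "expression", "valueString": "[MedicationRequest] M where M.status = 'active'"}
    ]
  }

The response carries the evaluated result; the library-based variant would instead reference a Library resource containing the (typically more complex) CQL.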

Return to diagram

The _query search is an alternate way of invoking a custom operation. That operation inherits the general RESTful search capabilities in terms of paging and returning a search Bundle and thus can piggy-back on top of generic RESTful search infrastructure. However, the parameters used by the search are defined by a custom OperationDefinition. This approach is not as wide open as using the generic operation approach to data retrieval because all parameters must be expressible as URL parameters and the operation cannot be one that affects state.

However, beyond those constraints, the search can do anything the author can imagine so long as the result is a search-set. The downside of using 'query' is that the query operations are 'custom'. They will only be supported by implementers that add custom code to enable that specific query. Changes to the query definition need to be coordinated with implementers, and new requirements are likely to result in the need for a distinct 'query' - which again will require negotiation with and custom coding by other implementers. As a result, this approach offers limited re-use and scalability.

If a query operation will meet the need, details on the process can be found here. Using query also requires a decision on whether to use synchronous or asynchronous search. If query is not a good fit, the next option to evaluate is messaging vs. operations.
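
As a sketch (the query name 'claim-summary' and its parameter are entirely hypothetical - they would be defined by an OperationDefinition agreed between the trading partners), an invocation might look like:

  GET [base]/Claim?_query=claim-summary&claim-identifier=ABC123

The result is returned as a standard searchset Bundle, allowing the data consumer to reuse its existing RESTful search handling.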

Return to diagram

GraphQL is a query mechanism that returns a custom JSON structure that contains specific data elements of interest from a resource or set of related resources. It can be more efficient than a standard RESTful query response because it does not need to include the full resource structure or mandatory elements, or have the overhead of a Bundle resource. It also allows data structures to be 'flattened', which can be useful for JSON used to drive user interfaces or statistical processes. Finally, it gives greater control over references, allowing direct tracing across the specific references to include rather than relying on :iterate to include referenced resources that might occur along multiple paths.

The downside of GraphQL is that it is still experimental (in FHIR) and is not widely implemented. As well, the result-set from a GraphQL query is not standard FHIR and cannot subsequently be stored or shared as a standard FHIR resource. In some cases, key meaning may be lost (e.g. failing to return status and failing to exclude 'entered-in-error' records). Because filters can be expressed using FHIRPath, indexing requirements are not as locked down as for RESTful query, where allowed search parameters are defined in advance.

If GraphQL is appropriate, details on its use are here. If using GraphQL, the underlying server would typically also use FHIR's read, search or operation capabilities to select the base resource(s) against which the Graph will be evaluated. As well, consideration will need to be given to whether the GraphQL calls should be synchronous or asynchronous. If GraphQL will not meet the need, the next option to explore is CQL.
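
A minimal sketch of an instance-level GraphQL call using the $graphql operation (the resource id and selected elements are illustrative):

  GET [base]/Patient/123/$graphql?query={id,name{given,family},birthDate}

The response is a plain JSON structure containing only the requested elements rather than a FHIR resource, which is why it cannot be consumed by downstream FHIR-aware systems without special handling.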

Return to diagram

SPARQL is a general-purpose query language for accessing data expressed in RDF (as triples). Its primary benefit is that it can query across data using multiple ontologies - so rather than just relying on FHIR structures, the query language can rely on types and inferences from terminologies such as SNOMED and rules or knowledge sources that assert meanings to specific combinations of data. Properly designed, SPARQL queries can be exceptionally powerful. However, such powerful queries can rely on data being monotonic and consistent, which is not necessarily typical in the way FHIR exposes data or in the healthcare space in general. Also, SPARQL requires data to be expressed in RDF/Turtle, which is not as widely supported as XML and JSON. Finally, SPARQL skills are not widespread and support for SPARQL in existing FHIR interfaces is low to non-existent.

If SPARQL is appropriate, discussion on its use can be found here. Otherwise, the next options to explore are operations and messages.

Return to diagram

If data is going to be 'pushed' from data source to data consumer, the next question is whether the consumer can use FHIR-based mechanisms to control what sorts of data will be shared and in what circumstances. Data consumer control adds complexity for both systems, but eliminates the need to hard-code rules for what data should be shared with whom or for human intervention to configure the data source to supply new data or to stop supplying old data. Eliminating that human intervention allows data consumers to be more adaptive in their behavior. For example, they might subscribe to topics of rare interest that would not justify asking the data source to configure a dedicated feed, or they might adjust polling frequency for different types of data based on application or user priorities.

Another consideration is that all the consumer-configured data sharing approaches give the data consumer full control over what happens to the data once received (i.e. what, if anything, gets persisted and where). However, most of the sharing mechanisms that do not provide consumer control leave the decision of what data is persisted by the data consumer up to the data source.

If the pushed data will be configured by the data consumer, the next question is whether subscription is viable. If the data consumer will not be involved in determining what data gets pushed, the next question is whether there is a direct connection between data source and data consumer.

Return to diagram

This consideration is similar to the one for pull interactions - can the data consumer and data source talk to each other directly? The decision points are the same; however, the outcome of the choices is slightly different. If direct or proxied communications are possible, the next consideration is whether the data source will direct persistence. If communications must be routed, the only option for exchange is notification-based messaging.

Return to diagram

Data consumer-configured exchanges can take one of two forms - subscription or polling. Subscription offers several benefits:

  • It uses less bandwidth and processing power because there is only an interaction between data source and data consumer when data is ready to flow, while polling requires regular interactions, even when there is nothing to send
  • Subscription scales better. Pushing out an event to 100+ interested parties is much more manageable than responding to regular polling queries from 100+ systems
  • Subscription offers the potential of more immediate notification when an event occurs. With polling, the average time to receive a notification is half the polling interval and the maximum time is the full polling interval. (Shorter polling intervals reduce this latency but increase bandwidth and processing load.)

However, subscription requires more sophisticated infrastructure on the part of both the data source and data consumer. The data source must expose a subscription end-point and must build an event-detection mechanism into its processing or persistence layer so that it can trigger notifications when an event of interest occurs.

Subscription also involves an enhancement of the data source's security model because the authorization that is in place at the time the subscription is established will not necessarily be the same as what is in place when the subscription triggers a notification. For example, if a subscription is established with patient consent conveyed via an OAuth token, it is unlikely that the OAuth token will still be valid during the subsequent time-period when event notifications are triggered by the subscription. Consents may have changed, user privileges may have changed, etc. Also, the subscription notifications will be directed to a 'system', not necessarily a 'user'. The security design of the data source will have to take these differences into account.

On the data consumer side, the system must be able to receive notifications at any time. This imposes an availability and 'interrupt' capability that not all systems will possess. The system must also be able to route the received information to the appropriate user/module for the information to be used.

If subscriptions are appropriate, the next choice is whether the subscription notifications should include data. If subscriptions are not viable, the appropriate solution is polling, which is described here. If polling is used the data consumer must use one of the existing resource retrieval mechanisms to actually perform the polling.
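
A minimal polling sketch (Python with the requests library; the server URL, resource type and query are placeholders) might track a high-water mark and periodically ask for anything changed since then, using the standard _lastUpdated search parameter:

  import time
  from datetime import datetime, timezone
  import requests

  base = "https://fhir.example.org/baseR5"   # placeholder server
  last_checked = "2024-01-01T00:00:00Z"      # persisted high-water mark

  while True:
      poll_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
      resp = requests.get(
          f"{base}/Observation",
          params={"patient": "Patient/example", "_lastUpdated": f"gt{last_checked}"},
          headers={"Accept": "application/fhir+json"},
      )
      resp.raise_for_status()
      for entry in resp.json().get("entry", []):
          print("changed:", entry["resource"]["resourceType"], entry["resource"]["id"])

      last_checked = poll_time
      time.sleep(300)   # shorter intervals reduce latency but increase load on both systems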

Return to diagram

There are three models of subscription behavior once an event that meets the criteria of the subscription has occurred:

  1. The data can be transmitted directly from data source to data consumer
  2. The resource id of the relevant record(s) can be transmitted, but not the actual resource(s)
  3. No data at all is exchanged beyond a notification that "new data meets the subscription criteria"

(In all cases the notification indicates which subscription topic it relates to, how many new events have occurred since the last notification, and other metadata such as whether there have been errors.)

The first approach is most efficient because no subsequent action is required by the data consumer to retrieve the data. It is received directly. However, this approach comes with a few challenges:

  • Because the actual (and potentially sensitive) data is shared at a time when no specific user is asking for it, the data source must determine the 'permission' of the data consumer to access the data without a user token and outside the bounds of the usual query mechanism.
  • The data consumer gets the 'default' set of information (typically the bare resource without any related resource) with no ability to refine the data shared.
  • The channel used to convey the subscription response would typically need to be a secure one, unless the architects of the exchange have concluded that exposing the data to manipulation or disclosure by an intermediary would not result in negative consequences.
  • The data consumer must be capable of receiving and processing the data at an arbitrary time - even when the application is busy doing other things. This requires a greater degree of application sophistication.

The second approach means that the data consumer will have to perform a read or search to retrieve the actual record. This adds an extra interaction to the exchange, but - if the search mechanism is used - gives the consumer greater control over the data returned through _elements, _include and/or _revinclude. It keeps the load on the data source low because the subsequent read/query is by id, which imposes minimal retrieval cost. Because only the id is exposed, the security requirements are also much lower - no personally identifying or sensitive health information is exposed in the notification itself.

The final approach is the least work for the data source initially - it does not even have to identify the impacted records, only that something has happened. However, it is more work subsequently, as the data consumer must use appropriate query parameters to find the relevant data - it cannot search by id. Otherwise, its benefits and limitations are similar to those of the second approach.

Details on how to use subscriptions with push notifications are here. Subscriptions where the initial notification must be followed by a query (whether the notification was a resource id or nothing at all) are described here.
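
To make the content choice concrete, the sketch below (Python with the requests library) creates an R5-style Subscription asking for 'id-only' notifications - the second approach above. The topic canonical, channel details and codes are placeholders; consult the Subscriptions framework and the server's documentation for the exact elements it requires:

  import requests

  base = "https://fhir.example.org/baseR5"   # placeholder server

  subscription = {
      "resourceType": "Subscription",
      "status": "requested",
      "topic": "https://fhir.example.org/SubscriptionTopic/admission",  # hypothetical topic
      "reason": "Notify the care-coordination app of new admissions",
      "channelType": {
          "system": "http://terminology.hl7.org/CodeSystem/subscription-channel-type",
          "code": "rest-hook",
      },
      "endpoint": "https://consumer.example.org/fhir/notifications",    # where notifications are sent
      "contentType": "application/fhir+json",
      "content": "id-only",   # alternatives: "full-resource" (option 1) or "empty" (option 3)
  }

  resp = requests.post(
      f"{base}/Subscription",
      json=subscription,
      headers={"Content-Type": "application/fhir+json", "Accept": "application/fhir+json"},
  )
  resp.raise_for_status()
  print("Subscription created:", resp.json().get("id"))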

Return to diagram

When data is pushed from a data source to a data consumer, there are two possible modes of sharing. Either the data source specifically instructs the data consumer to create, update, or sometimes delete records; or the data is provided to the data consumer with no expectation as to what information, if any, the system will persist or where.

To be able to direct persistence, the data source needs to have a sense of what data is already persisted in the data consumer so that decisions as to whether to create or update can be made. For updates, the data source may also need to know the etags associated with the existing records to avoid update collisions. In some cases, this knowledge will exist because the data source is the sole source of information for the data consumer and thus the source can track what information the consumer has - and where. Mechanisms such as conditional create and conditional update can handle some aspects of this as well. In other cases, the data source will need to query the data consumer to see what data exists.

In addition to an awareness of the existing data, the data source also needs a degree of authority to decide what information the data consumer should store and where. While the data consumer can always refuse a create or update request, in practice either the data source determines what gets created or updated, or the data consumer does not get the data at all. Whether this is appropriate depends on the relationship between data source and data consumer.
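
The conditional and version-aware update mechanisms mentioned above might be used as in the following sketch (Python with the requests library; the consumer's base URL, the identifier system and the resource ids are placeholders):

  import requests

  base = "https://consumer.example.org/fhir"   # the data consumer's endpoint (placeholder)

  patient = {
      "resourceType": "Patient",
      "identifier": [{"system": "http://example.org/mrn", "value": "12345"}],
      "name": [{"family": "Chalmers", "given": ["Peter"]}],
  }

  # Conditional update: the consumer matches on the identifier and either updates the
  # existing record or creates one, so the source need not know the consumer's id.
  resp = requests.put(
      f"{base}/Patient",
      params={"identifier": "http://example.org/mrn|12345"},
      json=patient,
      headers={"Content-Type": "application/fhir+json"},
  )
  resp.raise_for_status()

  # Version-aware update: if the source does know the id and current version, If-Match
  # lets the consumer reject the update (HTTP 412) if the record changed in the meantime.
  resp = requests.put(
      f"{base}/Patient/abc",                   # 'abc' is a placeholder id
      json={**patient, "id": "abc"},
      headers={"Content-Type": "application/fhir+json", "If-Match": 'W/"3"'},
  )
  print(resp.status_code)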

If the data source will direct persistence in the data consumer, the next question is whether the data will be stored as a group. If the data source will not be directing persistence, then the next decision point is evaluating the appropriateness of messaging vs. operations.

Return to diagram

In most cases, FHIR resources are managed and stored independently. This allows the resources to be queried, aggregated into documents or messages, and otherwise used to support a variety of use-cases. Reference elements refer to other resources that are independently stored (potentially on different servers). Clients traverse the references via query to access the information they happen to care about. However, in some cases, there is a need to 'package' a set of resources together and store it as a collection. This ensures that modifications are always done in the context of the collection and that a specific related set of resources is always accessed 'together'.

Persisted collections tend to be use-case-specific because the decision about what resources should fall within the collection vs. what should be handled as external references is dictated by the nature of the use cases. Storing resources independently allows them to be used in whatever manner any arbitrary use-case requires.

If the need is for resources to be stored as a group, the next question is whether there is a need to support presentation and/or story-telling. If the resources will be stored individually, the next question is whether there is a need to transmit resources as a group.

Return to diagram

Even when the expectation is that resources will be stored and potentially accessed independently, FHIR defines mechanisms for transmitting multiple resource actions (creates, updates, patches, deletes, operations, etc.) as a collection. This can reduce the amount of back-and-forth traffic between the data consumer and data source and reduce some of the communication overhead. The impact on bandwidth will be slight: the volume of 'resource' data will be the same (in fact slightly increased, due to the additional size of the Bundle resource needed to package the various requests), though there will be some reduction in repeated transmission of HTTP headers, security tokens, etc. The primary saving is in the processing overhead that happens once for each transmission (e.g. authorization verification, SSL handshakes, etc.), and this saving can be significant.

Obviously, sending multiple actions in a single Bundle can only happen if the use-case requires the transmission of multiple requests. However, each of the actions must be reasonably independent. Depending on the mechanism, it can be possible for resources to reference other resources contained within the same Bundle, but it is not possible to have conditional actions that depend on the results of other actions performed within the same Bundle - separate RESTful calls must be made, and the initiating system must evaluate the result of the first before deciding what to do in a subsequent request.

One final consideration is that batch and transaction processing require additional technical capabilities, particularly transaction - which requires treating multiple actions as a single unit of work. As a result, support for batch and transaction is lower than support for individual RESTful actions.

If group transmission is needed, the next question is whether transactional behavior is required. If there is no need for group transmission, then the next decision is whether the record will be new to the data consumer.

Return to diagram

When doing simple REST actions (on their own or as part of a batch or transaction), the data source needs to figure out whether the appropriate action is a create or an update. That is driven by whether the data to be shared already exists as a record on the target system. In some cases, the data source will know this automatically (e.g. because the information in question is brand new and there is no chance the data consumer could have a copy, or because the data source is the data consumer's only source of information and has not previously sent the record in question). In other situations, the data source may need to query the data consumer to see if the record is already present (and if so, what its identifier is).

If the record does not already exist and the data source does not need to assign the resource identifier, the data source will use POST to create the record. Details on 'create' are found here.
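
A simple create might look like the following sketch (Python with the requests library; URLs and content are placeholders):

  import requests

  base = "https://consumer.example.org/fhir"   # placeholder

  observation = {
      "resourceType": "Observation",
      "status": "final",
      "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4"}]},  # heart rate
      "subject": {"reference": "Patient/example"},
      "valueQuantity": {"value": 80, "unit": "beats/minute"},
  }

  # POST asks the consumer to assign the id; it is returned in the Location header.
  resp = requests.post(
      f"{base}/Observation",
      json=observation,
      headers={"Content-Type": "application/fhir+json"},
  )
  resp.raise_for_status()
  print("Created at:", resp.headers.get("Location"))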

If the record already exists or the data source will be assigning the resource id, the next choice is whether to transmit the whole resource.

Return to diagram

Normally when transmitting an update (to revise a resource or to create a resource with a specified id), the entire content is transmitted. However, if updating an existing resource, it is also possible to transmit only the changes made, using patch. This can use considerably less bandwidth if the changes are small in comparison with the overall size of the resource and, depending on the type of information changed, may also result in less processing by the data consumer. However, the 'patch' mechanism is less widely supported than update, so it may be necessary to fall back on 'update'.

Details on 'update' are found here. Details on 'patch' are found here.
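
A patch might look like the following sketch, here using the JSON Patch variant (Python with the requests library; the resource id is a placeholder, and servers differ in which patch formats - JSON Patch, XML Patch, FHIRPath Patch - they accept):

  import json
  import requests

  base = "https://consumer.example.org/fhir"   # placeholder

  # Change just the status of an existing Observation instead of re-sending the
  # whole resource.
  patch = [
      {"op": "replace", "path": "/status", "value": "amended"},
  ]

  resp = requests.patch(
      f"{base}/Observation/example",
      data=json.dumps(patch),
      headers={"Content-Type": "application/json-patch+json"},
  )
  print(resp.status_code)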

Return to diagram

When transmitting a group of actions, FHIR defines two mechanisms: batch and transaction:

  • With a batch, all the actions are considered to be independent. Some might succeed, others might fail. There cannot be any references between resources manipulated in different Bundle entries such that failure of one action would cause a different action to be invalid, and there should be no dependency on the order in which the actions are performed.
  • With a transaction, if any of the actions fail, all of them will fail and none will have an effect on the repository. The actions must be performed in a specific order based on type of action. Resources are permitted to reference each other.

If transactional behavior is required, then the exchange will use FHIR transactions, which are defined here. Otherwise, exchange will use FHIR batch, which is described here. In either case, the data source will need to determine what actions to include within the Bundle.
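
A minimal transaction sketch follows (Python with the requests library; URLs, UUIDs and content are placeholders). The Observation refers to the Patient via its temporary urn:uuid fullUrl, which the server rewrites once real ids are assigned; changing the type to "batch" would process the entries independently, in which case such internal references and ordering dependencies must be avoided:

  import requests

  base = "https://consumer.example.org/fhir"   # placeholder

  bundle = {
      "resourceType": "Bundle",
      "type": "transaction",
      "entry": [
          {
              "fullUrl": "urn:uuid:61ebe359-bfdc-4613-8bf2-c5e300945f0a",
              "resource": {"resourceType": "Patient", "name": [{"family": "Chalmers"}]},
              "request": {"method": "POST", "url": "Patient"},
          },
          {
              "fullUrl": "urn:uuid:88f151c0-a954-468a-88bd-5ae15c08e059",
              "resource": {
                  "resourceType": "Observation",
                  "status": "final",
                  "code": {"text": "Heart rate"},
                  "subject": {"reference": "urn:uuid:61ebe359-bfdc-4613-8bf2-c5e300945f0a"},
              },
              "request": {"method": "POST", "url": "Observation"},
          },
      ],
  }

  # Transactions and batches are POSTed to the server root; the response Bundle
  # reports the outcome of each entry.
  resp = requests.post(base, json=bundle, headers={"Content-Type": "application/fhir+json"})
  resp.raise_for_status()
  print(resp.json()["type"])   # "transaction-response" on success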

Return to diagram

When a group of resources needs to be stored in FHIR, this is always done using the Bundle resource. There are two types of Bundle that are intended to allow a group of resources to be persisted together: collection and document.

A collection is quite flexible. It can store any arbitrary set of resources with no required organization and no semantic other than "this is a bunch of stuff someone decided to store together". It might be used to store a collection of examples, test cases, implementation guide source or any other set of resources that needs to be persisted as a group.

FHIR documents are different. The contents of the Bundle must adhere to a specific set of rules, both in terms of how they're constructed and how they should be handled when received. One of the main differences is that a FHIR document provides specific information about how the information should be presented to a human, including in what order. This is important when there is a need to "tell a story" - i.e. where a human reader needs to consume information in a specific way in order to have a consistent (and complete) understanding of what the collection of information represents. This is commonly found in artifacts such as pathology reports and discharge summaries. While it is possible to jump right to the 'recommendations' part of the document, those recommendations will not necessarily make much sense unless the reader has first gone through the content that comes before.

If there is a need to control presentation or provide storytelling, then the exchange will be using FHIR documents, the process for which is described here. Otherwise, the exchange will use 'collection' bundles and the process for that is described here.
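
A heavily trimmed sketch of the document form is shown below as a Python dict mirroring the JSON structure (all values are placeholders, and a real document requires further content to be fully valid). The first entry is the Composition that fixes the presentation order and 'story'; a 'collection' Bundle would simply use {"type": "collection"} with whatever entries need to be persisted together and no Composition:

  import json

  document = {
      "resourceType": "Bundle",
      "type": "document",
      "identifier": {"system": "urn:ietf:rfc:3986",
                     "value": "urn:uuid:0c3151bd-1cbf-4d64-b04d-cd9187a4c6e0"},
      "timestamp": "2024-05-01T10:00:00Z",
      "entry": [
          {
              "fullUrl": "urn:uuid:180f219f-97a8-486d-99d9-ed631fe4fc57",
              "resource": {
                  "resourceType": "Composition",
                  "status": "final",
                  "type": {"text": "Discharge summary"},
                  "subject": [{"display": "Patient (placeholder reference)"}],
                  "date": "2024-05-01",
                  "author": [{"display": "Dr Example (placeholder)"}],
                  "title": "Discharge summary",
                  "section": [{
                      "title": "Hospital course",
                      "entry": [{"display": "(references to the resources that tell the story)"}],
                  }],
              },
          },
          # ...further entries: the Patient, Encounter, Observations etc. that the
          # Composition and its sections reference.
      ],
  }

  print(json.dumps(document, indent=2))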

Return to diagram

The 'bottom' of most parts of the decision tree is a choice between operations and messaging. Both approaches rely on defining custom request structures that are passed in response to a specific event and responded to with a corresponding custom response. The requests and responses can be FHIR resources or - through the use of the Parameters resource - individual data elements. These options fall lowest on the decision tree because they require custom development by both data consumer and data source. However, there are still use-cases for which they are most appropriate.

Messaging and operations are functionally equivalent - i.e. both mechanisms can be used to accomplish identical functions. However, there are small differences that may make one more appealing than the other in specific architectural circumstances:

  • Messaging provides explicit support for routing (the initial recipient of the message might not be the intended data consumer/source - and the message may pass through multiple systems before it reaches the intended recipient)
  • There may be standard operations defined in the core specification (e.g. Bulk Data with $everything), in which case they should be used preferentially to messaging. So far, the FHIR community has shown little interest in standardizing messages at the international level, which itself suggests an implementer preference for operations
  • Operations are slightly lighter weight (no need for MessageHeader)
  • Messaging allows multiple operations to be communicated with a single endpoint, so if there is a cost to establishing distinct endpoints, messaging might have an advantage.
  • FHIR Messaging may integrate more easily with back-end systems that are messaging-based
  • Some regulatory environments may mandate an architectural approach
  • Asynchronous delivery of operation results requires polling, whereas messaging does not necessarily require it

Note: It is possible to replicate a messaging approach using a custom operation (i.e. passing in a Bundle of linked resources and getting back a Bundle of linked resources, where one of the resources indicates what the 'event' of the operation is). This is discouraged, as it essentially creates a 'custom' mechanism to do something for which there is already a standard operation ($process-message).
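
For reference, the standard messaging entry point mentioned above is invoked roughly as in this sketch (Python with the requests library; the event code, endpoints and payload are invented for illustration, and the exact MessageHeader elements to populate depend on the FHIR version and the message definition agreed between the parties):

  import requests

  base = "https://broker.example.org/fhir"   # placeholder: could be a router/intermediary

  message = {
      "resourceType": "Bundle",
      "type": "message",
      "entry": [
          {
              "fullUrl": "urn:uuid:267b18ce-3d37-4581-9baa-6fada338038b",
              "resource": {
                  "resourceType": "MessageHeader",
                  "eventCoding": {"system": "http://example.org/message-events",
                                  "code": "admission-notify"},
                  "source": {"endpointUrl": "https://sender.example.org/fhir"},
              },
          },
          {
              "fullUrl": "urn:uuid:3b1f2a64-2f8f-4ba9-b7d7-2a9a6f4f2f55",
              "resource": {"resourceType": "Patient", "name": [{"family": "Chalmers"}]},
          },
      ],
  }

  # The message Bundle is posted to the standard operation; depending on the
  # exchange pattern, a response message Bundle may come back synchronously.
  resp = requests.post(
      f"{base}/$process-message",
      json=message,
      headers={"Content-Type": "application/fhir+json"},
  )
  print(resp.status_code)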

If the principal reason for using messaging is to gain access to the routing behavior (because the data consumer and data source cannot talk to each other directly), it is possible to define custom messages that emulate the behaviors of the other exchange mechanisms, wrapped in the messaging infrastructure to allow routed delivery. I.e. if messaging is selected only for routing purposes, it may be appropriate to loop through the tree again to determine an interoperability approach that can then be wrapped by the messaging exchange mechanism.

There are separate pages that describe operations and messaging, however there are also sections within those that are related specifically to the use of operations and messaging in 'pull' vs. 'push' situations:

  • Pull operations or 'retrieval' operations are described here, while query-type messaging is described here
  • Push operations or 'processing' operations are described here, while notification-type messaging is described here

Note: With both messaging and operations there is also a need to determine whether support will be synchronous or asynchronous.

Return to diagram

In synchronous queries, the results are determined and returned within the HTTP timeout period of the initiating GET. The application will typically block and wait for a response before continuing with its action. While it is possible to have very long timeout periods (100 seconds plus), typically the intention with synchronous responses is to return in tens of milliseconds to a few seconds at most to ensure appropriate system performance and user experience. In cases where a search may take longer than this, the FHIR asynchronous search mechanism can be used to allow a query to be initiated and subsequently monitored until a response is available - even if this takes minutes, hours, or potentially even days. This may be necessary if the search is extremely complex or is slow for other reasons (e.g. if it involves accessing data sources with high latency or limited availability).

Details on invoking and processing RESTful requests in a synchronous manner are described separately for each major type of exchange. For RESTful exchanges, the synchronous flows are here and here, and the asynchronous flow is covered here. For search, the synchronous process is here and the asynchronous process is here. For operations, synchronous is here and asynchronous is here. Finally, for messaging, synchronous is here and asynchronous is here. For all except messaging, the asynchronous process is similar; however, there are small differences - such as who initiates the process.

For an example of using an asynchronous approach, see the Bulk Data Access IG.
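
A bulk export interaction, as a representative asynchronous flow, might look like the sketch below (Python with the requests library; the server URL is a placeholder, and real code needs error handling, authorization and respect for Retry-After hints):

  import time
  import requests

  base = "https://fhir.example.org/baseR5"   # placeholder server

  # Kick off the asynchronous request; the Prefer header signals the async pattern.
  kickoff = requests.get(
      f"{base}/Patient/$export",
      headers={"Accept": "application/fhir+json", "Prefer": "respond-async"},
  )
  status_url = kickoff.headers["Content-Location"]   # where to poll for progress

  # Poll until the server reports completion; 202 means still in progress.
  while True:
      status = requests.get(status_url, headers={"Accept": "application/json"})
      if status.status_code == 200:
          break
      time.sleep(30)

  # The completion manifest lists the files (NDJSON) to retrieve separately.
  for output in status.json().get("output", []):
      print(output["type"], output["url"])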

Return to diagram

Irrespective of how data is exchanged using FHIR, data from a source system must be represented in a FHIR resource. In the end, this means matching the available data to a set of elements with primitive data types, and writing the available data into the primitive elements.

When it comes to populating primitive data types with existing data, an authoring system must consider a range of scenarios, depending on the data that is available:

The scenarios, and the choices available to the authoring system in each, are described below.

expected: The system has the expected data. The system can populate the value, and does. This is the ideal happy path, and straightforward for the authoring system.

Note that it might not always be appropriate to share the data even if it is available - the authoring system always has to consider the applicable security and consent rules around sharing the data with the recipient(s) of the data, under the applicable rules.
extends: The system has more data. In this case, the system has additional data that doesn't have a place in the existing primitive data. For example, the system has a public id for an address.

In this case, the authoring system can add extensions to the objects that contain the elements. (Note that it can also add extensions to the primitive elements themselves, but extending the complex objects is preferred.)

Applicable profiles may make rules about what extensions are allowed in the context, so implementers should consult them first. Otherwise, implementers should look through the extension registries (FHIR Standard and FHIR Registry) for appropriate extensions before creating their own.
more-details: The system has finer-grained data. In this case, the system has additional data about the existing values that doesn't have a place in the existing primitive data. For example, the system has a name for the timezone on a dateTime with a timezone.

In this case, the authoring system can add extensions to the primitive elements themselves.

Applicable profiles may make rules about what extensions are allowed in the context, so implementers should consult them first. Otherwise, implementers should look through the extension registries (FHIR Standard and FHIR Registry) for appropriate extensions before creating their own.

Note that the difference between "extends" and "more-details" is a grey zone and a matter of judgment by the system designers; nothing in the FHIR specification or ecosystem requires this call to be correct or consistent, though consistency is encouraged to make life easier for other implementers.
missing: The source data is not available. If the element's minimum cardinality is 0, then the element can simply be omitted. If the element's minimum cardinality is 1, then the authoring system has a problem. In principle, the first question is why an element for which there is no data is mandatory at all. The authoring system may be required to change its own workflow to procure a value for the element. However, this is not always necessary (or possible).

The first possibility is that the authoring system can provide a fixed value that is known to be true based on the application's workflow. E.g. a MedicationAdministration status is always 'completed' because that is the only kind of administration the system deals with. Applications should carefully consider failure pathways before filling in a fixed value.

Alternatively, the authoring application can provide an element with no value, and use one of the missing value extensions (data-absent-reason or nullflavor). Applications can use their own extensions for this as well, but this is strongly discouraged, as those existing extensions should cover all the reasons why data is missing.

See how this looks in XML and JSON. (A sketch illustrating this and several of the other scenarios follows this list of scenarios.)

Applicable profiles may use the ElementDefinition properties mustHaveValue or valueAlternatives to disallow the use of extensions like this, or control which extensions are allowed in place of a value. Additionally, applicable profiles may constrain the allowable extensions.

If the applicable profiles prohibit missing data, the application must be redesigned. If this isn't possible, then that's an impasse whose solution is outside the scope of the FHIR specification.
invalid: The available data is not allowed. This is the most difficult case, where the data that is available is not valid against the base resource design or the applicable profiles. A common example is a source that allows plain text or invalid dates in place of real dates for clinical events, but there are many variants of this; some are discussed below. This might also happen where an applicable profile has specified a fixed or pattern value that is not correct for the source system.

In general, specification and profile designers try to avoid the situation where real-world systems have data that is invalid, but all designs are a trade-off between different pros and cons, and systems do not always have valid data. Again, one solution is to redesign the system or the clinical workflow to ensure that data is always valid, but this isn't always possible e.g. due to the existence of legacy data.

Applications can instead provide the invalid source data using the originalText extension in place of a value. This signifies to any consuming application that a valid value is not available but, if they wish, they are able to present the original data to a human user along with the information that it cannot be processed.

Again, applicable profiles may use the ElementDefinition properties mustHaveValue or valueAlternatives to disallow the use of extensions like this, or control which extensions are allowed in place of a value. Additionally, applicable profiles may constrain the allowable extensions.

If the applicable profiles prohibit invalid data replacement like this, the application must be redesigned. If this isn't possible, then that's an impasse whose solution is outside the scope of the FHIR specification.
invalid-code: The data has a code that is invalid. This is a relatively common situation; conflicts around coding granularity are ubiquitous.

If the applicable profiles include a required binding to a code/Coding type, then the source system must choose one of the allowed codes from the relevant value set. It can still provide its own code in an extension, and in general this is encouraged. There is no standard extension for this purpose; look for applicable extensions in the FHIR Registry.

If the binding is a required binding to a CodeableConcept, then the source system must choose one of the applicable codes and put that in one of the codings, and then it should provide its own code in another coding. The same is true if the binding is an extensible binding, and one of the existing codes is applicable.

As above, applicable profiles may influence an application's choices in this regard. (A sketch showing a local code carried alongside a value-set code follows this list of scenarios.)
too-long: The data is too long. This happens where one of the applicable profiles has limited the length of a data value using ElementDefinition.maxLength - e.g. the patient's family name is limited to 80 characters, and the source system has a name longer than that.

Length limitations like this are indicative of an underlying database field size limit. The FHIR ecosystem and development approaches generally are moving away from using databases like this, but there are still many legacy systems that work this way.

The FHIR specification has no formal approach to resolve this, but implementers may wish to copy the approach of V2, where fields that are too long are truncated to N-1 characters and the character # is appended.

In addition, applications are able to preserve the correct untruncated text using the originalText extension.
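
The sketch below (a Python dict mirroring the JSON form) pulls together several of the scenarios above: a missing value conveyed with the standard data-absent-reason extension, a local code sent alongside a code from the expected value set, and a name truncated to fit a profile length limit. The local code system, the 80-character limit and the truncation convention are illustrative only:

  patient = {
      "resourceType": "Patient",

      # 'missing': no birth date is available, so the primitive element carries only
      # the data-absent-reason extension (note the JSON "_element" form).
      "_birthDate": {
          "extension": [{
              "url": "http://hl7.org/fhir/StructureDefinition/data-absent-reason",
              "valueCode": "unknown",
          }]
      },

      # 'invalid-code': a code from the expected value set plus the system's own
      # local code as an additional Coding.
      "maritalStatus": {
          "coding": [
              {"system": "http://terminology.hl7.org/CodeSystem/v3-MaritalStatus",
               "code": "M"},
              {"system": "http://example.org/local-codes",
               "code": "MARR-CIVIL"},   # local code (illustrative)
          ]
      },

      # 'too-long': a family name truncated to N-1 characters with '#' appended,
      # following the V2-style convention described above.
      "name": [{"family": ("A" * 79) + "#"}],
  }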

All these approaches have additional prerequisites for their successful use:

  • The data consumer and data source must know each other exist and have a means of connecting. For most FHIR interactions, this means that at least one of the two systems must know the base URL of the other. In some cases (where the data flow involves initiation on both sides), both will need the base URL of the other. Even if using messaging or proxies where a direct web connection is not necessary, the initiator needs to know a) that an appropriate recipient exists; and b) where to initiate the action necessary to cause the intended recipient to receive it.

    This awareness could be created through manual configuration, a registry of Endpoint or CapabilityStatement instances, or by a user providing the information directly.

  • In addition to mere awareness of each other's existence, the data source and data consumer need to have a sufficient trust relationship to share data. Specifically, both the systems and their relevant end users must have the necessary permissions to perform whatever actions the exchange workflow demands on each other's systems. Frequently, this will be predicated on the existence of legal agreements between the respective organizations - especially if the information to be exchanged is sensitive.
  • The necessary security steps to authenticate the systems to each other, authenticate any users involved, authorize both users and systems, and protect the data while in transit will all need to be in place. In some cases, Consent may also need to exist. Authorization rules may vary by patient, by type of data and by tags or information within the record. Discussion about general expectations around security, privacy and consent can be found here. Note that not all data needs to be secured. Some data might not be legally protected and there might not be significant risk if it is accessed or even modified in transit. Implementers are strongly encouraged to consult with the legal and compliance divisions of their organizations to ensure appropriate security and authentication measures are put in place prior to data exchange.
  • In most situations, there needs to be agreement between the data consumer and data source about how a specific type of exchange should happen. Even if the relevant party provides permission for an exchange to happen a specific way, if they aren't expecting that mechanism to be used, the workflow might not be in place to ensure the request or data gets exposed or processed in the right way. Typically, these agreements will come in the form of mutually adopted FHIR implementation guides. However, site-specific agreements are also possible, especially for new or experimental functionality.
  • The data source needs to hold the needed data in a form that the selected mechanism can retrieve. In the healthcare system, this is not guaranteed. Interoperability solutions should always consider the possibility of data being unavailable and have fallback processes in place.

This page does not provide guidance about when SMART on FHIR would be appropriate for use because SMART on FHIR is about user interface integration and access control. The data exchange mechanisms it uses are those already described in this specification. However, implementers who are not familiar with SMART should certainly review its capabilities and decide if/where it is appropriate for use in their product(s).
