Implementation FAQ: Interface Development
This page contains questions and recommendations related to the architecture of the implementation.
- 1 Questions
- 2 Recommendations
- 2.1 Create site-specific use-cases and storyboards
- 2.2 Don't have native HL7 version 3 support in the application itself
- 2.3 Support OIDs/UUIDs within the application
- 2.4 Multi version support in the application
- 2.5 Adopt HL7 v3-like models within the application
- 2.6 Migrate to unified Identification schemes
- 2.7 By default message(fragments) should not be regarded as persistent objects
- 2.8 Receiving artefacts is much more difficult than sending them
- 2.9 Logging of transmissions
- 2.10 Rollback the process if message fails
- 2.11 Keep synchronization of data under control
Question: how should one best write an Implementation Guide?
- Split it into a technical guide, a conformance profile guide, and wrapper documentation.
- Use plenty of (good) examples. The creation of good-quality examples takes a lot of time.
- Use of the local language within the implementation guide helps readers understand its contents.
Question: performance in large-volume environments. There are two dimensions to this:
- sheer quantity of messages being sent
- the complexity (in time and space) of processing each message.
(Ann Wrightson, CSW, Feb. 2005) The latter issue is the main killer problem: it's not just that more hardware or time is needed; large and complex-structured messages (as easily occur in full dumps) can break common XML tooling.
Strategies for the second issue include:
- Without changing the XML
- cut-down, non-validating processing that banks on (hopefully justified) assumptions about the uniform structure of this batch of messages
- a more respectable variant of the previous: use an alternative, simpler schema that happens to validate the set of instances that occur in the datastream and that also conform to the target HL7-compliant schema (this is very useful but also dangerous, as this relationship between schemas is not possible to prove and is difficult even to check informally)
- Changing the XML, en route or at source
- inbound translation to a more tractable XML format before the messages hit the performance bottleneck
- more flexible component-based generation of (messages and) schemas from models, with more (model-driven, parameterized) variation allowed in naming of elements (see next but one point below)
- firmer control of complexity in the XML from an XML processing point of view (even some metrics?)
- naming of elements strongly favoured over attributes for identifying the nature of element content (it takes one logical look at the XML per element to filter out elements named XYZ; it takes two logical looks and more processing to find elements named X with attribute Y having value Z)
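The last point can be illustrated with a small sketch. This is not HL7 syntax; the element and attribute names (`bloodPressure`, `observation`, `code`) are hypothetical, chosen only to contrast the two styles of identifying content:

```python
import xml.etree.ElementTree as ET

# Variant 1: the element name itself identifies the type of content.
by_name = ET.fromstring(
    "<report>"
    "<bloodPressure value='120/80'/>"
    "<heartRate value='72'/>"
    "</report>"
)
# One logical look per element: match on the tag alone.
bp1 = by_name.findall("bloodPressure")

# Variant 2: a generic element carries the type in an attribute.
by_attr = ET.fromstring(
    "<report>"
    "<observation code='bloodPressure' value='120/80'/>"
    "<observation code='heartRate' value='72'/>"
    "</report>"
)
# Two logical looks per element: match the tag, then test the attribute.
bp2 = [e for e in by_attr.findall("observation")
       if e.get("code") == "bloodPressure"]

assert len(bp1) == len(bp2) == 1
```

Both variants select the same data, but the second requires inspecting every generic element, which adds up in large-volume processing.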
Create site-specific use-cases and storyboards
Document the existing workflows and business rules before choosing what HL7 v3 artefacts to implement. This could be done in the form of unstructured examples of existing data transports. These can subsequently be mapped to existing HL7 v3 artefacts. The mapping between business-events and HL7-events is often not a 1 to 1 relationship. This should be the first step in *any* implementation; it precedes the development of applications and interfaces. Actors: business experts, domain experts, messaging specialists and software architects.
Don't have native HL7 version 3 support in the application itself
Specifications evolve. Do add a layer of abstraction, in the form of an intermediate XML-based format, and map this to HL7 v3 artefacts, e.g. with the aid of a stylesheet. The structure of the intermediate format should be the best possible mix of the internal database format and the model used by CDA/Clinical Statements. Alternatively, use a third-party broker that hides most of the HL7 v3 complexity behind an easy-to-use API. In this case the API acts as the "intermediate format".
- If you are working with XSLT you need to define an intermediate XML format for your data. This can be in "close to HL7" form or in "close to native" form. It's normally sensible to quickly export your data in a near-native format, and then encapsulate just the HL7 specifics in the XSLT. This means you can target various HL7 messages from the same intermediate file format by varying only the transform.
- You also need to decide which parts of the end-to-end data processing are done while your application creates the intermediate XML, and which are done afterwards in XSLT. Some tasks are better handled by traditional compiled languages. Converting between coding systems, for instance, needs large datasets and is not well suited to XSLT. You should probably convert clinical codes (e.g. from UK Read codes to SNOMED CT) using your application code or stored procedures. Conversion of dates to HL7 formats can be done in XSLT, but it is fiddly and may be slower than converting at the point the data is written to the intermediate XML. Aim to produce an intermediate XML that is suitable for restructuring into HL7, but that no longer requires a lot of complex processing.
- Although this approach is architecturally sound, there are performance issues associated with it. As such, it is a trade-off between performance and flexibility.
- Another thing to bear in mind is that error handling is very tricky in XSL. XSLT tends to be a one-shot process that either succeeds or fails, and it isn't easy to recover from error conditions or interact with the user in the middle. It's often best to deal with any likely exceptions in procedural code when targeting your intermediate format, and do just the final rendering into HL7 in the XSLT. Rik Smithies 22:09, 7 Jun 2006 (BST)
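A minimal sketch of the division of labour described above, with code conversion and date normalisation done in application code before the stylesheet stage. The local code "H33", the SNOMED CT code, and the element names are purely illustrative:

```python
import xml.etree.ElementTree as ET

# Hypothetical local-to-SNOMED CT map; in practice this lookup lives in
# application code or stored procedures, not in the XSLT.
LOCAL_TO_SNOMED = {"H33": "195967001"}  # illustrative codes only

def build_intermediate(patient_id, local_code, iso_date):
    """Emit a near-native intermediate format; code conversion and date
    normalisation happen here, before the XSLT stage."""
    obs = ET.Element("observation")
    ET.SubElement(obs, "patient").set("id", patient_id)
    code = ET.SubElement(obs, "code")
    code.set("codeSystemName", "SNOMED CT")
    code.set("code", LOCAL_TO_SNOMED[local_code])
    # HL7 v3 timestamps use YYYYMMDD; easier to produce here than in XSLT.
    ET.SubElement(obs, "effectiveTime").set("value", iso_date.replace("-", ""))
    return obs

inter = build_intermediate("12345", "H33", "2006-06-07")
print(ET.tostring(inter, encoding="unicode"))
```

The stylesheet that follows then only has to restructure this intermediate XML into the target HL7 artefact, without any lookups or reformatting of values.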
Support OIDs/UUIDs within the application
Don't try to map local codes to other code sets in the integration layer. Only the generation of non-persistent object identifiers can be outsourced to middleware. OIDs/UUIDs have to be supported by the database of the core application. OIDs/UUIDs are new for most implementers/vendors and may have a significant impact on the application.
- Related issue: length of OIDs/UUIDs (theoretically there is no maximum length). Rene spronk 09:11, 26 Jun 2006 (CDT)
- Related issue: it's a sure sign implementers haven't understood the concept of OIDs (i.e. why they are of importance) if one sees implementations that use the exact same OIDs as contained in example messages provided with the documentation. Rene spronk 09:11, 26 Jun 2006 (CDT)
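A minimal sketch of both points: identifiers stored as unbounded strings (there is no theoretical maximum OID length), and a root/extension pair as in the HL7 v3 II data type. The root OID shown is built on HL7's reserved example root (2.16.840.1.113883.19) and must never be used in production, which is exactly the anti-pattern the previous bullet warns about:

```python
import uuid

# Built on HL7's reserved example root -- for illustration only.
# Register and use your own root OID in a real implementation.
EXAMPLE_ROOT_OID = "2.16.840.1.113883.19.5"

def instance_identifier(root, extension):
    """Return a root/extension pair as used in an HL7 v3 II data type.
    Store both as unbounded strings, not fixed-width columns."""
    return {"root": root, "extension": str(extension)}

def new_uuid_identifier():
    """A UUID may serve as the root on its own, with no extension."""
    return {"root": str(uuid.uuid4()).upper(), "extension": None}

ii = instance_identifier(EXAMPLE_ROOT_OID, 4711)
uu = new_uuid_identifier()
```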
Multi version support in the application
Do allow for the support of multiple model-versions at the same time; not just in terms of transformations of the intermediate XML based format, but also in terms of database structure and application functionality. Create application behaviours which can be easily changed/upgraded/switched.
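One way to keep such behaviours switchable is a registry keyed by model version, so new versions are added without touching existing ones. The version labels and the dictionary-shaped "transforms" here are hypothetical stand-ins for, say, version-specific stylesheets:

```python
# Hypothetical registry keyed by model version; each entry stands in for
# a version-specific transform (e.g. a stylesheet) and related behaviour.
TRANSFORMS = {
    "2006-05": lambda record: {"payload": record, "modelVersion": "2006-05"},
    "2006-10": lambda record: {"payload": record, "modelVersion": "2006-10"},
}

def render(record, model_version):
    """Pick the outbound transform by the interaction's model version."""
    try:
        transform = TRANSFORMS[model_version]
    except KeyError:
        raise ValueError(f"unsupported model version: {model_version}")
    return transform(record)

msg = render({"id": 1}, "2006-10")
```

Supporting a new version then means registering one more entry, rather than branching through the application code.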
Adopt HL7 v3-like models within the application
Create a static model for your database - for that is what you're communicating about. As far as possible, the physical data model of your application should follow the HL7 models. This doesn't mean one should implement the RIM or v3 models as a database structure (although you might follow them selectively); indeed it is unlikely that an interoperability architecture is directly appropriate for use as an application architecture, but there should be a strong and actively maintained relationship between them. The important point is to have a clear mapping from the physical data model onto the logical data model (e.g. D-MIM, R-MIM), and from the triggers used by the application to those used in the interfaces.
- This is especially true when one is developing an entirely new application. Using information models defined by standards (e.g. the HL7 RIM) as a starting point (a) makes messaging easier, and (b) re-uses the tremendous modeling effort and review that has gone into their development (i.e. hundreds of man-years). Your application's information model will generally be a superset of the standard message information models.
Migrate to unified Identification schemes
For the identification of key entities such as patients, providers and organizations, aim to use one identification scheme over time. This may require that one starts to use (national) identifiers as primary keys, without assigning application-specific identifiers to these entities.
By default message(fragments) should not be regarded as persistent objects
A message(fragment) by default is transient in nature and not a piece of persistent data. A message(fragment) may be processed as a persistent object if, and only if, the underlying business rules specify that all communicating parties have to process certain categories of message(fragments) as persistent objects (e.g. the use of Clinical Statement Patterns in the English NHS). The situation where one application regards the data in a message as persistent whereas another does not has to be avoided at all costs. If the contents of a message are regarded as non-persistent, then they should be semantically evaluated by the receiving system and imported into the appropriate database structures of the application. Keep in mind that there will probably still be a need for the versioning of data, based on point-in-time.
- It is not possible to provide blanket advice on this topic -- a system that is providing a store and forward service for prescriptions will treat the messages as persistent objects (it does not care about the detail inside the message), and the same will be the case for many systems that only need to directly consume part of the information that they receive. I would suggest that persistence requirements should be identified early - and a failure to fully clarify this should be seen as a risk to implementations. Charliemccay 04:45, 2 Jun 2006 (CDT)
- Agree that (a) warning people against using messages as persistent objects, and (b) if you do so, documenting such a decision early and being aware of the consequences (e.g. identification issues because of business IDs instead of snapshot IDs in messages), is probably the best approach for a reworded version of the above. Store and forward is a transmission feature of a Bridge or Gateway and as such is probably not a proper example. Rene spronk 05:14, 2 Jun 2006 (CDT)
Receiving artefacts is much more difficult than sending them
Getting the semantics right on the receiving end, especially for Clinical Statement based HL7 v3 artefacts, takes a lot of effort. Make sure to get a detailed specification and lots of examples from the sending side.
- Be careful not to 'invent' or imply information that is not in the message. Try to keep in mind whether re-transmitting the converted data would result in the same (or semantically equivalent) message.
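The round-trip check in the previous bullet can be automated. This sketch uses plain dictionaries and invented field names in place of real HL7 artefacts; the point is only the shape of the test:

```python
def inbound(message):
    """Map a received message (a dict here, for simplicity) onto the
    application's own structures, keeping only what the message states."""
    return {"patient_id": message["patient"], "code": message["code"]}

def outbound(record):
    """Re-render the stored record as a message."""
    return {"patient": record["patient_id"], "code": record["code"]}

received = {"patient": "12345", "code": "195967001"}
# Re-transmitting the converted data should yield a semantically
# equivalent message: nothing invented, nothing lost.
assert outbound(inbound(received)) == received
```

If the round trip adds or drops fields, the inbound mapping is either inventing information or discarding it, which this test makes visible early.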
Logging of transmissions
A robust tracking and logging system, which logs everything (including wrappers etc.) exactly as it is actually sent/received, is crucial.
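A minimal sketch of such a log, using an in-memory SQLite table: the raw payload is stored verbatim, wrappers included, together with direction and timestamp. The table layout and the payload shown are illustrative only:

```python
import datetime
import sqlite3

# Minimal transmission log: store the raw payload verbatim, wrappers
# included, plus direction and a UTC timestamp.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tx_log (ts TEXT, direction TEXT, payload TEXT)")

def log_transmission(direction, raw_payload):
    ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
    db.execute("INSERT INTO tx_log VALUES (?, ?, ?)",
               (ts, direction, raw_payload))
    db.commit()

# Illustrative payload -- in practice this is the full transmission,
# transmission wrapper and all, byte-for-byte as it went over the wire.
log_transmission("out", "<transmissionWrapper>...</transmissionWrapper>")
rows = db.execute("SELECT direction, payload FROM tx_log").fetchall()
```

Logging the bytes as transmitted (rather than a parsed or re-serialized form) is what makes the log usable for dispute resolution and replay.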
Rollback the process if message fails
If a real-time interaction fails (after a timeout), roll back both the message and the transaction, and ask the user to repeat the action later. Use an in-sequence reliable delivery method to minimize transactional issues.
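A sketch of that rollback, assuming the local write and the send share one database transaction. The `send_with_timeout` stub and the table are invented for illustration; here the send always times out, so the insert is rolled back:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

class SendTimeout(Exception):
    """Raised when no acknowledgement arrives in time (stub)."""

def send_with_timeout(message):
    # Stand-in for a real-time send; in this sketch it always times out.
    raise SendTimeout("no acknowledgement received")

def place_order(order_id):
    """Write and send inside one transaction; roll both back on failure."""
    try:
        with db:  # sqlite3 commits on success, rolls back on an exception
            db.execute("INSERT INTO orders VALUES (?, 'sent')", (order_id,))
            send_with_timeout("<order/>")
        return True
    except SendTimeout:
        return False  # ask the user to repeat the action later

ok = place_order(1)
count = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Because the send failed inside the transaction, the local row is gone as well; the sender and receiver never disagree about whether the order exists.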
Keep synchronization of data under control
Make sure to think about the process of keeping the data synchronized. This requires that one supports the various synchronization methods in HL7 (snapshot mode, update mode). The queuing of updates by the various parties when the network is down has to be taken care of. Increase your reliance on queries; they ensure you have the latest data.
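The difference between the two synchronization methods can be shown in a few lines. The patient record and its fields are invented for illustration; the point is that snapshot mode replaces the local state wholesale, while update mode changes only the fields present in the message:

```python
def apply_snapshot(current, snapshot):
    """Snapshot mode: the received state replaces local state wholesale."""
    return dict(snapshot)

def apply_update(current, update):
    """Update mode: only the fields present in the message change."""
    merged = dict(current)
    merged.update(update)
    return merged

local = {"name": "Smith", "phone": "555-0100", "city": "Leiden"}

# Snapshot: the phone number disappears, because the snapshot omits it.
after_snapshot = apply_snapshot(local, {"name": "Smith", "city": "Delft"})
assert after_snapshot == {"name": "Smith", "city": "Delft"}

# Update: only the city changes; the phone number survives.
after_update = apply_update(local, {"city": "Delft"})
assert after_update == {"name": "Smith", "phone": "555-0100", "city": "Delft"}
```

Mixing the two up is a classic synchronization bug: treating a snapshot as an update silently resurrects deleted data, and treating an update as a snapshot silently deletes everything the message didn't mention.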