Implementation FAQ:Interface Development

From HL7Wiki
Revision as of 08:05, 16 May 2010

This page contains questions and recommendations related to the architecture of the implementation.

NOTE: this page will undergo a major update; most of the current information is over 4 years old. See Schema based code generation and MIF based code generation for up-to-date guidance on the software implementation of HL7 v3.

See also Implementation FAQ:processing v3 schema with standard tools

Back to Implementation FAQ

Questions

Implementation Guide

Question: how should one best write an Implementation Guide?

  • technical guide, conformance profile guide, wrapper documentation
  • use plenty of (good) examples. The creation of good quality examples takes a lot of time.
  • using the local language within the implementation guide helps readers understand its contents.


Performance in large-volume environments. There are two dimensions to this:

  1. sheer quantity of messages being sent
  2. the complexity (in time and space) of processing each message.

(Ann Wrightson, CSW, Feb.2005) the latter issue is the main killer problem; it's not just that more hardware or time is needed, but large & complex-structured messages (as occur easily in full-dumps) can break common XML tooling.

Strategies for (2) include:

  • Without changing the XML
    • cut-down, non-validating processing that banks on (hopefully justified) assumptions about the uniform structure of this batch of messages
    • a more respectable variant of the previous: use an alternative, simpler schema that happens to validate a set of instances that fit the datastream and also conform to the target HL7-compliant schema (this is very useful but also dangerous, as this relationship between schemas is not possible to prove and difficult even to check informally)
  • Changing the XML, en route or at source
    • inbound translation to a more tractable XML format before the messages hit the performance bottleneck
    • more flexible component-based generation of (messages and) schemas from models, with more (model-driven, parameterized) variation allowed in naming of elements (see next but one point below)
    • firmer control of complexity in the XML from an XML processing point of view (even some metrics?)
    • naming of elements is strongly favoured over attributes for identifying the nature of element content (it takes one logical look at the XML per element to filter out elements of type XZ; it takes two logical looks and more processing to find those of type X with attribute Y having value Z)
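To make the last point concrete, here is a minimal sketch of the two filtering styles. The element and attribute names are hypothetical, and Python's standard xml.etree.ElementTree stands in for whatever XML tooling is actually in use:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<batch>
  <observationXZ value="1"/>
  <observation type="X" subtype="Z" value="2"/>
  <observation type="X" subtype="Q" value="3"/>
</batch>
""")

# One logical look per element: the element name alone identifies the content.
by_name = doc.findall("observationXZ")

# Two logical looks: first match the element name, then test the attributes.
by_attr = [e for e in doc.findall("observation")
           if e.get("type") == "X" and e.get("subtype") == "Z"]

print(len(by_name), len(by_attr))  # 1 1
```

In a high-volume stream, the attribute test is paid per candidate element, which is exactly the extra cost the bullet above describes.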

Recommendations

Don't structure data at too detailed a level

There is a danger that one structures, or wishes to convey, the data at too detailed a level. Sometimes conveying a snippet of text (e.g. in Act.text) instead of a full-blown, fine-grained clinical statement is fine. Too fine-grained a structure delays implementation and makes uptake by others more difficult.

Create site-specific use-cases and storyboards

Document the existing workflows and business rules before choosing what HL7 v3 artefacts to implement. This could be done in the form of unstructured examples of existing data transports. These can subsequently be mapped to existing HL7 v3 artefacts. The mapping between business events and HL7 events is often not a one-to-one relationship. This should be the first step in *any* implementation; it precedes the development of applications and interfaces. Actors: business experts, domain experts, messaging specialists and software architects.

Don't have native HL7 version 3 support in the application itself

Specifications evolve. Do add a layer of abstraction, in the form of an intermediate XML-based format, and map this to HL7 v3 artefacts, e.g. with the aid of a stylesheet. The structure of the intermediate format should be the best possible mix of the internal database format and the model used by CDA/Clinical Statements. Alternatively, use a third-party broker that hides most of the HL7 v3 complexity behind an easy-to-use API. In this case the API acts as the "intermediate format".

If you are working with XSLT you need to define an intermediate XML format for your data. This can be in "close to HL7" form or in "close to native" form. It's normally sensible to quickly export your data in a near-native format, and then encapsulate just the HL7 specifics in the XSLT. This means you can target various HL7 messages from the same intermediate file format by varying only the transform.
You also need to decide which parts of the end-to-end data processing are done while your application creates the intermediate XML and what is to be done afterwards in XSLT. Some tasks are better handled by traditional compiled languages. Converting between coding systems, for instance, needs large datasets and is not well suited to XSLT. You should probably convert clinical codes (e.g. from UK Read codes to SNOMED CT) using your application code or stored procedures. Conversion of dates to HL7 formats can be done in XSLT, but it is fiddly and may be slower than converting at the point the data is written to the intermediate XML. Aim to produce an intermediate XML that is suitable for restructuring into HL7, but that no longer requires a lot of complex processing.
Although this approach is architecturally sound, there are performance issues associated with it. As such it's a trade-off between performance and flexibility.
Another thing to bear in mind is that error handling is very tricky in XSL. XSLT tends to be a one-shot process that either succeeds or fails, and it isn't easy to recover from error conditions or interact with the user in the middle. It's often best to deal with any likely exceptions in procedural code when targeting your intermediate format, and do just the final rendering into HL7 in the XSLT. Rik Smithies 22:09, 7 Jun 2006 (BST)
I strongly disagree with this recommendation. RIMBAA is about the use of an intermediary format, namely the RIM. From a RIMBAA perspective this recommendation should probably read "Use the RIM as the intermediary format in your application." No one should reinvent the wheel, right? Michael 08:05, 16 May 2010 (UTC)
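As an illustration of the "convert dates before the XSLT" advice above, a hypothetical exporter can stamp HL7 TS-formatted timestamps into the intermediate XML as it is written, so the stylesheet only restructures and never computes. The element names here are made up; only the TS literal format (YYYYMMDDHHMMSS) comes from the v3 datatypes:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

def to_hl7_ts(dt: datetime) -> str:
    """Render a datetime as an HL7 v3 TS literal (YYYYMMDDHHMMSS)."""
    return dt.strftime("%Y%m%d%H%M%S")

# Hypothetical intermediate format: near-native element names, but dates
# already in HL7 form, so the XSLT has no date arithmetic to do.
record = ET.Element("encounter")
ET.SubElement(record, "admitTime").text = to_hl7_ts(datetime(2010, 5, 16, 8, 5, 0))

print(ET.tostring(record, encoding="unicode"))
# <encounter><admitTime>20100516080500</admitTime></encounter>
```

Doing this conversion in the compiled exporter rather than the transform is the trade the text recommends: the fiddly logic lives where it is easy to test and fast to run.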

Support OIDs/UUIDs within the application

Don't try to map local codes to other code sets in the integration layer. Only the generation of non-persistent object identifiers can be outsourced to middleware. OIDs/UUIDs (or shorter proxies linked thereto) have to be supported by the database of the core application. OIDs/UUIDs are new for most implementers/vendors and may have a significant impact on the application.

Related issue: length of OIDs/UUIDs (theoretically there is no maximum length). Rene spronk 09:11, 26 Jun 2006 (CDT)
Related issue: it's a sure sign implementers haven't understood the concept of OIDs (i.e. why they are of importance) if one sees implementations that use the exact same OIDs as contained in example messages provided with the documentation. Rene spronk 09:11, 26 Jun 2006 (CDT)
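A minimal sketch of in-application OID/UUID support, assuming a hypothetical InstanceIdentifier class; the organisational OID shown is made up for illustration and the per-message UUID is freshly generated, never copied from example messages:

```python
import uuid
from typing import Optional

# Hypothetical in-application representation of an HL7 v3 instance
# identifier (II): an OID or UUID as root, plus an optional extension.
class InstanceIdentifier:
    def __init__(self, root: str, extension: Optional[str] = None):
        self.root = root            # identification scheme, e.g. an OID assigned to your organisation
        self.extension = extension  # the local identifier within that scheme

    def __str__(self) -> str:
        return f"{self.root}^{self.extension}" if self.extension else self.root

# A fresh per-message UUID: generated at send time.
message_id = InstanceIdentifier(str(uuid.uuid4()))

# A patient ID under a made-up organisational OID.
patient_id = InstanceIdentifier("2.16.840.1.113883.99999.1", "12345")
print(patient_id)  # 2.16.840.1.113883.99999.1^12345
```

Note that the root strings are unbounded in length in theory, which is the storage issue raised above: the application's database columns have to accommodate them.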

Multi version support in the application

Do allow for the support of multiple model-versions at the same time; not just in terms of transformations of the intermediate XML based format, but also in terms of database structure and application functionality. Create application behaviours which can be easily changed/upgraded/switched.

Adopt HL7 v3-like models within the application

Create a static model for your database, for that is what you're communicating about. As far as possible, the physical data model of your application should follow the HL7 models. This doesn't mean one should implement the RIM or v3 models as a database structure (although you might follow it selectively); indeed, it is unlikely that an interoperability architecture is directly appropriate for use as an application architecture, but there should be a strong and actively maintained relationship between them. The important point is to have a clear mapping from the physical data model onto the logical data model (e.g. D-MIM, R-MIM), and from the triggers used by the application to those used in the interfaces.

This is especially true when one is developing an entirely new application. Using information models defined by standards (e.g. the HL7 RIM) as a starting point (a) makes messaging easier, and (b) re-uses the tremendous modelling and review effort that has gone into their development (i.e. hundreds of man-years). Your application's information model will generally be a superset of the standard Message Information Models.
Also see Implementation aspects of RIM based database models.

Migrate to unified Identification schemes

For the identification of key entities such as patients, providers and organizations, aim to use one identification scheme over time. This may require that one starts to use (national) identifiers as primary keys, without assigning application-specific identifiers to these entities.

By default message(fragments) should not be regarded as persistent objects

A message (fragment) is by default transient in nature and not a piece of persistent data. A message (fragment) may be processed as a persistent object if, and only if, the underlying business rules specify that all communicating parties have to process certain categories of message (fragments) as persistent objects (e.g. the use of Clinical Statement patterns in the English NHS). The situation where one application regards the data in a message as persistent whereas another does not has to be avoided at all costs. If the contents of a message are regarded as non-persistent, then they should be semantically evaluated by the receiving system and imported into the appropriate database structures of the application. Keep in mind that there will probably still be a need for the versioning of data, based on point-in-time.

It is not possible to provide blanket advice on this topic -- a system that is providing a store and forward service for prescriptions will treat the messages as persistent objects (it does not care about the detail inside the message), and the same will be the case for many systems that only need to directly consume part of the information that they receive. I would suggest that persistence requirements should be identified early - and a failure to fully clarify this should be seen as a risk to implementations. Charliemccay 04:45, 2 Jun 2006 (CDT)
Agree that a) warning people against using messages as persistent objects, and b) if you do so, documenting such a decision early and being aware of the consequences (e.g. identification issues because of business IDs instead of snapshot IDs in messages), is probably the best approach for a reworded version of the above. Store and forward is a transmission feature of a Bridge or Gateway and as such is probably not a proper example. Rene spronk 05:14, 2 Jun 2006 (CDT)
There is a continuity of functionality between a bridge/gateway and a registry. At the very least there is a common functional pattern - in both cases the system accepts data and returns it later, based on the contents of a query or trigger. In both cases there is no transactional behaviour that modifies the contents of the things being registered/forwarded. The distinction between a prescription gateway and a prescription registry is hard for me to see. Charliemccay 05:15, 10 Aug 2006 (CDT)

Receiving artefacts is much more difficult than sending them

Getting the semantics right on the receiving end, especially for Clinical Statement based HL7 v3 artefacts, takes a lot of effort. Make sure to get a detailed specification and lots of examples from the sending side.

Be careful not to 'invent' or imply information that is not in the message. Try to keep in mind whether re-transmitting the converted data would result in the same (or semantically equivalent) message.

Logging of transmissions

A robust tracking and logging system, which logs everything (including wrappers etc.) that is actually sent/received, is crucial.

You should note that v3 messages are larger than their v2 counterparts, so message logs will also be bigger. This will impact storage requirements for logs. Tools that allow analysts to browse message logs will take longer to load the larger logs. You will definitely notice this with high-volume interfaces.
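A sketch of such wire-level logging, with a hypothetical logger name and entry layout: the raw payload is recorded verbatim, wrappers included, since a parsed view of the message is no substitute for what was actually sent or received. The payload string below is a placeholder, not a real interaction:

```python
import logging
from datetime import datetime, timezone

wire_log = logging.getLogger("hl7.wire")  # hypothetical logger name

def wire_entry(direction: str, raw_payload: str) -> str:
    """Format one log entry: UTC timestamp, direction, size, then the raw bytes."""
    size = len(raw_payload.encode("utf-8"))
    stamp = datetime.now(timezone.utc).isoformat()
    return f"{stamp} {direction} {size} bytes\n{raw_payload}"

def log_transmission(direction: str, raw_payload: str) -> str:
    entry = wire_entry(direction, raw_payload)
    wire_log.info(entry)
    return entry

entry = log_transmission("SENT", "<hl7v3Message>...</hl7v3Message>")
print(entry)
```

Recording the size per entry makes it easy to quantify the v2-to-v3 log growth mentioned above before it becomes a storage problem.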

Rollback the process if message fails

If a real-time interaction fails (after a timeout), roll back the message and the transaction, and ask the user to repeat the action later. Use an in-sequence, reliable delivery method to minimize transactional issues.

Keep synchronization of data under control

Make sure to think about the process of keeping the data synchronized. This requires that one supports various synchronization methods in HL7 (snapshot, update mode). The queuing of updates by various parties if the network is down has to be taken care of. Increase your reliance on queries; they ensure you have the latest data.

Consistent OID use when migrating data

Ensure that all Identifiers (including identification of the ID schemes) are persisted when migrating data from one application to another one. "Renumbering" (changing the identifier of objects) can only be done if the original IDs are linked to the new ones. Systems other than the one whose data is being migrated may have stored identifiers (e.g. of clinical data, for later reference). These identifiers have to be persisted, or the object effectively won't be available anymore. In short: persist current identifiers, do not use a renumbering scheme nor change the OID of the identification scheme.

Order in Names and Addresses

When parsing an XML ITS-based instance of a v3 artefact, please keep in mind that there are a few datatypes in v3 where the order of the elements is of importance (e.g. names, addresses). This effectively precludes you from using things like simple XPath expressions. A small snippet of custom code is required to correctly parse them. Note that the "mixed content, ordered" nature of names and addresses is a result of the definition of the underlying abstract datatypes and not just of the XML ITS.
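A sketch of such custom code using Python's standard xml.etree.ElementTree (the name fragment is illustrative): walk the mixed content in document order, keeping both the part elements and the literal text between them:

```python
import xml.etree.ElementTree as ET

# A v3 person name is "mixed content, ordered": literal text and part
# elements interleave, and their document order is significant.
name_xml = '<name><prefix>Dr. </prefix><given>John</given> <family>Doe</family> Jr.</name>'
name = ET.fromstring(name_xml)

def render_name(el):
    """Concatenate mixed content in document order, text between parts included."""
    parts = [el.text or ""]          # text before the first part element
    for child in el:
        parts.append(child.text or "")  # the part element's own content
        parts.append(child.tail or "")  # literal text following the part
    return "".join(parts)

print(render_name(name))  # Dr. John Doe Jr.
```

An order-blind approach (e.g. grabbing `given` and `family` independently and reassembling them in a fixed order) would silently drop the " Jr." suffix and could scramble multi-part names, which is exactly why the custom walk is needed.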

Data Type: NullFlavors

Given that HL7 stresses the importance of semantics, nullFlavors can be used in almost all datatypes. If the use of a nullFlavor has not been explicitly disallowed in the standard or in the applicable implementation guide, then one has to take care to develop the code to deal with the receipt of nullFlavors. Receiving a nullFlavor essentially means having to "throw an exception" to handle it.
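A sketch of such defensive receiver code, assuming a simple quantity element (the element layout is illustrative; the nullFlavor attribute itself is from the v3 datatypes): check for a nullFlavor before attempting to read the value, and treat its presence as the exceptional path:

```python
import xml.etree.ElementTree as ET

def read_quantity(el):
    """Return the numeric value, or raise if the element carries a nullFlavor."""
    nf = el.get("nullFlavor")
    if nf is not None:
        # e.g. NI (no information), UNK (unknown), MSK (masked)
        raise ValueError(f"quantity is null, flavor={nf}")
    return float(el.get("value"))

ok = ET.fromstring('<quantity value="120" unit="mm[Hg]"/>')
print(read_quantity(ok))  # 120.0

unk = ET.fromstring('<quantity nullFlavor="UNK"/>')
try:
    read_quantity(unk)
except ValueError as e:
    print(e)  # quantity is null, flavor=UNK
```

Code that reads `value` unconditionally would crash (or worse, fabricate a value) on the second instance, which is why every value read needs this guard unless the implementation guide forbids nullFlavors at that spot.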

Obsolete CDA Documents

When CDA documents become obsolete (by reason of context), all data which may have been extracted from that document (e.g. Level 3 constructs) becomes obsolete.

The fact that a document has become obsolete/nullified may be conveyed by various mechanisms, e.g. a medical records message. Consideration should be given if, and how, any data known to be derived from that document should be dealt with. A recommendation may be given in the forthcoming CDA Release 3 specification.


Fill-the-blanks transforms

V3 startup for implementers can be done by creating “fill the blanks” transforms.

This is very pragmatic. I'm not aware that this approach is being addressed in any of the implementation-oriented tutorials. It probably needs to be discussed, as quick-and-dirty implementation methods tend to be used more than theoretically flawless implementations... Rene spronk 12:59, 21 May 2006 (CDT)
This is how I do start-using-V3 implementation tutorials - to do this at home, get Michael Kay's XSLT Programmer's Reference (and it would be crazy to even try and spell XSLT without a copy of this on your desk). It has a section on fill-in-the-blanks stylesheets. You take a fully populated message - top and tail it to turn it into a stylesheet, then replace each of the data items with <xsl:value-of ...> or the same wrapped in an <xsl:for-each ...>. This gets you to a working transform from internal XML to HL7 v3 XML in very short order. At this stage you sit back and feel good. You then need to extend it with some <xsl:if ...> conditionals to deal with the bits that the example did not cover. Another glow of achievement. Then work out what to do about the bits that the HL7 v3 message needed, but were not in your internal XML. These items should have been identified as you went through filling in the blanks - with big red warnings in the output. Then you have a working output. Pat on the back from other people now, and a chance to read some more of the XSLT Programmer's Reference for ideas about how to make the transform modular and do other fancy stuff. Charliemccay 04:35, 2 Jun 2006 (CDT)


You can get around using XSLT by using template engines, in particular if you want to fill in the blanks directly from your domain model and not from an intermediate XML format. The principle is the same; you just don't use <xsl:> tags but the ones provided by the template engine, which often supports OGNL syntax. I have had good experience with Velocity (Java-specific), which also supports all kinds of control statements, loops, ifs, and extensions, e.g. in order to fill in the current timestamp. And you can use the same mechanism to create v2 messages, as it's not restricted to XML in any way. Christian.ohr 05:46, 22 September 2006 (CDT)
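The same fill-the-blanks idea can be sketched with Python's standard string.Template standing in for XSLT or Velocity; the message skeleton and field names below are illustrative, not a real HL7 artefact:

```python
from string import Template
import uuid

# A fully populated example message, topped-and-tailed into a template:
# each data item becomes a ${placeholder} to be filled from the domain model.
skeleton = Template(
    '<patient classCode="PAT">'
    '<id root="${id_root}" extension="${id_ext}"/>'
    '<name><given>${given}</given><family>${family}</family></name>'
    '</patient>'
)

domain_object = {
    "id_root": str(uuid.uuid4()),
    "id_ext": "12345",
    "given": "John",
    "family": "Doe",
}

# safe_substitute leaves any unfilled ${...} visible in the output -- a
# built-in "big red warning" for items the example message did not cover.
filled = skeleton.safe_substitute(domain_object)
print(filled)
```

As with the stylesheet approach, the blanks the domain model cannot fill are surfaced in the output rather than silently dropped, which is where the real mapping work starts.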