Context Conduction in RIMBAA Applications


Summary

  • HL7 v3 contains a definition of Context Conduction. Particularly for RIMBAA implementations, support for this mechanism is crucial: querying part of an object tree without knowing what conducts where can lead to real problems. For example, if you want to pull a subset out of an object hierarchy, you first have to determine the context conducted to that subset before actually creating it (see the sketch below this list).
  • This page seeks to discuss ways in which people have implemented this, and to document best practices.
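
To make the extraction point above concrete, here is a minimal sketch in Java. It assumes a simplified RIM-like object model (the Act, Participation and SubsetExtractor names are illustrative assumptions, not the javaSIG API), and it collapses contextControlCode into a single 'conducts' flag:

 import java.util.ArrayList;
 import java.util.List;

 // Simplified stand-ins for a RIM-based object model (illustrative only).
 class Participation {
     String typeCode;   // e.g. "AUT" (author), "RCT" (record target)
     String roleId;
     boolean conducts;  // crude stand-in for contextControlCode
 }

 class Act {
     List<Participation> participations = new ArrayList<>();
 }

 class SubsetExtractor {
     // pathFromRoot is ordered root -> ... -> parent of subtreeRoot.
     // Walks from the nearest ancestor upward so that the closest (most
     // specific) conducted context wins, then replicates that context onto
     // the subtree so the extracted subset is self-contained.
     static Act extractWithContext(List<Act> pathFromRoot, Act subtreeRoot) {
         for (int i = pathFromRoot.size() - 1; i >= 0; i--) {
             for (Participation p : pathFromRoot.get(i).participations) {
                 boolean overridden = subtreeRoot.participations.stream()
                         .anyMatch(own -> own.typeCode.equals(p.typeCode));
                 if (p.conducts && !overridden) {
                     subtreeRoot.participations.add(p);
                 }
             }
         }
         return subtreeRoot;
     }
 }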

Analysis

There seem to be two options when one receives a serialized v3 model in a RIMBAA context:

  1. denormalize the conduction, i.e. replicate the conducted context (participations, act relationships) where appropriate. The RIMBAA application (except when importing/exporting the data) is then agnostic as to the meaning of context conduction.
    • If you don't import data that often, but execute a lot of queries on the data, it may be better (in terms of performance) to deal with it before persisting the data.
  2. deal with conduction at 'run-time', i.e. whenever a query is made, determine what conducts and include that in the response (see the sketch after this list).
    • If one imports lots of data, but doesn't execute a lot of queries, then it may be better to deal with it whenever the query is executed (at the price of having to do additional joins).
  • What are the advantages/drawbacks of these two options? Who has chosen to use what option?
  • There are problems with the current definition of context conduction. Which of its parts are feasible, and which parts are broken (based on RIMBAA implementation experience, not on theory)?
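
As one concrete reading of option 2, the sketch below resolves conduction at query time by walking up the graph from the act being queried. The Node and Participation classes and the inboundConducts flag are assumptions for illustration, not any particular API; in a relational store, each step up the parent pointer corresponds to one extra join.

 import java.util.ArrayList;
 import java.util.List;

 // Option 2: resolve what conducts when the query runs.
 class Participation {
     String typeCode;  // e.g. "AUT", "SBJ"
     String roleId;
 }

 class Node {
     List<Participation> participations = new ArrayList<>();
     Node parent;              // source act of the inbound ActRelationship
     boolean inboundConducts;  // contextConductionInd on that relationship
 }

 class QueryTimeResolver {
     // Effective participation of a given type: a locally stated
     // participation wins; otherwise ask the parent, but only while the
     // inbound relationship actually conducts context.
     static Participation findEffective(Node act, String typeCode) {
         for (Participation p : act.participations) {
             if (p.typeCode.equals(typeCode)) return p;  // local override
         }
         if (act.parent != null && act.inboundConducts) {
             return findEffective(act.parent, typeCode);
         }
         return null;  // nothing conducts down to this act
     }
 }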

Java SIG Reference implementation

Peter Hendler: I added some context conduction to the javaSIG API.

Example: in a CDA the "subject of record" sits at the very top of the document. From there you go down through Component, StructuredBody, Component, Section and Entry, and finally you get to a ClinicalStatement. By the time you get there you are far away from the "Author" and the "Subject of Record", but because of context conduction it is assumed that, since you didn't override them anywhere from the very top of the graph, they still hold. Now lay this graph down literally into a RIMBAA database: in your Act table you have an Observation, but to find the patient that Observation is about, you must do backward joins all the way back to the ClinicalDocument act, and from there join through RecordTarget to get to the PatientRole where the Patient.ID lives.
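
In SQL terms, the lookup just described might look roughly like the query below, here wrapped as a Java constant. The table and column names (ACT, ACT_RELATIONSHIP, PARTICIPATION, ROLE and their keys) are assumptions about a generic RIM persistence schema, not the actual javaSIG/Hibernate mapping; the fixed join depth matches the Component/StructuredBody/Component/Section/Entry nesting of the example.

 public final class PatientLookupQuery {
     // Backward joins from an Observation up to its ClinicalDocument, then
     // out through the record-target participation (type code RCT) to the
     // PatientRole carrying the patient id. One ACT_RELATIONSHIP join per
     // level of nesting: a deeper document means more joins.
     public static final String FIND_PATIENT_FOR_OBSERVATION =
         "SELECT pr.PLAYER_ID " +
         "FROM ACT obs " +
         "JOIN ACT_RELATIONSHIP entry ON entry.TARGET_ID = obs.ACT_ID " +      // Entry
         "JOIN ACT_RELATIONSHIP comp2 ON comp2.TARGET_ID = entry.SOURCE_ID " + // Component (Section)
         "JOIN ACT_RELATIONSHIP comp1 ON comp1.TARGET_ID = comp2.SOURCE_ID " + // Component (StructuredBody)
         "JOIN PARTICIPATION rct ON rct.ACT_ID = comp1.SOURCE_ID " +
         "  AND rct.TYPE_CODE = 'RCT' " +                                      // recordTarget
         "JOIN ROLE pr ON pr.ROLE_ID = rct.ROLE_ID " +                         // PatientRole
         "WHERE obs.ACT_ID = ?";
 }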

Context conduction copies this "context" information and puts it into the optional "subject" and "author" participations that are directly connected to the Observation in the ClinicalStatement. The result is that if you do this in memory, before you lay the graph down into the database tables (option 1 in the discussion above), you can get by with far fewer joins when you query. What we found is that doing this in memory before laying the graph down takes about as much time as doing all of the joins. So it's pay up front (option 1) or pay as you go (option 2). Gunther thought he could write a more efficient context conduction algorithm than mine, one that wouldn't take so long, but he has found that it's really not so bad to do the joins. Once you have the SQL or HQL it can be a stored or named query, and it's no big deal.
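
A hedged sketch of that in-memory pass (option 1): before persisting, push the document-level context recursively down onto every contained act, stopping wherever the context is locally overridden. The object model is again a simplified stand-in, not the javaSIG classes this implementation actually uses.

 import java.util.ArrayList;
 import java.util.List;

 // Simplified stand-ins; the real javaSIG classes differ.
 class Participation {
     String typeCode;  // e.g. "AUT" (author), "SBJ" (subject)
     String roleId;
 }

 class Act {
     List<Participation> participations = new ArrayList<>();
     List<Act> components = new ArrayList<>(); // conducting act relationships only
 }

 class ContextDenormalizer {
     // Option 1, applied at import time: replicate the conducted context onto
     // every descendant so queries never need to join back up the document.
     static void pushDown(Act act, List<Participation> conducted) {
         List<Participation> effective = new ArrayList<>(conducted);
         // A participation stated locally overrides the conducted one.
         for (Participation own : act.participations) {
             effective.removeIf(c -> c.typeCode.equals(own.typeCode));
         }
         act.participations.addAll(effective);
         for (Act child : act.components) {
             pushDown(child, act.participations); // child sees the full context
         }
     }
 }

Calling pushDown(clinicalDocument, new ArrayList<>()) before persisting reproduces the "pay up front" behaviour: every Observation then carries its own subject and author directly, at the price of the extra import time and storage described above.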

Discussion