
Context Conduction in RIMBAA Applications


Summary

  • HL7 v3 contains a definition of Context Conduction. Support for this methodology is crucial, particularly for RIMBAA implementations: querying part of an object tree without knowing what conducts where can lead to real problems. For example, if you want to pull a subset out of an object hierarchy, you first have to determine the context conducted to that subset before actually creating it.
  • This page seeks to discuss ways in which people have implemented this, and to document best practices.

Analysis

There seem to be two options when one receives a serialized v3 model in a RIMBAA context:

  1. on import: denormalize the conduction, i.e. replicate the conducted context (participations, act relationships) wherever appropriate. The RIMBAA application (except when importing/exporting the data) is then agnostic as to the meaning of context conduction.
    • If you don't import data that often but execute a lot of queries on the data, it may be better (in terms of performance) to deal with conduction before persisting the data.
  2. at query time: deal with the conduction at 'run-time', i.e. whenever a query is made, determine what conducts and include that in the response.
    • If one imports lots of data but doesn't execute many queries, then it may be better to deal with conduction whenever a query is executed (at the price of having to do additional joins).
  • What are the advantages/drawbacks of these two options? Who has chosen which option?
  • There are problems with the current definition of context conduction. What are its feasible parts, and which parts are broken (based on RIMBAA implementation experience, not on theory)?
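Option 1 can be sketched as a simple pre-persistence walk over the act tree. The classes and the string-based participation encoding below are hypothetical simplifications for illustration only (they are not the javaSIG API or the RIM): the point is that each conducted participation is copied down every act relationship unless a descendant overrides it, after which queries never need to reason about conduction.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal model (NOT the javaSIG API) sketching option 1:
// on import, replicate conducted participations down the act tree.
class Act {
    final String className;                                // e.g. "ClinicalDocument"
    final List<String> participations = new ArrayList<>(); // e.g. "subject:Patient/123"
    final List<Act> components = new ArrayList<>();        // outbound act relationships
    Act(String className) { this.className = className; }
}

public class ConductionDemo {
    // Copy each conducted participation onto descendants that do not
    // override it (override = a participation with the same type code).
    static void denormalize(Act act) {
        for (Act child : act.components) {
            for (String p : act.participations) {
                String type = p.split(":")[0];
                boolean overridden = child.participations.stream()
                        .anyMatch(cp -> cp.startsWith(type + ":"));
                if (!overridden) child.participations.add(p);
            }
            denormalize(child); // conduct transitively down the tree
        }
    }

    public static void main(String[] args) {
        Act doc = new Act("ClinicalDocument");
        doc.participations.add("subject:Patient/123");
        Act section = new Act("Section");
        Act obs = new Act("Observation");
        doc.components.add(section);
        section.components.add(obs);

        denormalize(doc);
        // After import, the Observation carries the conducted subject directly:
        System.out.println(obs.participations); // [subject:Patient/123]
    }
}
```

A real implementation would additionally have to respect the conduction indicators on each ActRelationship (only some relationship types conduct), which this sketch ignores.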

Discussion

  • Upon import is the safer option. If one receives lots of 'overlapping object structures', it may not always be possible to reconstruct the 'correct' context at query time.
  • Does one need 'historic snapshots' at query time, or a set of business objects? That also impacts the choice.
  • June 2009, at the request of Lee Coller (Oracle) MnM discussed Context Conduction, which resulted in the following action items:
    1. We should document current contextConduction in Core Principles, as noted in April meeting (based on Charlie Bishop document) and also seek a list of ActRelationship types where context conduction appears most needed/important/sensitive/relevant. (Also need clear examples here.)
    2. Then, we use the "How to Query Acts" document that has been requested, but never developed, as a vehicle to understand the TRUE requirements for Context Conduction.

Java SIG Reference implementation

Peter Hendler: I added some context conduction to the javaSIG API.

Example: In a CDA you have the "subject of record" way at the top of the document. Then you go down Component, StructuredBody, Component, Section, Entry, and finally you get to ClinicalStatement. By the time you get there in a CDA, you are far away from the "Author" and "Subject of Record". But because of context conduction, it is assumed that since you didn't override these anywhere below the very top of the graph, they still hold. When you literally lay these down into a RIMBAA database, your Act table contains an Observation. But to find the patient that Observation is for, you must do backward joins all the way back to the ClinicalDocument act, and from there you have to join through RecordTarget to get to PatientRole, where the Patient.ID lives.
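The backward-join path described above can be sketched in memory as a walk up the component chain from the Observation to the ClinicalDocument that holds the record target. The classes below are hypothetical illustrations (not the javaSIG API, and not the actual SQL/HQL joins), but the traversal mirrors what the joins do when context has not been denormalized:

```java
// Hypothetical sketch (NOT the javaSIG API) of option 2 at object level:
// without denormalized context, finding the patient for an Observation
// means walking back up the component chain -- the in-memory analogue
// of the backward joins to ClinicalDocument/RecordTarget/PatientRole.
class Node {
    final String className;
    Node parent;         // inbound "component" act relationship
    String recordTarget; // only set on the ClinicalDocument
    Node(String className) { this.className = className; }
}

public class QueryTimeDemo {
    static String patientFor(Node act) {
        Node n = act;
        while (n != null && n.recordTarget == null) n = n.parent; // join back up
        return n == null ? null : n.recordTarget;
    }

    public static void main(String[] args) {
        Node doc = new Node("ClinicalDocument");
        doc.recordTarget = "Patient/123";
        Node body = new Node("StructuredBody"); body.parent = doc;
        Node section = new Node("Section");     section.parent = body;
        Node obs = new Node("Observation");     obs.parent = section;
        System.out.println(patientFor(obs)); // Patient/123
    }
}
```

In a relational RIMBAA store, each `parent` hop corresponds to one join against the ActRelationship table, which is exactly the per-query cost that option 1 pays once at import time instead.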

Context conduction copies this "context" information and puts it in the optional "subject" and "author" ActRelationships that are directly connected to the Observation in the ClinicalStatement. The result is that if you do this in memory before you lay the graph down into the database tables (option 1 in the discussion above), then when you query you can do far fewer joins. What we found is that doing this in memory before laying the graph down in the database takes about as much time as doing all of the joins. So it's pay up front (option 1) or pay as you go (option 2). Gunther thought he could write a more efficient context conduction algorithm than the one I did, one that wouldn't take so long. But he's found that it's really not so bad to do the joins. Once you have the SQL or HQL, it can be a stored or named query and it's no big deal.

It's been a long time, but I think we conducted everything (i.e. full support for all elements of context conduction as defined in v3). We collected context as we went down the tree and pasted it in where it was missing. The application does not track whether an association was acquired by context conduction.