
RIMBAA 201003 Minutes

From HL7Wiki
Revision as of 12:42, 15 May 2010 by Rene Spronk
Workgroup: RIMBAA, co-hosted by the Dutch RIMBAA SIG (HL7 the Netherlands)
Date/Time: 2010-03-11, 10:00-17:00
Location: Amsterdam, the Netherlands
Chair/Scribe: Rene Spronk

Attendees (marked X)

At Name Affiliation Email Address
  Adri Burggraaff Slotervaartziekenhuis, NL
X Andrea Ceiner ItalTBS, IT
  Andy Harris National Institute of Health Research (NIHR), UK  
  Arild Hollas CSAM Health, NO
X Bas van Poppel iSoft, NL
  Bertil Reppen Apertura, NO  
X Davide Magni ItalTBS, IT
X Ewout Kramer Furore, NL
  Freek Geerdink Vrumun, NL
X Hans Jonkers Philips, NL
  Henk Enting MGRID, NL  
  Kjetil Sanders CSAM Health, NO
X Michael van der Zel UMCG and Results4care, NL
X Rene Spronk Ringholm, NL
  Roelof Middeljans UMCG, NL  
X Tessa van Steijn Nictiz, NL
  Tom de Jong NovaPro, NL  
  Tommy Kristiansen CSAM Health, NO
X Willem Dijkstra MGRID, NL
X Yeb Havinga MGRID, NL


  1. Meeting called to order by Rene at 10:15
  2. Administrative
    • Agenda Review/Additions/Changes
      • MOTION Agenda (as published) approved by general consensus.
    • Approval of the minutes of the previous meeting in Phoenix (see the Phoenix minutes page, which includes the presentations made during the meetings)
      • MOTION to approve the minutes of the Phoenix WGM - accepted by general consensus.
    • Announcements
      • Ringholm bv, a commercial provider of training courses and consulting related to HL7/IHE/DICOM in Europe, is the sponsor of the meeting venue this time.
      • Apologies received from Adri, Andy, Bertil and Henk - they expressed the intent to attend future meetings.
      • Wally, a Wallaby hand puppet, is making a tour of worldwide HL7 meetings to promote the WGM in Sydney in January 2011. A picture has to be taken to show where he's been. See attachment for the picture taken during this meeting.
      • Between June 16 and 18 a cross-industry code generation conference will be held in Cambridge, UK. See the conference announcement for details. There must be a lot we can learn from cross-industry conferences like these; it is unlikely that many of the implementation issues faced by v3 implementers are unique to healthcare. Something like the Journal of Object Technology (an open publication about objects and components) could also be a cross-industry 'source of inspiration' for us.
    • Planning of next meeting
      • Rio in May, September, London in November
      • Rene suggests to hold the September meeting in Italy. Andrea and Davide are in the process of setting up an Italian RIMBAA group; we should be supportive of their efforts.
      • MOTION to organize the September out-of-cycle meeting in Italy. (Ewout/Andrea, 9-0-0)
      • ACTION ITEM: For Andrea to select a date and venue for the meeting. Probably Bologna.
    • Scope of RIMBAA
      • Andrea: the current scope of RIMBAA is (officially) limited to the P* and O* columns of the technology matrix (i.e. the use of RIM based models within the application) - interoperability is out of scope. Rene: effectively the S* column is within scope, but by focusing on those v3 implementations that use RIM based models internally (O and P), we're focusing on the most advanced, largest and most complex implementations of v3. It'll be easier to learn best practices from such implementations than from implementations that solely want to use RIM-based models in the context of interoperability. Typically those applications use DOM/SAX/XPath and other classic XML techniques – there’s not a lot we could learn from them that’s not covered by the larger implementations as well. We could declare S* to be in scope, but with the added remark that it will not be the primary focus of RIMBAA.
      • ACTION Item: review scope statement during next RIMBAA meeting.
  3. Highlights from prior RIMBAA meetings on other continents (Rene, max. 30 minutes)
    • Product presentations
      • Mohawk MARC-HI Everest, a generic MIF-based code generator for .NET
        • Rene presents the slides and a video of a presentation originally made by Duane Bender during the HL7 WGM in Phoenix (January 2010).
        • Ewout: at Nijmegen hospital we tried to use the tool with a MIF file for a universal artifact. It didn't work because of MIF-version issues. They did however look at the toolset and the quality of the generated code (using a MIF shipped with the toolset): quality of generated code looked very good; the tool is very well integrated into Visual Studio. The HL7 v3 data types library (which is part of the toolset) looks to be very useful even as a standalone library.
        • In general the attendees lamented the fact that all tools seem to support slightly different versions of MIF, and that conversions and manual tweaks are a necessity to get them to work. Stability of MIFs, or at least a collection of working transformations, is urgently needed; this is effectively a hurdle to tool use and adoption.
        • ACTION ITEM: Rene to ask Duane Bender (Mohawk) about MIF-version support in the MARC-HI Everest toolset. It should ideally support MIFs from the latest Normative Edition.
      • Use of a DB2 PureXML database (Axolotl, IBM RIMon). (See ORM best practices)
        • Rene briefly summarizes their experiences. Given the requirement to have consistent XML element names to do XQuery based queries, one probably has the choice to either support one single CIM (e.g. CDA only), or the RIM ITS.
    • RIM ITS and Context Conduction: during the Phoenix WGM there were significant changes/updates to these issues. See below for separate agenda item.
    • Ewout: in Phoenix for the first time RIMBAA WG was considered (amongst others by the HL7 chair, Bob Dolin) to be a group of importance, to show that v3 is implementable. Increase in influence was also seen in RIMBAA involvement in the RIM ITS and Context Conduction discussions. Without RIMBAA these issues wouldn't have been on the agenda at the Phoenix WGM.
    • Michael: DCMs were a major topic as well; there's a relationship with Model Driven Development. Most models that RIMBAA discusses are 'technical artifacts'. Need DCM as a model created by domain experts. We need to bridge the gap with the domain experts. Andrea: to me, storyboards are also useful artifacts in this area. Ewout: DSL was also a topic in Phoenix, DCMs and DAMs are effectively DSLs. There were lots of discussions about using simplified models instead of classic Visio/R-MIM models. Yeb: DCMs are not technical in nature - the purpose of a template isn't interoperability, it's [validation/]processing.
      • Discussion of DCMs/DAMs/Storyboards and MDD is worthy of further discussion during a future meeting.
  4. RIMBAA: SAEAF vs RIMBAA (Michael van der Zel). Michael's presentation can be found here:
    • Michael presents the way in which the RIMBAA fits within the HL7 Enterprise Architecture Framework (now called SAIF instead of SAEAF). Michael is the RIMBAA liaison to the ArB.
    • SAIF is to be a concrete application of IEEE 1471 / RM-ODP. SAIF is about making specifications consistent, traceable and testable, and about using consistent terminology. SAIF does not create specifications.
    • The attendees have a discussion about interoperability levels (see sheet #10 in the presentation). We agree RIMBAA cannot fulfill all that is needed therein. For higher levels of interoperability we need government and national support programs.
    • Can SAIF help close the gap between static models and actual working dynamic systems? We would like to get more ideas about what SAIF is and what it wants us to do. Can somebody from SAIF come over to one of our meetings and explain?
    • In summary: as a group we encourage Michael to work with ArB to determine if their work can somehow contribute to the RIMBAA work. We're not 100% sure it does contribute, or will contribute, but it gets the benefit of the doubt. We'll plan to have regular, but brief, updates on this at future RIMBAA meetings. Michael is motivated to drive this forward, so we look forward to his updates.
  5. Update of new developments/changes in core v3 modeling (Ewout Kramer).
    • Context Conduction. See Ewout's presentation.
      • Ewout presents the current and future way in which context conduction is supported. The methodology has been radically changed during the recent meeting in Phoenix.
      • Ewout: fundamentally, when we look at the model there are inherent things that are properties of an Act, which we expect to conduct to child acts. Properties of the act that we expect to conduct are its attributes, but also participations and even some act relationships. So that's the tricky part, to understand that some act relationships are properties that conduct elsewhere, and that other act relationships are conductors, i.e. relationships between container and child acts.
      • Ewout: the old way of context conduction, as described in Charlie Bishop's whitepaper and in the documentation of the RIM (see the documentation of context-related attributes in ActRelationship and Participation), is based on a definition in each act association of whether it will/shall conduct elsewhere in the overall model. When changing the context conduction settings on an individual act association one has to keep in mind the effects this will have on all other acts in the model - this is tricky. Add the support for overriding/adding of context, and it gets even trickier. Context can conduct 'down' the graph, but could also conduct 'upwards' - and possibly subsequently 'down' again. All this is relatively hard to grasp.
      • Ewout: the newly proposed context conduction can be explained in one single PowerPoint slide. Things that we would normally see as 'properties' automatically conduct (additively). Conduction of specific types of act associations can be switched off using special new attributes in an act association (one attribute for act relationship types, one for participation types).
      • Davide: we've been ignoring context conduction up to now - the new approach looks much more implementable. Ewout: in our implementation we had a (default, hardcoded) rule in the software application itself related to the conduction of Authors from parents to children. Davide: yes, we did that as well. The new methodology effectively documents what we've been doing all along.
      • Ewout: I don't know the latest formal status of the proposal. Rene: as far as I'm aware it has been documented as a harmonization proposal, which means it will likely be approved during the upcoming harmonization meeting (mid March), and show up in the next RIM ballot - RIM ballots are held once a year.
    • RIM ITS overview. The RIM ITS provides an alternative way of expressing RIM-based object trees in XML. The encoding is based on the RS-cell of the technology matrix and is well suited for the exchange of data between RIM based applications. Ewout's presentation can be found here:
      • Ewout: The RIM ITS is a new way of encoding object graphs in XML. Grahame Grieve made a motion during the ITS WG meeting in Phoenix; there was a sizable presence of the RIMBAA WG to vote in favor of it. Ewout presents the core elements of the new ITS using an example snippet in the XML ITS (clone class names used as XML element names) and the same snippet in the RIM ITS (names of core RIM classes used as XML element names). The XML ITS has the advantage that XSDs can be used for validation. One needs MIF support to map clone names to classCode, and one needs MIF support to distinguish between the playing and scoping entity (this isn’t obvious from the XML instance). On the other hand, the RIM ITS (which has only one single schema) really requires Schematron or another mechanism to perform validation beyond schema validation. Ewout: I expected to see the names of the 50 specialized classes in the XML, but it’s only the core RIM class names that are used; xsi:type is used to identify the name of the specialized class. Human readability of RIM ITS instances is lower than that of XML ITS instances. Disambiguation is harder, because it’s not driven by clone names. Grahame expects templateId to play an important role in the interpretation of the models received. Ewout: we’ll need a MIF-to-Schematron converter, or some other way to code-generate validation code. The RIM ITS is up for ballot next cycle.
      • ACTION Item: Ewout to research the option of generating Schematron from MIF, to be used for instances encoded according to the RIM ITS.
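To make the contrast between the two encodings concrete, below is a hypothetical side-by-side fragment; the element names, codes and attribute values are illustrative assumptions, not taken from the ballot material.

```xml
<!-- XML ITS (hypothetical fragment): the clone class name is the element name -->
<observationEvent classCode="OBS" moodCode="EVN">
  <code code="8480-6" codeSystem="2.16.840.1.113883.6.1"/>
</observationEvent>

<!-- RIM ITS (hypothetical fragment): the core RIM class name is the element
     name, and xsi:type identifies the specialized class -->
<act xsi:type="Observation" classCode="OBS" moodCode="EVN">
  <code code="8480-6" codeSystem="2.16.840.1.113883.6.1"/>
</act>
```

This illustrates the trade-off discussed above: the first form can be validated by a clone-specific XSD but needs MIF support to recover classCode, while the second is uniform across all models and so needs Schematron (or generated code) for anything beyond schema validation.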
  6. MDD - Model Driven Development
    • (Rene Spronk) Introductory presentation: The presentation introduces terms like MDD and DSL.
    • (Davide Magni) How does PHI Technology apply MDD? PHI Technology is the most advanced MDD toolsuite (based on RIM based information models) we know of. ITAL TBS (the developers of PHI) have gained ample experience in applying generic MDD principles in healthcare. See Davide's presentation for details.
      • Davide: An overview of the full PHI Technology toolsuite has been presented before. PHI is a toolsuite that uses “PHI Designer” for model development, plus a runtime environment. What ties it together are RIM-based models. The topic for today's presentation: how Ital TBS applies MDD in the PHI toolsuite, and how code is generated from the design phase. PHI is a process-driven toolsuite: one first designs the process (using JBoss jBPM, expressed as a proprietary XML format); subsequently one binds R-MIMs to the process steps; one designs the user interface (Forms, i.e. which parts of the R-MIM to show to users); one binds Forms to process steps; one binds Forms to the R-MIM (in the UI design editor, binding elements to R-MIM elements using an XPath-like expression). All of these steps result in a set of models that specifies the actual application (PHI refers to the generated application as the “solution”). Subsequently one generates code that is executable by the runtime environment. The process design and UI design tools have a graphical interface; the resulting artefacts are expressed as XML for use by the runtime environment. The models serve as the true source for the application: the same models will be used even if one uses different technologies/platforms for the runtime environment, e.g. another user interface package. That way the generated solution is not dependent on technology. Technology independence was a prime design objective.
      • Andrea: The main advantage of MDD is not having to focus on source code, but being able to focus on the internal XML representation of the models. That way we have the best return on investment (ROI) possible, because of the long-lived nature of the models. You could throw away the current tools and use other ones, and all the while the models will still be usable. We don't have to generate the application from scratch. We could change physical data models if needed. A secondary advantage of MDD is not for the software programmers but for the domain experts – MDD empowers the domain experts. It radically changes the way software is produced. The bulk of the work is done at the customer site, and not by isolated programmers in a cubicle.
  7. MIF based code generation
    • (Hans Jonkers) Introduction to MIF. Hans's presentation can be found here:
      • Hans presents some of the basic characteristics of MIF. He has only used those parts of the MIF he needed to build his application, a small XML-based database based on the Clinical Statement model (see agenda item 6 of these RIMBAA minutes for a presentation of the application). He has created a meta object model based on the MIF schema.
      • His presentation today is based on his own experience with MIF; it answers the basic MIF-related questions.
      • There is a serious lack of MIF documentation; understanding MIF requires lots of reverse engineering by looking at examples. Code generation based on the MIF schemas is non-trivial: the schemas as they are cause problems in most tools.
      • Hans will share a MIF viewing tool (for educational purposes) with the group. The tool is a 'class viewer' and has the MIF models hardcoded as they were in August 2009 (MIF v2.1.4). The tool can be found here: it is an excellent starting point for coming to grips with the format.
    • (Rene Spronk) Overview and discussion of this draft paper: MIF based code generation.
      • By now the RIMBAA WG has received feedback from a number of MIF based code generation approaches, which allow us to create an initial description of some of the best practices.
      • There was ample discussion about the fact that one has to custom-create a v3 data types library: (a) the data types MIF can't be used for code generation, and (b) the data types specification, as is, simply isn't complete – (Yeb:) it lacks algorithms.
      • Yeb: the lack of algorithms is the main reason for us why we had to create lots of custom code. We have to make all sorts of assumptions - things not explicitly covered by the specification. Hans: it's an axiomatic specification, which is not intended to be executable. As implementers we have to make choices and implementation decisions. Ewout: code to support data types at least needs support for 'comparison', otherwise it's useless. Yeb: if there are no operators, then one can't validate constraints. The data carried in a data type is only a small part of it.
      • ACTION ITEM: Rene - convey message from RIMBAA to MnM that the data types spec needs elaboration when it comes to operators/algorithms - as-is the spec is not sufficient for implementers.
      • ACTION ITEM: Rene - find out if (next to the .net data types library from the Everest toolkit) there is a Java implementation of the v3 data types (from JavaSIG materials, or OHT project) which can be made available as a starting point for implementers.
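To make the point about operators concrete, the sketch below shows why a data types library needs comparison algorithms and not just data fields. The PQ class, its unit table and the conversion logic are hypothetical simplifications for illustration, not HL7's specification.

```python
# Minimal, hypothetical sketch of an HL7 v3 PQ (physical quantity) data type.
# The unit table and conversion below are illustrative assumptions only.

_TO_METRES = {"m": 1.0, "cm": 0.01, "mm": 0.001}

class PQ:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def _canonical(self):
        # Without an algorithm like this, 1 m and 100 cm cannot be compared:
        # the data carried by the type is only a small part of its semantics.
        return self.value * _TO_METRES[self.unit]

    def __eq__(self, other):
        return isinstance(other, PQ) and self._canonical() == other._canonical()

    def __lt__(self, other):
        return self._canonical() < other._canonical()

assert PQ(1, "m") == PQ(100, "cm")   # equality requires unit conversion
assert PQ(5, "mm") < PQ(1, "cm")     # ordering requires it too
```

This is the kind of choice Hans describes: the axiomatic specification leaves the algorithm to the implementer, and without it even validating a simple constraint (as Yeb notes) is impossible.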
  8. Discussion of RIMBAA Issues
    • Due to lack of time a few issues (Object nets and object trees, Safe querying of a RIM-based data model) were briefly introduced by Rene – without significant discussion. Discussion will continue on the e-mail lists and during upcoming meetings.
      • Andrea: one of the main questions from end-users that needs to be answered is: how do I query a RIMBAA database? Their application will be able to deal with standard queries, but all end-users/customers have a need to do queries not covered by their standard application. Rene: so we need to show how the approach would work, i.e. a kind of "Crystal Report" approach for a RIMBAA database. Michael: And what about research queries? How do we get data from a RIMBAA database to e.g. SPSS.
      • Rene: I think this is a new issue, one that we could try to deal with once the "Safe querying of a RIM-based data model" issue has been dealt with. We’ll create another issue, "Extracting data from a RIM-based object store" (which will be a mostly empty page for now, until the safe querying issue has been resolved).
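A minimal sketch of what such an ad-hoc "Crystal Report"-style query could look like. The single-table schema below is a hypothetical, heavily simplified stand-in for a RIM-based store; real RIMBAA databases are far richer (and the example codes are arbitrary).

```python
import sqlite3

# Hypothetical, heavily simplified RIM-based store: one table for Act,
# with a few of the core RIM attributes as columns.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE act (
    id             INTEGER PRIMARY KEY,
    class_code     TEXT,   -- RIM Act.classCode, e.g. 'OBS'
    mood_code      TEXT,   -- RIM Act.moodCode, e.g. 'EVN'
    code           TEXT,   -- Act.code (illustrative LOINC codes below)
    effective_time TEXT)""")
con.executemany("INSERT INTO act VALUES (?,?,?,?,?)", [
    (1, "OBS",   "EVN", "8480-6", "2010-03-01"),  # systolic blood pressure
    (2, "OBS",   "EVN", "8462-4", "2010-03-01"),  # diastolic blood pressure
    (3, "SBADM", "EVN", "NA",     "2010-03-02"),  # substance administration
])

# Ad-hoc report query: all observation events, newest first.
rows = con.execute(
    "SELECT id, code FROM act "
    "WHERE class_code = 'OBS' AND mood_code = 'EVN' "
    "ORDER BY effective_time DESC, id").fetchall()
print(rows)  # -> [(1, '8480-6'), (2, '8462-4')]
```

The same SELECT-over-Act pattern is what a reporting tool, or an export to a statistics package such as SPSS, would have to generate, which is why safe querying of the generic schema needs to be worked out first.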
  9. MOTION to adjourn at 17:05 (Ewout/Bas, 7-0-0).

Appendix: summary of motions

The table below captures all substantial motions.

MOTION to approve the minutes of the Phoenix WGM - accepted by general consensus
MOTION to organize the September 2010 out-of-cycle meeting in Italy. (Ewout/Andrea, 9-0-0)

Appendix: summary of new action items

The table below summarizes all new action items. See RIMBAA Action Items for a full list of open action items.

New action items
ACTION ITEM: For Andrea to select a date and venue for the meeting. Probably Bologna.
ACTION Item: review scope statement during next RIMBAA meeting
ACTION ITEM: Rene to ask Duane Bender (Mohawk) about MIF-version support in the MARC-HI Everest toolset. It should ideally support MIFs from the latest Normative Edition.
ACTION Item: Ewout to research the option of generating Schematron from MIF, to be used for instances encoded according to the RIM ITS.
ACTION ITEM: Rene - convey message from RIMBAA to MnM that the data types spec needs elaboration when it comes to operators/algorithms - as-is the spec is not sufficient for implementers.
ACTION ITEM: Rene - find out if (next to the .net data types library from the Everest toolkit) there is a Java implementation of the v3 data types (from JavaSIG materials, or OHT project) which can be made available as a starting point for implementers.

Appendix: Wally