To Negate in Vocab or by Attribute? - Email thread
19 Dec 2012
- The Pharmacy discussion of the Dispense resource for FHIR on Monday brought up a point of principle I’d like some thoughts on please …
The specific instance was the set of attributes around Drug Substitution. In many cases this is a positive assertion that substitution of one drug for another took place because “it is an approved substitution” (there are lots of different reasons; we needn’t enumerate them here).
It is also possible to say that substitution did not take place when it would be reasonable to expect that it would. One body of opinion on the call wanted to express this as just another reason (“Patient refused to accept substitution”), while another view was to also include a negation indicator.
Given that consistency is a virtue:
- should we have a preferred way of handling negation, either by explicit use of a negationIndicator, or purely through vocabulary?
- and secondly if we can come to agreement on the first question should we make that the recommended FHIR way of doing things?
- I am very interested to see the outcome of this discussion due to my work on Allergies. Many code systems for Allergies include the concept of "not allergic to" (SNOMED as an example), so it's very easy to just leave everything to vocabulary.
Tom de Jong
- Then again, many other code systems for allergies (to my knowledge, all except SNOMED;-), do *not* include the concept of “not allergic to”. Without a way to negate the meaning of what I’m describing, there would be no way to specify the absence of an allergy in these code systems.
- I would say that if you want to directly preserve the knowledge of the negation in a persistent store, then having it in the vocab requires the least mapping. If it is in the attribute, then you need to do some processing to represent the negation in the destination database. However, if it is for presentation only, or as input to other software, then I would opine that whichever is easiest to implement would be preferred.
- Indeed, if one is not sure whether a terminology that includes negation will be used, then it needs to be in the information model. Otherwise, the model must assert that the terminology to be used must be one that supports negation, such as SNOMED. That, btw, is what we put Terminology Guidance into the HL7 vocab model for...
- SNOMED does NOT support negation in the true sense. This has been an ongoing problem with the DL that SCT uses.
- Actually, that's true, isn't it? We had a big presentation from Peter about implementing true negation.
- In FHIR we can express constraints much more clearly (effectively?), so putting in a constraint “use negation if and only if the terminology does not support negation” would (might?) be possible. That would at least avoid a term from the terminology that implies negation (“Patient refused to accept substitution” from my original example) being combined with a negation indicator set to true.
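The co-occurrence constraint described here could be sketched as a simple validation rule. Everything below is a hypothetical illustration – the field name `negationIndicator`, the codes, and the value set are invented, not actual FHIR definitions:

```python
# Hypothetical sketch of the constraint "a code that already implies
# negation must not be combined with negationIndicator = true".
# All field names and codes here are invented for illustration.

# Codes in a (hypothetical) substitution-reason value set whose
# meaning already carries the negation.
NEGATED_CONCEPTS = {"patient-refused-substitution"}

def is_valid(instance: dict) -> bool:
    """Reject double negation: a pre-negated code plus the indicator."""
    negated_by_code = instance.get("reason") in NEGATED_CONCEPTS
    negated_by_flag = instance.get("negationIndicator", False)
    return not (negated_by_code and negated_by_flag)

# An ordinary positive assertion passes...
print(is_valid({"reason": "approved-substitution"}))   # True
# ...but a pre-negated code combined with the indicator is flagged.
print(is_valid({"reason": "patient-refused-substitution",
                "negationIndicator": True}))           # False
```

A rule like this would only work, of course, if the value set is annotated with which of its concepts are pre-negated, which is itself part of the terminology-binding problem discussed later in the thread.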
- Negating every possible assertion through the vocabulary would be intractable: we get into the combinatorial explosion problem, and we might even come up with combinations that are not very useful. It would be useful for some commonly used concepts, e.g. "patient was anicteric and afebrile" – one could decompose this using an information model (or an expression) with a negation qualifier, or create a single vocabulary concept for it. The task for the vocabulary then would be to reconcile the negated concept to mean the same as the asserted concept with a negation qualifier.
In any case, the medical information models should provide a way to assert or negate an observation. It would be good for the post-coordination expression grammars to support negation, and to provide a way to equate a negated concept to an expression consisting of a positive concept and a negation qualifier. SNOMED CT and HL7 TermInfo project within the HL7 Vocab WG provide some standard ways of expressions, and you might want to look into those. The challenge with these expressions would then be to find an EHR that supports storing an expression, since most EHRs just expect a single concept in specific slots and not an expression. I'm not a FHIR expert, so I can't answer that part of your question.
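The reconciliation task described above – equating a pre-coordinated negated concept with a positive concept plus a negation qualifier – could be sketched roughly as follows. The codes and the mapping table are invented for illustration; they are not drawn from SNOMED CT or any real terminology:

```python
# Sketch of normalizing both representations of a statement to one
# canonical (positive concept, negated?) pair, so that a
# pre-coordinated negative code and a post-coordinated expression
# can be recognized as equivalent. All codes are invented.

PRECOORDINATED = {
    "afebrile": ("fever", True),      # "no fever" as a single concept
    "anicteric": ("jaundice", True),  # "no jaundice" as a single concept
}

def normalize(code: str, negation_qualifier: bool = False) -> tuple:
    """Return a canonical (positive concept, negated) pair."""
    concept, negated = PRECOORDINATED.get(code, (code, False))
    # Applying the model-level qualifier on top flips the polarity.
    return concept, negated != negation_qualifier

# The single concept and the qualified expression mean the same thing:
print(normalize("afebrile") == normalize("fever", negation_qualifier=True))  # True
```

This toy version treats the qualifier as a simple polarity flip; real terminologies (and true description-logic negation, as noted elsewhere in this thread) are considerably subtler.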
- As previously observed, negation is only PARTIALLY supported in SOME vocabularies. As one who is involved in developing software that must be used with a variety of different vocabularies because of international requirements, I find it extremely challenging to address negation inside the vocabulary sometimes, and outside of the vocabulary at other times. This is my biggest challenge with the current TERMINFO guidelines for SNOMED CT. While they may work satisfactorily when SNOMED CT is used, as soon as I have to replace SNOMED CT with some other vocabulary, e.g., because of regional requirements, I have to have a different code path to address negation in that vocabulary.
I’d like to see HL7 provide a consistent way to use negation that is independent of, but could be integrated with the capabilities of different vocabularies.
- Unfortunately, FHIR isn't magic. It attempts to address many issues, but the disconnects and overlaps between terminology models and data models don't have an easy fix, particularly when FHIR must still exist in a world where different implementation environments will be required to use different terminologies. Negation is one extreme, but there are lots of places where qualifiers like severity, body site, route, strength, etc. can sometimes be expressed in the terminology or in the model.
The only guidance FHIR really provides on this point right now is "go with what's done most commonly and what's going to be easiest for the bulk of implementers to deal with unless that creates a significant risk for patient care"
So on the question of "did not substitute", how is it typically dealt with in v2? In NCPDP? In other broadly implemented systems? Also, for implementers who are using it in a different way (with vocabulary or with a model attribute), who is going to have an easier time mapping instances to their own internal representation? In some cases, it may not matter, in other cases it'll make a difference.
It is possible, even likely, that the answer to the above questions will differ for the same sort of problem in different model areas. As a result, consistency in FHIR handling of negation and other qualifiers is unlikely. Sometimes it'll be easier to use vocabulary, sometimes to use attributes, sometimes to create a model that treats both approaches as being in the 80% (though that third situation should be quite rare).
- FHIR doesn’t have to be magic! In the case under discussion in pharmacy we have a reasonably even split of opinion and practice so the 80% rule doesn’t give particularly strong guidance. What I’m seeking is some thought on best practice – software development is full of implementation patterns and sometimes anti-patterns (don’t ever do it this way) and this negation topic seems like a good candidate to look for possible best practice patterns. We’ve had a plea from Keith to have only one way to have to write the code, and several observations that “it depends on the terminology”, one observation that “providing negated forms of everything explodes the terminology” and another that terminologies that support post-coordination provide yet another mechanism. There was also an observation that “Peter told us what negation really means” not that I am any the wiser! I bet there are a few more pearls out there still that all put together would make a fairly decent set of patterns depending on what you expect to find in the 80% you are trying to build to.
- Hi Hugh,
My point was that I'm not sure there will necessarily be a clear pattern in this space, nor a situation that we can limit to "only one way". That said, any patterns we can find would be most welcome. We even have a space to capture them on the wiki :>
- To me this is less negation than it is refusal; it is not the absence of a condition, result, etc., nor is it a flavour of null. As such, just indicating 'not' would not be sufficiently informative. This seems very much like an additional concept to me.
- I would suggest an important clarification on the guidance you offered… the "easiest to implement" qualifier should favor what is easiest for clients of the REST pattern, thus pushing difficult things to services. This recognizes that servers tend to be more hooked into the underlying data, while clients can sometimes be underpowered or resource-constrained (mobile health). Not always true, but helpful when breaking down problematic cases.
The only guidance FHIR really provides on this point right now is "go with what's done most commonly and what's going to be easiest for the bulk of __client__ implementers to deal with unless that creates a significant risk for patient care"
- Hi Hugh,
I am hesitant about the proposed constraint “use negation if and only if the terminology does not support negation”.
This opens the door to variations in implementation, and hence to confusion. Information recipients will have to look in different places to discern the information. The potential clinical safety risks are not something that can be taken lightly.
If it cannot be reliably done via the use of terminology, perhaps it should be done in the information model only.
- Agree with the edit John.
20 Dec 2012
Tom de Jong
- Hi Stephen,
If we would want to have one firm guideline for all situations where statements can be both asserted and ‘negated’, your approach would be the only option (since we know there will be code systems that do not support negation internally). The problem is that it would have some very tricky side effects in cases where you do want to use a code system that contains negated concepts. You would have to restrict the use of these code systems to only allow ‘positive’ statements in the value set. As much as I like consistency, I don’t think that guideline is going to be implementable.
I think it’s important to have a negation indicator (or something like it) in many FHIR resources, just like it’s a must-have in many V3 building blocks. We need it there because we cannot be dependent on the presence of a negated concept in the vocabulary. But if the code system that gets implemented does contain negated concepts, they should be used without setting the negation indicator (since this would mean double negation). Is that situation perfect? No! But I think it’s a fact of life, just like many other information elements may or may not be implied by vocabulary.
By the way, Hugh's original question was triggered by a debate over a code system that is maintained by HL7 itself, so I think the real question is: when we have both options (include negated vocabulary concepts or have a separate negation indication), which method would be preferred?
The reason I am in favour of using a negation indicator in such cases is that I like my code systems one-dimensional (it’s a matter of taste I guess;-). I like the codes to refer to the same (kind of) thing happening in real life. In the example Hugh referred to, some codes would refer to something happening that I didn’t expect, while others (more than one) would refer to something NOT happening that I DID expect to happen. I think that might trigger completely different functionality in my software, so I’d like the distinction to be explicit. And I know, there are other cases where different codes trigger different functionality (like a status code), but in those cases this would be clear without inspecting the code system. Maybe that could be the basis for a guideline then… Not the simplest guideline we could imagine, but then again, if our work was simple, it wouldn’t be fun…
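Tom's point about triggering different functionality can be caricatured in a few lines: with an explicit indicator, the receiving software branches on a boolean and never has to know which code system was used. The structure and codes below are invented, purely for illustration:

```python
# Sketch of the argument: with negation kept out of the codes,
# receiving software can branch on the explicit indicator alone,
# without inspecting the code system. All names are invented.

def handle_substitution(event: dict) -> str:
    if event.get("negationIndicator", False):
        # Something expected did NOT happen: different workflow.
        return f"substitution withheld, review reason: {event['reason']}"
    return f"substitution recorded: {event['reason']}"

print(handle_substitution({"reason": "approved-substitution"}))
print(handle_substitution({"reason": "patient-refused",
                           "negationIndicator": True}))
```

With negation folded into the vocabulary instead, this branch would need a per-code-system lookup table of which codes are "negative" – exactly the extra code path Keith describes earlier in the thread.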
3 Jan 2013
- Was a conclusion reached with this discussion?
For what it’s worth, I agree with Stephen that the info model should be the preferred mechanism for describing negation and other qualifiers. Not only does this help with the problems Keith describes with incomplete and inconsistent terminologies, but it also offers some other benefits:
- Relying on terminology for qualification may result in combinatorial explosion of concepts greatly bloating the terminology content and reducing its usability.
- It makes the data elements more explicit and atomic enhancing computability for decision support and analytics.
- It is simpler to compose complex expressions from atomic parts than it is to decompose complex expressions into atomic parts – again this aids in the computability of the data.
I realize FHIR is focused on data transport, but we should not constrain what the sender or recipient can do with the data once they have it.
(Note: Charlie Mead and Linda Bird are also working on the “isosemantic model” issue – I have included them on the thread)
- Hi Bo
> - Relying on terminology for qualification may result in
> combinatorial explosion of concepts greatly bloating the terminology content
> and reducing its usability.
> - It makes the data elements more explicit and atomic enhancing
> computability for decision support and analytics.
> - It is simpler to compose complex expressions from atomic parts than
> it is to decompose complex expressions into atomic parts – again this aids
> in the computability of the data.
Generally, I agree. But if you already express your negation in terminology, and you've put the processes in place to make this work, then these things are no longer true.
> I realize FHIR is focused on data transport, but we should not constrain
> what the sender or recipient can do with the data once they have it.
FHIR is not focused on data transport, it's focused on the exchange of data.
That's a subtle but important difference. In order to enable systems to successfully exchange data, we must place some constraints on what they can do with it when they have it, and FHIR does do some.
We do necessarily place constraints on the use of elements and terminologies in the elements that we define, and we have endeavoured to consider the use of terminology where we can when doing so.
- Thanks for the reply Grahame. I especially like the part where you say "Generally, I agree"!
However, with regard to well-constructed pre-coordinated terminology, I don't expect that many organizations have "put the processes in place to make this work". I have not seen processes like these work in my day job, and from the discussion I suspect that others may have similar experiences. Even if the terminology approach were a viable and complete solution for qualifiers, I would still argue that an explicit and atomic data model handling negation and other qualifiers is superior for machine computability.
So how does a discussion like this achieve closure?
- hi Bo
I think that very few organisations have put any processes in place to make large terminologies work. (It depends on the meaning of "work".)
But it's common to see small ad-hoc terminologies that include qualification and negation, with processes optimised for that.
I don't think this discussion *can* receive closure. :-(
- I think the best we can do is identify questions to ask and recommendations to make based on the answers to those questions which can then be taken up by each committee dealing with a situation where this issue comes into play (as part of the 80% or as part of a profile in an extension). I don't think we can make a one-size-fits-all declaration.
Tom de Jong
- Hi Bo,
I tried to lead us into a conclusion, but we definitely haven’t reached consensus yet. I sympathize with your (and Stephen’s) viewpoint, and your arguments make sense, but I think proponents of SNOMED would not agree with them. That’s why I suggested that we cannot make one method normative, but have to accommodate both. The trick is to describe guidelines on how things should work, depending on the (type of) code system.
This should be scheduled for resolution in Phoenix. Either vocab WG or MnM WG could take the lead (I’m hoping vocab has room in their agenda).
Vocab co-chairs, is it possible to schedule this? Perhaps some of the participants on this thread could be present at this session. I’ll certainly try to.
- No, it cannot ever be "closed" or agreed upon. It is one of those arguments like the one in SNOMED about the difference between a finding and a disorder.
We will always be able to negate either in the terminology, or in the model. Rather than discuss which side of the model the negation takes place in, it's better just to discuss how you can be explicit about where the negation is in any given instance, so it can be correctly interpreted. There are too many valid arguments for and against either approach. This is not an issue that can be decided and agreed upon.
This question comes up everywhere, forever. Different groups will do it different ways. There will never be a time when we all agree on one way.
- This is precisely the kind of issue that TermInfo is intended to deal with. I'm in agreement with Grahame and Peter that we likely cannot achieve complete "closure" on this, but we should be able to create reasonable guidelines and recommendations that cover, and can be followed in, most cases, and also hopefully achieve a reasonable degree of transformability and semantic interoperability between several alternatives (in the spirit of what Tom is suggesting). I'm not suggesting that this is an easy task, but the existing (and now expired) TermInfo DSTU is helpful in several areas, and we intend to further expand upon and improve it. The approach that it currently takes with negation is both out-of-date (it discusses only Act.negationInd) and probably somewhat simplistic and, not surprisingly, SNOMED CT-centric, as its primary recommendation when using the SNOMED CT terminology is to avoid the use of negationInd altogether (it does, however, allow for some alternatives).
As I mentioned, in TermInfo we intend to enhance the guidance on negation and on any and all of the identified areas of information model and terminology overlap – beginning with SNOMED CT and V3, but also ultimately including V2 and FHIR, as well as CDA (which is already covered to at least a large extent under V3). We also plan to deal with the LOINC terminology explicitly, in addition to SNOMED CT. It's a rather tall order overall, and as much as possible we want to start where the need is most acute – I think that this current discussion corroborates that negation is undoubtedly one of those areas.
The TermInfo quarters in Phoenix are Tuesday Q2 and Q3, and it would probably make sense to begin this discussion with Vocab there (we can see what thoughts the other Vocab co-chairs have). Regardless of where and how many times this is discussed, the additional guidance that we create should become part of the TermInfo DSTU, which we hope we can bring to ballot within the next cycle (or two). Any work that can help to move things toward that goal will be much appreciated.
4 Jan 2013
- This topic illustrates one of our HL7 weaknesses – because it’s a difficult area and various solutions exist, we concentrate on enumerating the difficulties and the possible solutions. By the time we have done that, we have run out of energy and enthusiasm to provide someone new to the topic with any useful guidance through all those difficulties and solutions. Thanks therefore to Bo for re-igniting this thread. Thanks also to Rob for making a positive suggestion about where and when to try and produce some actual guidance on the topic. I’ll try and get to the session where it is discussed – but please can we focus that discussion on how to provide guidance (drafting a wiki page sounds like the next step to me), not rehearse again why this is difficult?
- At the risk of picking up the tail of a long discussion: I suggest that models should be designed and specified with specific terminology bindings in mind (one or more), with a clear process for adding new bindings (if this is to be permitted). Adding a new terminology binding should require a checklist of issues to be addressed (including negation and uncertainty/nulls).
A generic terminology binding checklist could be produced for V3+SNOMED (based on TermInfo guidance), and this could be refined as part of the ballot contents for new V3 message/document/service definitions if there are specific terminology binding issues that the committee or balloters become aware of.
The terminology binding conformance statements could then be written as human and machine readable rules and used as part of solution testing.
A good terminology binding specification would also include examples and other implementation guidance.
The terminology binding specification could be developed within the SDO that maintains the structural definition (i.e. HL7), the one that maintains the terminology, or by a third party. In any event, it would be useful to make explicit that defining a terminology binding involves more than defining a group of value sets, and needs some clear governance and review.
In a similar way for CIMI, FHIR, and other modelling paradigms within and beyond HL7 - if the terminology is not determined in the structural specification, then appropriate terminology binding specifications need to be included at each level of the stack (foundations, domains, standards, and implementation guides).
Ease of use for implementers and testers of implementations would be the criteria.
If this adds nothing to the discussion, then please ignore it.
- I think this does add value. The gist of it ought to go onto the wiki recommendations (or whatever) I just mentioned in my eMail.
- Like Charlie McC, I apologise for coming in late on this discussion, but I have been following it (not least because Himself at the desk opposite keeps mentioning it to me)…
I have written a short paper on this topic with Dipak Kalra; the paper was written for the semantic interoperability workgroup of a specific project and therefore was very much in the spirit of Peter’s comments, about finding what is best for a particular implementation.
In our case, we came down on the side of “the terminology”, and we listed the reasons why we did that, but of course that requires the terminology to have the requisite capabilities for the use case(s) in hand.
Michael van der Zel
- Reading through this thread, I find the variable "computable vs. human-interpretable" missing. I can imagine that it is important to use attributes when you do DSS: if the code includes the negation, the DSS must be able to find out whether negation is present or not. It is a gray area, as David Markwell describes it. You have model and terminology, and only with both together can you do DSS, or not?
On the practical side, since not all terminologies support post-coordination, I think an attribute in the model is the best option. But in the longer run it should go in the terminology.
This is my 2c.
Is the paper available?
- I’ve copied Dipak into the thread, so I’m hoping he will say whether it’s shareable beyond the project it’s currently in…. ☺
Hugh, et al.
RE: HL7 (/FHIR) weaknesses, and cases such as this, where we can/will never identify one single, "standard" way of communicating information about some particular aspect of the data
I've often been criticized for suggesting that spending time and effort striving for a standard that can support interoperability in the absence of any "out of band communications" is (at best) not a productive use of our time, and I've been asked for valid use cases where "out of band communication" cannot be avoided, and interoperability cannot happen without at least some "local agreements" to determine exactly how information will be exchanged, interpreted and understood – for example, via meetings and/or written and verbal conversations between the stakeholders.
Now, it seems to me that your solution to the problem that we're discussing – to abandon trying to write a single prescription for handling negation, and simply provide guidance – would in fact require that there be an "out of band" agreement between the interoperating parties as to how that guidance would be followed, and therefore could qualify as a valid use case for "out of band communication will be required" that we could all understand and relate to.
I believe that we may be able to conserve our "energy and enthusiasm" – and get more done of what we actually *can* get done – if we can just manage to avoid spending those limited and valuable resources trying to do things that are arguably not doable.
- Dear Julie,
Thanks for introducing the paper to this discussion group.
Please do share it - this learned group might help us improve it!
And, if it sparks some debate, that will also be useful.
- Thank you, Dipak
All – the paper is attached.
- Hi Thomas,
There are some aspects of FHIR that will not be nailed down in the core specification. Selection of terminologies for certain attributes is one of those. However, these can still be handled in a fully automated way through the use of profiles. So if your definition of "out of band" is "access to information not in the core specification", then yes, out-of-band communication is required. On the other hand, if your definition is "developer on the client side talking on the phone or by email to developer on the server side", then no, there's no need for out-of-band communication. Everything can be negotiated/understood at the system level without humans *needing* to talk to each other. (That doesn't mean that humans can't/won't sometimes choose to talk to each other.)
- Hi Charlie,
It adds, though it's easier said than done. Sometimes we can predict what terminologies might be used, but often they'll vary widely (think billing codes). I've added a couple of bullets to FHIR's list of "methodology guidance that we need to develop":
2.5.5 When should content (negation, uncertainty, null values) be handled in terminology vs. distinct attributes?
2.5.6 When terminology bindings aren't fixed in the core specification, how should a design accommodate the use of different terminologies that may require distinct modelling structures?
If anyone wants to start capturing guidance there, it'd be most welcome :>
- If all that we can offer, and all that two parties will have, is "guidance", best practices or advice, I don't see why you still want to insist that a Profile is going to solve the problem of negation. If that's really the case, then why not just generate those profiles and show us by example how they'll make the problem go away.
- Hi Thomas,
Profiles exist in specific contexts. If Canada creates a profile that selects a specific terminology, the profile can place constraints on the resource elements that reflect the use of that terminology, including how things like negation, uncertainty and null values are conveyed. We can't generate the profiles at the international level in most circumstances because the selection of terminology and modelling constraints is dependent on more localized requirements. But once you get down to the level of a particular interoperability problem space, profiles *can* be created, and once created are sufficient to define strict interoperability expectations without further negotiation between participants.
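Lloyd's description can be caricatured in a few lines: a profile, represented here as plain data, fixes which representation of negation a given context uses, so conformance can be checked mechanically. The profile structure and all names below are invented for illustration; this is not actual FHIR profile machinery:

```python
# Toy sketch of a profile fixing the negation representation for a
# jurisdiction, so instances can be validated without negotiation.
# The profile structure and all names are invented.

CA_PROFILE = {
    "negation_representation": "attribute",  # alternative: "terminology"
    "reason_value_set": {"approved-substitution", "formulary-requirement"},
}

def conforms(instance: dict, profile: dict) -> bool:
    if profile["negation_representation"] == "attribute":
        # Only positive codes allowed; negation goes in the indicator.
        return instance.get("reason") in profile["reason_value_set"]
    # Otherwise negation must live in the code, never in an indicator.
    return "negationIndicator" not in instance

print(conforms({"reason": "approved-substitution",
                "negationIndicator": True}, CA_PROFILE))                 # True
print(conforms({"reason": "patient-refused-substitution"}, CA_PROFILE))  # False
```

Two systems that both validate against the same profile can interoperate on negation without any further human negotiation, which is the claim being debated in the following messages.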
- One question for clarification, please:
It sounds like I'm hearing you say that the "Negation problem" that's under discussion (and that many agreed was not cleanly resolvable) will not exist once FHIR and suitable Profiles become available -- is that correct?
Asking another way, are you saying that "no human will ever have to talk to any other human" in order to interoperate safely – and that everything can be resolved automatically, dynamically, and programmatically using FHIR and a Profile – even when a mutual understanding of and agreement on the method of negation is of critical importance to, and an absolute requirement for, accurate communication?
If so, and unless I'm missing something big, then the issues around negation that we're finding so thorny would appear to be resolved.
- Hi Thomas,
The negation problem exists as much in FHIR as anywhere else. At the resource level, in some situations it'll be necessary for the model to support both a modelling and a terminology approach. The selection of exactly which approach gets used in an implementation will be pushed into profile space. In other situations, it may be possible to agree on an approach at the resource level (that would apply to all implementations, regardless of jurisdiction or domain).
I am not saying "no human will ever have to talk to any other human in order to interoperate safely". I'm saying that there will be environments that will use FHIR in such a manner that humans don't need to talk to each other to allow their systems to interoperate safely. There will be other environments where human-to-human discussion will still occur or even be necessary. In shorter terms: human-to-human discussion is not a de facto pre-requisite for interoperability using FHIR.
- You've gone full circle and ended up just contradicting yourself.
Let me quote your first email:
There are some aspects of FHIR that will not be nailed down in the core specification. Selection of terminologies for certain attributes is one of those. However, these can still be handled in a fully automated way through the use of profiles. So if your definition of "out of band" is "access to information not in the core specification", then yes, out-of-band communication is required. On the other hand, if your definition is "developer on the client side talking on the phone or by email to developer on the server side", then no, there's no need for out-of-band communication. Everything can be negotiated/understood at the system level without humans *needing* to talk to each other.
And then your last email:
I am not saying "no human will ever have to talk to any other human in order to interoperate safely". I'm saying that there will be environments that will use FHIR in such a manner that humans don't need to talk to each other to allow their systems to interoperate safely. There will be other environments where human-to-human discussion will still occur or even be necessary.
So you start a debate by saying one thing ("Everything can be negotiated/understood at the system level without humans *needing* to talk to each other.") and end it saying the opposite ("There will be other environments where human-to-human discussion will still occur or even be necessary.")
My initial post was simply a response to Hugh's observation that providing guidance to implementors was a more realistic and achievable goal than trying to "standardize" on one negation mechanism. That scenario -- where the parties would use both an HL7 Standard and HL7's guidance for dealing with what's NOT fully addressed in the standard and agree on the details of handling negation based on their respective constraints and capabilities -- struck me as a reasonable use case for "interoperability that would require out of band communications and/or local agreements".
You'd asked me to provide this for you in the past, so I'm not sure why you could not just let it stand -- especially given the roundabout that your arguments against it led to.
5 Jan 2013
- Dear HL7 friends,
I hardly ever interfere with your discussions, though I read most of them.
But this time I want to say to TJL:
a. the purpose of a discussion is not to combine words to try to catch an opponent, and
b. you obviously did not read Lloyd's response well enough, and unfortunately made a wrong combination.
I understand exactly the meaning of Lloyd's words. Please read it twice in the future and think well before you react.
That is what I did now too with this unusual response. But it is more important for all of us to find the best routes to solutions.
- >> "I understand exactly the meaning of Lloyd's words."
Perhaps, Jos, but based on your nonsensical conclusion that my goal was to "try to catch an opponent", you clearly do not understand mine.
I hardly consider Lloyd an "opponent", and hope that you are the only one who has that perspective.
7 Jan 2013
We currently have open quarters on Tues Q1 and Thurs Q2, but we do not have a room for Thurs. If this is still something folks want to do, please work with the vocab co-chairs to solidify the time and day.
Tom de Jong
- Hi Jim,
I understood from Rob Hausam’s mail that this would be handled in the TermInfo quarters on Tuesday Q2 or Q3. Any of the first three quarters on Tuesday would work for me, but I have a preference for Q2 or Q3. I was hoping the vocab co-chairs would schedule this and send out an invitation.
- I proposed that one of the Terminfo quarters would be a good place to have this discussion within Vocab, but it's not the only or necessarily an exclusive option. I haven't polled the other Vocab co-chairs yet and so far only Jim has commented. Jim has been taking the lead on the Vocab WGM agenda, but I'll check with the other co-chairs and we'll see if we can nail down a plan for this and let everyone know. If anyone else has specific conflicts or preferences about when you would like to see this discussed, please let me or any of the Vocab co-chairs know.
8 Jan 2013
- I’d really like to see this discussed as part of a Terminfo – the only session I could manage would be Q1 Tuesday.
Tom de Jong
- Tue Q1 would be fine for me too, but I don’t think that’s currently scheduled for TermInfo. Anyway, I’ll await what vocab co-chairs decide.
- This has gotten a bit confusing, so we will deal with it on the vocab call on Thursday. One of the co-chairs (Rob?) will get back to this group with the final plan.
Tom de Jong
- That’s fine. By the way, I have to add that Tue Q3 is no longer an option for me.
Is there any chance that you might be able to make it to Terminfo for a discussion in part of Q2? We'll also look at Q1 - maybe we could consider that an "extension" of Terminfo?
- Not really – I could manage a few minutes perhaps, but I’m co-chair of a FHIR group having its first meeting. I think you just have to go ahead without me.
9 Jan 2013
Sounds like that's clearly not an option. But we will try to accommodate your schedule, if possible. It's looking to me like Tuesday Q1 may be the best choice, assuming that there are no other conflicts or considerations that I'm not presently aware of. If that works we can at least begin the discussion there, with your presence. And if it needs to carry on into the "official" TermInfo quarters or elsewhere, we can do that, as well. As Jim noted, we'll finalize this tomorrow on the Vocab call.
- That would be good – I think I’ve juggled the Pharmacy agenda so I will only have 1 conflict then! Thanks for being obliging.
14 Jan 2013
- Here is the plan for the Vocab/TermInfo discussion of this topic in Phoenix. The initial discussion will be in Vocab Q1 tomorrow (Tuesday), in the Palo Verde room. The first topic for the quarter is CVX OIDs, and then we will pick up the negation discussion in the latter half. Since this is a TermInfo-related topic that likely will need considerably more than a half-quarter discussion, and not everyone interested is available in Q1, we will recap and plan to continue the discussion in TermInfo in Q2 (same room).
...[GTM details deleted]...
- I do realize I’m dropping in very late to this discussion, but my preference, like Tom’s, is to be very explicit for those scenarios where we, at time of resource design, identify the need for negation and realize that misinterpretation is a risk to patient safety. Unlike in v3 (and that is why Terminfo is less applicable here), we can add specific attributes to a resource to account for negations, so there’s no need to employ a generic mechanism like negationInd.
If I take Procedure as an example, instead of having a negation attribute, I’d model an explicit attribute “notDone”. Whether or not the terminology we use for Procedure.code supports negation, the need for processing this attribute would be evident to the developers of both the sending and receiving applications. Even if Procedure.code included negation, it would clearly be a restatement rather than a double negation.
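As an illustration of why the explicit flag is unambiguous to a developer, here is a minimal sketch; the dict layout and the "notDone" field name are hypothetical stand-ins for the proposal above, not an actual FHIR structure:

```python
# Hypothetical Procedure-like records; "notDone" is the illustrative
# boolean flag proposed in the thread, not a confirmed FHIR attribute.
appendectomy_done = {"code": "80146002", "display": "Appendectomy", "notDone": False}
appendectomy_not_done = {"code": "80146002", "display": "Appendectomy", "notDone": True}

def describe(procedure: dict) -> str:
    """Render the assertion; the flag's meaning is explicit to sender and receiver."""
    verb = "was not performed" if procedure.get("notDone") else "was performed"
    return f"{procedure['display']} {verb}"

print(describe(appendectomy_done))      # Appendectomy was performed
print(describe(appendectomy_not_done))  # Appendectomy was not performed
```

Note that the receiver never has to inspect the code system to detect the negation; the flag carries it regardless of vocabulary.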
- ….. , I’d model an explicit attribute “notDone”.
There is a need to express at least one more negation statement: “not known” (i.e. it is not known whether the/any procedure has been performed). This is distinctly different from asserting that the/any procedure is not done.
Tom de Jong
- Hi Stephen,
There might be a need for that (although I can already hear the 80% police in the background;-), but you are right that it is separate from ‘notDone’. Unless we want to consider a data element with a concept domain that has ‘done’, ‘possibly done’, ‘not done’, but that to me sounds far-fetched.
By the way, I like Ewout's suggestion for 'notDone' instead of the V3 negation indicator, since it would indeed prevent conflict with vocabulary that implies the same. That solution only works for FHIR though, and the debate was about representing negation in general, also within a V3 context.
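If anyone did want to explore the three-valued alternative mentioned above, it could be sketched as a single coded element; the names and values below are purely illustrative and come from no HL7 vocabulary:

```python
from enum import Enum

# Hypothetical three-valued status combining "notDone" with Stephen's
# "not known" case; names and codes are illustrative only.
class ProcedureStatus(Enum):
    DONE = "done"
    POSSIBLY_DONE = "possibly-done"  # performance status not known
    NOT_DONE = "not-done"
```

The cost, as noted, is that receivers must now handle three states instead of a simple boolean.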
- This is also a hot topic on MnM on Thu Q2, I brought in a request to get guidance on this and hopefully a consolidated solution for all of the HL7 standards :-)
As already mentioned we need something which works with what we have now and of course plans how to do this in the future.
16 Jan 2013
- I would strongly encourage you to look at the SNOMED concept model for Context for Clinical findings, Events, and Procedures. It is quite reasonable. It would be unfortunate to not take advantage of the existing experience. We need to be able to map into SNOMED context dependent expressions for interoperability and for reasoning.
Don't try and reinvent the wheel.
- One way to tell if two HL7 models, that have the negation expressed in different parts of the model, are "semantically equivalent" would be to normalize them to RDF triples. Then compare the two graphs and you'd see if they are the same.
Triples are Subject Predicate Object. So an example of two very similar but different negations, one on the predicate and one on the object, looks like this:
Patient123 does NOT have an associated condition of pneumococcal pneumonia.
Patient123 does have an associated condition of something that is NOT pneumococcal pneumonia.
These clinically say a similar thing, but logically they are not the same. In the first, the negation is on the predicate (associated condition) and in the second it is on the object (diagnosis).
I think this is the only way to normalize models and see if they "mean the same thing". A human clinician might just think that the two examples here are the same thing.
In the first, there may or may not be any associated condition. In the second, there is definitely an associated condition, but that condition is not pneumococcal pneumonia.
- If you decompose each example into triples, they may mean the same thing "close enough for clinical work", but not exactly "semantically equal" in a logical sense. Here is the non-negated statement, followed by the two ways of negating, followed by the strict logical interpretation.
1 This person is observed to have a diagnosis of pneumonia.
Now two ways to negate, one on the model attribute and one on the vocab, transformed to the logical assertions
2 This person is NOT observed to have a diagnosis of pneumonia
3 This person is observed to have a diagnosis of NOT pneumonia
Clinically these are close enough to conclude "this person doesn't have pneumonia" but logically they are slightly different.
In 2, the person might have pneumonia that wasn't observed.
In 3, they were observed to have something, but it was not pneumonia.
But the "intent" of the sender was to communicate to the receiver that "this person doesn't have pneumonia".
Since the machines are literal, these would not be "semantically equivalent". But they may carry the same clinical intent from the two senders.
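The literal graph comparison described above can be sketched with plain triples; the subject, predicate, and object names are made up for illustration and belong to no real ontology:

```python
# Represent each statement as a set of (subject, predicate, object)
# triples and compare the graphs literally, as a machine reasoner would.

# 2: negation on the predicate -- NOT observed to have pneumonia
negated_predicate = {
    ("Patient123", "NOT_isObservedToHave", "DiagnosisOfPneumonia"),
}

# 3: negation on the object -- observed to have something that is NOT pneumonia
negated_object = {
    ("Patient123", "isObservedToHave", "NOT_DiagnosisOfPneumonia"),
}

# Clinically similar, but the graphs are literally different.
print(negated_predicate == negated_object)  # False
```

A receiver normalizing both messages to such triples would see at once that the two negation styles do not produce the same graph, which is the point being made about "semantic equivalence".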
17 Jan 2013
I am concerned that asserting that someone has something that is NOT pneumonia could be taken as a way of negating pneumonia. You would also need to assert that they don't have any other disorder besides the NOT pneumonia, and even then you are not quite there.
Not saying that something is present is quite different from saying that something is not present!
Maybe you didn't look for pneumonia in the febrile patient with a cough because you diagnosed them with influenza. Even if they have influenza, they still may have a viral pneumonia or a secondary bacterial pneumonia. If you did some reasonable workup (perhaps a CXR along with exam and room air pulse oximetry) looking for pneumonia then you could make the second affirmative statement that pneumonia is not present.
413350009|Finding with explicit context|:
408729009|Finding context|=410516002|Known absent|,
408731000|Temporal context|=410584005|Current - specified|,
408732007|Subject relationship context|=410604004|Subject of record|
So, for affirming that something is known to not be present, right now we have a good way using SNOMED-CT Finding/Procedure/Event with explicit context to help us out. While there are some who will be left out in the rain if we forge ahead with SNOMED-CT, it is a pretty reasonable thing for most of us (80%?!?!) to do. It may be reasonable to look at the SNOMED-CT (and LOINC, and RxNorm/NDF-RT/other drug vocabularies, e.g. WHO ATC) concept models and draw inspiration from them. I think the pattern works.
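A rough sketch of assembling the "known absent" context expression quoted above as a post-coordinated string; the 246090004 |Associated finding| attribute and the pneumonia code in the usage line are recalled from memory and should be verified against a current SNOMED-CT release, and the grammar here is only approximate:

```python
# Sketch: build a "known absent" SNOMED-CT expression-with-context string.
# Attribute and example codes are from memory; verify before real use.
def known_absent(finding_code: str, finding_term: str) -> str:
    return (
        "413350009|Finding with explicit context|:"
        "{"
        f"246090004|Associated finding|={finding_code}|{finding_term}|,"
        "408729009|Finding context|=410516002|Known absent|,"
        "408731000|Temporal context|=410584005|Current - specified|,"
        "408732007|Subject relationship context|=410604004|Subject of record|"
        "}"
    )

# Usage: assert the absence of pneumonia (233604007 assumed to be the code).
expr = known_absent("233604007", "Pneumonia")
print("410516002|Known absent|" in expr)  # True
```

The point of the pattern is that the negation lives entirely in the terminology layer, so no model-level negation attribute is needed.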
- In general, even though we at KP use SNOMED, we avoid Situation with explicit context, as it is the least standard part of SNOMED.
I was not writing about what a clinician wants the thing to mean. I was writing about what a reasoner looking at the triples literally will say it means.
What you really want to mean requires parentheses which are not part of the model.
This person does NOT have (an observation of pneumonia)
But without the parens, and with the literal meaning of what the negation on the predicate or object of the triple means, it's as I've stated. Not what you want to say, but how it is interpreted if the predicate or role is "is observed to have" and the object of the triple is "a diagnosis of pneumonia".
I'm not recommending anyone use Situation with explicit context in SNOMED.
It's a long conversation, and somewhat subjective, but I like to use the disorders and findings themselves.
Situation with explicit context opens up a can of worms.
- >We need to be able to map into SNOMED context dependent expressions for interoperability
I don’t know. How’s this iOS developer who worked on a Nike app last month and now has 4 weeks to connect to a PHR ever going to figure this out? Unless the value sets are extensionally defined and he can just look through the enumerated codes, I have no confidence that you can get the meaning across. I’m deliberately taking a simplistic stand here, just to point out that although I see use in more complex use of coding systems for analysis purposes, the people actually writing the software to communicate, and most probably process and display, this data in the first place aren’t terminology experts. How would we cater for them?
- How would the aerospace industry handle this guy wanting to write a glass cockpit / guidance app for a 767?!?
While we want to make it easier to have safe and effective use of the technology, we desperately need qualified developers, software engineers, etc. to be doing this type of work.
A PHR is not the same as a Twitter account. If your proposed developer is well versed clinically, a solid software engineer, and has a good understanding of ontologically oriented terminology systems and standards-based information models, I think he will be fine without the net. However, without the needed knowledge and skills, they should probably look into doing something else.
I have a list of things that I would love to have someone develop for the iPad--how about a decent tool to create value sets with visual navigation of a graph of concepts and the ability to switch between terminologies easily? A nice RDBMS which can do SQL and import text files? A proper UML class editor that can export in a recognizable XMI format (and supports templates, enums, stereotypes, and profiles)?
We may be able to lower the bar in the not-so-distant future when we have mature tool kits (that have at least made it out of beta), but right now we need to figure out the right way to do things completely, consistently, and unambiguously, without vagueness, with explicit context.
Then we can write the specifications that can extract and transform the needed information from the model-of-meaning into the much easier to deal with model-of-use. We cannot hamstring efforts needed for the more challenging use cases because those without the experience and training want in. We can accommodate them, but there is a risk to doing this prematurely.
- Why would you can a bunch of worms in the first place?
18 Jan 2013
- Same reason you can anything. Because someone will buy them.
- There are many issues and many solutions here. The outcome of the discussion of this topic in Vocab / Terminfo this week was to get this topic documented and lay out the best resolution for each of a number of solutions. The hope is that designers will then be able to make best use of this advice to provide specific advice for a specific design. Implementers then should have something appropriate to work from.
eMail threads like this are all well and good, but all the wisdom being poured out is lost -- we have to get better at capturing it and making it available to implementers.
- >> "eMail threads like this are all well and good, but all the wisdom being poured out is lost "
Please see my comment on Keith's recent blog posting RE this very issue:
I think that this "information leak" is a huge problem for HL7 -- maybe even bigger than anyone who frequents these list discussions can ever fully appreciate.
- >> "How would the aerospace industry handle this guy wanting to write a glass cockpit / guidance app for a 767?!?"
I don't think that the demand for developers of aircraft guidance software will ever approach even a fraction of a percent of the world-wide demand for HCIT developers, so this is not a fair comparison -- and IMHO does not reduce the correctness of Ewout's position that HL7 standards need to be usable by and accessible to any good, competent developer if they are going to be as widely adopted as we'd like them to be.
- Very good point, Hugh. I have an action item to create a place for continuing documenting of these issues and solutions on the TermInfo Wiki page. I will try to get that done soon (maybe today?) and will include the email thread, as well as the relevant guidance from the existing TermInfo DSTU document. We will need to summarize and distill the recommendations from there, but it will serve as a starting place for the further discussion and documentation.