
March 15, 2011 CBCC Conference Call

From HL7Wiki

Meeting Information



  1. (05 min) Roll Call, Approve Minutes & Accept Agenda
  2. (40 min) Presentation Draft (link to be added) - Serafina
  3. (5 min) SHIPPs Project - Mary Ann (note: Per Ioana - We have until April 1st to submit these comments)
  4. (15 min) Confidentiality codes - topics for discussion in May 2011 meeting; preliminary discussion (need to structure) - Mary Ann, Jon Farmer

Meeting Minutes

1. Roll Call, Approve Minutes March 8, 2011 CBCC Conference Call Accept Agenda

2. Presentation Draft - Serafina [Ioana also has slides that she may want to circulate to CBCC]

Question (Mary Ann): Was the Analysis of Quality Measures document sent out? The comments were specific to the documents I have been reviewing. Richard is looking for highlights of those documents so that they could potentially go into public comment, wanting to illustrate any patterns.

[Serafina] – has a presentation for our internal group to help everyone understand the approach we took to the individual eMeasure comments that Mary Ann circulated already. The presentation is a work in progress; I'd like feedback to improve it. Walking through it should help you decide how you want to review the individual comments that Mary Ann uncovered in her analysis of the behavioral health measures that have been put out for comment.

[Richard] – I’m assuming that there are layers of an onion here: we have to be able to tell a very simple top-layer story about why we did this and what is necessary to make it real. During the conversation we’ve had some strong criticism.

[Serafina] – that’s one of the reasons I would like to go through this presentation; it shows the approach. Some of the comments we made are pretty strong and I might temper them as well, but I’d like to get feedback from the group. We can do this quickly.

[Richard] – It would be useful if we can have something to circulate in the near future. I’m not sure what’s going to happen in 2013; Stage II things are already cooked, but I guess there might be some flexibility. I would love to make a case that we could pursue a couple more measures that make the comments more meaningful. If I’m on target here, I’m looking for ammunition. This effort is in SAMHSA’s interest, and we need to compare the quality measures and have something meaningful.

[Mary Ann] – I can show a handful of the comments that I sent, to show you the ‘’high-level, easier ones’’ (referring to comments on the NQF measures).

[Serafina] – essentially what I was trying to do with this presentation is to give some background on the objectives of the eMeasure retooling, as well as the approach we’ve taken in analyzing the eMeasures, with the intent of coming up with an information model to support these eMeasures. This was also the approach taken in responding to the comments on the eMeasures.

[Ioana] – looking at the objectives of the whole eMeasure project, which was to improve the clarity and accuracy of the way measures are computed and perhaps eliminate the manual process, which can be very time consuming. With that premise in mind, we’ve looked through the measures, looked at ‘the before’ and ‘the after’, to see what was gained, what was lost, and what could be done better. Is that a fair summary? You can put a note in the slide about the original objective of the project. We are trying to add an additional level of clarity to this effort and perhaps in the future become integrated with it.

[Serafina] – yes, that’s a fair summary. I’ll write up a quick slide to explain the original objectives of the projects. I talked about the eMeasure retooling project; SHIPPs is a sub-project analyzing it.

[Serafina] – the retooling project converted the existing NQF-endorsed measures into an electronic measure format. Their intent (from slide) was to preserve the original measure intent, because they didn’t want to make any substantive changes; that would require re-evaluation of evidence and intent. The retooling was meant to add specificity through the context of the data used to report on each measure.

[Ioana] – a comment Serafina had made in a previous meeting: one of the problems was the high rate of invalid reporting, because there is too much interpretation and not enough specificity in the current eMeasures. Automation is another subtask of it. Today’s reality: we aren’t looking at a patient across multiple care areas. Most of the measures included on the endorsed website are process measures; there are some outcome measures. They also include some structural and patient-composite measures, which have specific definitions on the website. The focus today is on ‘’’process measures’’’, NOT outcome measures; outcome measures are another kettle of fish. In the behavioral health arena, we have seen 7 process measures and 1 assessment measure (a self-assessment measure). As an example from the NQF website, ‘’hypertensive blood pressure under control’’ is considered an outcome measure; with the data found in our systems we can theoretically provide these outcome measures, within the context of the agency/group doing the reporting.

In the NQF materials there is a statement that the next generation of measures will span healthcare settings to give a more complete measure of care. That’s what we’re interpreting as outcome measures. We want to see the patient-centric picture when patients receive care from numerous groups. That’s what we should be measuring, and there is a lot of infrastructure required to support that.

[Ioana] this next generation assumes that we all adopt EHR systems.

[Serafina] – it assumes it and enables this next generation. I tried to reiterate some of Richard's questions: what happens when a patient is seen across multiple provider organizations? Where does the patient get counted? Right now the retooled measures are organization- and provider-centric, not patient-centric. I’ve captured comments that we should address in our SHIPPs model. We may want to include an individual provider, or information from a provider network/system.

[Mary Ann] – when we say organization, we are talking about a single organization, not across organizations.

[Serafina] – the point is that reporting is done by either individual providers or healthcare organizations: an entire organization, or all of a particular payer’s organizations. The structure depends on what comprises that structure. It’s not across organizations; it’s one organization. If an organization has child elements and they can bundle their reporting together, then it’s still considered only ‘one’ organization.

[Ioana] – it’s an important distinction: organization-based versus provider-based, and sometimes multiple providers could be subsumed under a single organization as a rubric. A payer would be reporting information from multiple providers; the point is that the focus is not on the patient, the patient experience, or the patient’s healthcare quality, but on the payer-centric organization.

[Serafina] – when a payer organization is doing the reporting, and a provider is working with more than one provider group, that provider’s data may be reported in multiple places. It’s about the organization the data is coming from; it’s about who the owner of the data is.

[Richard] – you’re right to emphasize the patient record. We should focus on what is happening with/to the patient, but we still want to evaluate the provider. In the IHE space, we want to look at the patient’s services (all the services the patient gets) and see if there was any coordination of care.

[Serafina] – we might be talking about a team of providers; certainly with where we are trying to go with healthcare reform. We want to look at the umbrella of where patients might be expecting to receive care.

[Jon] – there is a term in statistics, ‘’population coherency’’; that’s what these current measures fail to achieve. We’d like to have statistics on all kinds of things, but when you have multiple measurements in the same geographical area and no guarantee that those measurements collectively cover that population, then the denominators are all suspect.

[Richard] – or they overlap, duplicate, or intersect in other ways. Then we have lots of data but no information.
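Jon's population-coherency point can be pictured with a minimal sketch (the patient IDs and panel contents below are hypothetical, not from any actual report): when two organizations in the same area each report a denominator independently, shared patients are double-counted and unseen patients are missed, so neither the sum of denominators nor any single denominator reflects the true population.

```python
# Minimal sketch of the population-coherency problem.
# All patient IDs and panel memberships are invented for illustration.

area_population = {f"pt{i}" for i in range(1, 11)}  # 10 patients in the area

org_a_panel = {"pt1", "pt2", "pt3", "pt4", "pt5"}   # reported by organization A
org_b_panel = {"pt4", "pt5", "pt6", "pt7"}          # reported by organization B

naive_total = len(org_a_panel) + len(org_b_panel)   # 9: pt4 and pt5 counted twice
covered = org_a_panel | org_b_panel                 # only 7 distinct patients measured
missed = area_population - covered                  # pt8-pt10 never measured at all

print(naive_total, len(covered), sorted(missed))
```

The gap between the naive total (9), the distinct coverage (7), and the area population (10) is exactly the overlap-and-omission problem Richard describes.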

[Mary Ann] – what Serafina is trying to describe is the reality of today.

[Ioana] – that’s the challenge if SAMHSA is trying to champion this: this is the problem space, here’s where we start, and here’s where we want to go.

[Serafina] – (the hyperlinks are currently having problems) but if you click on the #113 eMeasure link it will take you to the page where you can do this yourself. We downloaded these measures; each is described in 4 files. The other important thing is that there is a guide for reading these eMeasures that is a critical document to understand. We’ve got this group of eMeasures in these 4 major files. We compared what was in there, under the same measure name, to what NQF endorsed (see hyperlink); that will take you to the link where you can view the details of the material from the site. Everyone who wants to look at this should register; it’s free (and anyone can register). You can get to the actual full details of the endorsed standards. Now going through some examples (lots of work to be done): I’ve got an example of an endorsed measure; this is what’s currently being used, the narrative NQF-endorsed measure.

Under NQF measures (slide): this is the original narrative endorsed measure we are looking at here, measure NQF004. In the NQF measure (logged in), I took a cut-and-paste snippet of the entire narrative description of what the measure was. This is the current state; now we go into the future state. During our analysis what we found is that you have to read the guide. We had viewed documents early on, before the 113 were posted on the site (those were not the full eMeasures, just a representation); the format has significantly changed from the original. There are three versions of the measures: the original narrative version; a version that has been reviewed, with a quality data set associated with it and some processing logic extracted from the narrative; and the version we are reviewing now, the intermediary, which is also a different representation.

[Serafina] – before this latest version (the zip file with the reading guide), we just looked at the pseudo code; there was no explanation of why we needed it. This guide steps through the process of how to read it.

There has been a significant change in the format; in addition, they’ve included some instructions on how to decode some of the measures. It’s more understandable, but you do need the information close at hand in order to interpret it correctly. This is why I’ve tried to explain how important it is to read the reading guide. The guide explains the ‘and’ and ‘not’ operators, and it gives very specific data-element interrelationship cues which are not self-explanatory. They’ve also provided the pseudo code and a narrative that describes what the pseudo code means, which is very helpful to ensure you are interpreting the pseudo code correctly (and consistently).

[Ioana] – right now the pseudo code cannot stand alone, per your estimation. Would the average programmer be able to interpret the code correctly? They’ll still have to transcribe it into their language of choice.

[Jon F] – the spec would be ambiguous if you get three out of five different interpretations. [Serafina] – with the whole NQF measure as it existed, that would certainly result in 3 out of 5 interpretations.

[Ioana] – the eMeasure makes things better but doesn’t make them unambiguous.

[Serafina] – NQF005, diabetic foot care, used as an example; this will become relevant to the behavioral health measures. This is one example from the actual eMeasure as it would be implemented. There are 4 files; what’s shown is cut and pasted from the HTML version, which gives a narrative description up top. It then goes into all the pseudo code and describes the patient population, the denominator, the numerator, any exclusions, and then the data criteria (QDS data elements), which are included in the spreadsheets being shown (code-list spreadsheets). This is from the endorsed standard currently being used; what I’m showing you next is a snippet of the entire electronic eMeasure.
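The structure Serafina walks through (patient population, denominator, numerator, exclusions) combines into a rate in a standard way. A minimal sketch, using invented patient data and criteria rather than NQF005's actual specification:

```python
# Hypothetical sketch of how an eMeasure's parts combine:
# initial population -> denominator -> minus exclusions -> numerator -> rate.
# Field names and patient records are invented for illustration.
patients = [
    {"id": 1, "diabetic": True,  "foot_exam": True,  "excluded": False},
    {"id": 2, "diabetic": True,  "foot_exam": False, "excluded": False},
    {"id": 3, "diabetic": True,  "foot_exam": False, "excluded": True},  # e.g. an exclusion criterion applies
    {"id": 4, "diabetic": False, "foot_exam": False, "excluded": False},
]

initial_population = [p for p in patients if p["diabetic"]]
denominator = [p for p in initial_population if not p["excluded"]]
numerator = [p for p in denominator if p["foot_exam"]]

rate = len(numerator) / len(denominator)
print(f"{len(numerator)}/{len(denominator)} = {rate:.0%}")  # prints "1/2 = 50%"
```

Ambiguity in the pseudo code (the 'and'/'not' operator cues discussed above) shows up precisely in how each of these filter steps is transcribed.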

Their stated intent for the retooling was not to make any substantive changes in the measures, and that’s why the data elements matter: the question is whether the changes are in fact substantive. When we say we’ve reviewed the measures, we ONLY reviewed the 8-9 behavioral health measures; but this is the kind of example to take note of, where in the retooled version a new criterion is being introduced.

[Richard] – in regard to the ambiguity about the different measures, are they informed of these document changes?

[Mary Ann] – Serafina is showing you the overview; but I will go over this.

[Ioana] – we have made a deliberate attempt to compare these measures to their endorsed versions to make sure they didn’t change; if they changed, they know they have changed.

[Serafina] – the QDS data measure information is coming from the retooled spreadsheets (some of the columns have been hidden); the primary columns needed are shown. We discovered that in the interim and final artifacts the standard category is critical, but there is a qualifier for the standard category, called the QDS data type, which was in the original artifacts we were given to work with. The QDS data type here qualifies the context of the diagnosis; note that this column has not been replicated in the final version of the retooled eMeasure spreadsheet.

[Ioana] – for instance, the QDS data type gives you additional information beyond the standard category. The standard category is Diagnosis_Condition/Problem; the QDS type is Active_Diagnosis, so it gives you the state of the diagnosis you are looking at, or a procedure performed rather than a generic concept. In other cases, e.g. when the QDS_DataType says birth ‘time’, it tells you which characteristic of the person you are looking at: the standard category would just say ‘Person’, but the QDS data type gives you an idea of how the value set is supposed to be applied. It’s almost like an additional attribute for understanding how the value set applies: this value set applies to these active problems or procedures.
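One way to picture the qualification Ioana describes: the standard category names the kind of thing, and the QDS data type constrains the state or context in which the bound value set applies. The field names, class name, and OID below are illustrative, not the retooled spreadsheet's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DataCriterion:
    # Broad category, as in the spreadsheet's "standard category" column.
    standard_category: str
    # Qualifier giving the state/context in which the value set applies.
    qds_data_type: str
    # Hypothetical placeholder identifier for the bound value set.
    value_set_oid: str

# "Diagnosis, Active": the value set applies only to currently active
# problems, not to family-history, inactive, or resolved diagnoses.
active_dx = DataCriterion(
    standard_category="Diagnosis/Condition/Problem",
    qds_data_type="Diagnosis, Active",
    value_set_oid="2.16.840.1.113883.x.y",  # placeholder, not a real OID
)

# Dropping qds_data_type, as the final spreadsheet did, loses this
# distinction: an inactive or family-history diagnosis would satisfy
# the same bare category.
print(active_dx.qds_data_type)
```

This is exactly the information Serafina notes was removed from the final spreadsheet version.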

[Serafina] – this is the Quality Data Set model, another document which accompanies this whole process. Here it lists one standard category, Diagnosis-Condition-Problem; this is the standard category element reflected in the spreadsheet, though unfortunately the spreadsheet says ‘other condition diagnosis problem’, so there is a discrepancy. Then the category gets qualified: the context of this category of information can be DiagnosisActive, DiagnosisFamilyHistory, DiagnosisInactive, or DiagnosisResolved. In the interim version of the spreadsheet they included this; it is critical information for how you pull the data. Showing all the columns in the latest spreadsheet, note that they’ve removed the QDS data type. Does that mean it’s no longer needed? It’s probably something they removed without understanding its value.

I will be circulating this and would love input: is it too much data? Not enough comment provided? Etc. As for the process for submitting comments: if you choose ‘general comment’, unfortunately it’s a general comment only on a particular piece of the measure; there is no way to submit a general comment on the document. The process to submit comments is very tedious. Ioana will take an action item to contact Floyd to see if there is an alternative way to submit comments. The process is not optimal for submitting multiple comments.

3. SHIPPs Project - Mary Ann

[Mary Ann] – all the information that Serafina has shown you, I’ve used to submit my comments (placed into a Word document for submission, to be placed on the website); there are 45 comments. Note that there are many discrepancies between the NQF-endorsed versions and the most current versions (and sometimes in the pseudo code).

[Richard] – by shifting the focus to the payer, the payer was going to use their data to evaluate the provider. In the new retooled measures we are not going to use the same data; the data may not be the same data coming from your EHR. If I’m directly reporting data from my EHR, the payer may have information from several other providers. I’m talking about knowing or not knowing about data.

[Jon] – we have to talk about consistent definitions and what their semantics are. That is necessary but not always sufficient. There is accuracy and completeness, and then there are common definitions; each is necessary and neither alone is sufficient. We shouldn’t rank which is more important; they’re both important.

[Ioana] – what is apparent is that there are certain constructs in certain measures which are not called out; for some parameters it would be helpful if those constructs were called out. In the retooled measures you are referring to, they have expanded the value sets to allow the measures to be computed from clinical records.

[Richard] – but they may have also reduced the amount of information available; they don’t have the source of the information, while the payer may have it. The source of the information drives the validity of these measures.

[Jon] – factoring out some of these components of the numerator and denominator is absolutely crucial if we are ever going to get this under control: factor out those components and their parameters, otherwise we will not be able to get our brains around this or make it consistent.

[Ioana] – yes, we have to word this very carefully so that everyone will get what we mean.

Remaining discussion has been tabled in the interest of time.

4. Confidentiality codes - Topics for discussion in May 2011 meeting;

Preliminary discussion about confidentiality codes, focused on the structure/process around what we are going to do rather than the discussion itself. One concern with the current HL7 Confidentiality Code (value set?) is that there might be multiple concept dimensions packed into that one code. That’s a perceived weakness of the current set of codes.

In terms of structure, what would we need to discuss prior to May? E.g., one item might be a draft statement of the things that need to be sorted out.

(See e-mail from Jon)

In the current set of confidentiality codes there may be multiple dimensions. What are the dimensions? If we have a draft set of dimensions, what are the value sets for each of those dimensions? How exactly do we split out the concepts that are currently conflated?

We can propose two steps:

1. List the dimensions that need to be split out from each other

2. Draft the value sets for each of those dimensions.

E-mail from Jon Farmer (received 3/15/2011 @ 12N PST) further detailing the above discussion

It may be helpful to break the value set into about four, so that implementers are more likely to implement all aspects, or can at least meaningfully claim which ones they implement:

1. “Levels” (low, normal, restricted, and sensitive)

2. “Sensitive context category”, type or “rationale” (PSY, HIV, SDV, VIP, Taboo) – Segments

3. “Role” (clinicians only) – RBAC roles?

4. “Policy mode” (individuals having treatment relationship to patient, as actor) –this is a “policy model”
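Jon's four-way factoring could be sketched as separate value sets, so an implementer can state explicitly which dimensions they support. The enum and field names below are illustrative renderings of the four lists above, not an approved HL7 vocabulary.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Level(Enum):
    """1. Overall confidentiality level."""
    LOW = "low"
    NORMAL = "normal"
    RESTRICTED = "restricted"
    SENSITIVE = "sensitive"

class Category(Enum):
    """2. Sensitive-context category or 'rationale' (segments)."""
    PSY = "PSY"
    HIV = "HIV"
    SDV = "SDV"
    VIP = "VIP"
    TABOO = "Taboo"

class Role(Enum):
    """3. Role restriction (e.g. RBAC roles)."""
    CLINICIANS_ONLY = "clinicians only"

class PolicyMode(Enum):
    """4. Policy model keyed to an actor relationship."""
    TREATMENT_RELATIONSHIP = "individuals having treatment relationship to patient"

@dataclass
class ConfidentialityLabel:
    # Factored label: each dimension is independent, so a system can
    # meaningfully claim which dimensions it implements.
    level: Level
    category: Optional[Category] = None
    role: Optional[Role] = None
    policy_mode: Optional[PolicyMode] = None

label = ConfidentialityLabel(Level.RESTRICTED, Category.PSY, Role.CLINICIANS_ONLY)
print(label.level.value, label.category.value)
```

The point of the factoring is visible in the type: a single packed code forces every implementer to handle all four concepts at once, whereas separate dimensions let each be adopted, validated, and claimed independently.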

Meeting adjourned at 12:10 PST

Back to CBCC Main Page