December 14, 2010 CBCC Conference Call
Contents
- 1 Community-Based Collaborative Care Working Group Meeting
- 1.1 Meeting Information
- 1.2 Attendees
- 1.3 Agenda
- 1.4 CDA R2 Implementation Guide for Consent Directives
- 1.5 SHIPPS Project Outreach
- 1.6 SHIPPS Co-Sponsorship – email exchanges between Bernd Blobel and Ioana Singureanu
- 1.7 SHIPPS Project Outreach Summary
- 1.8 Collecting Data
- 1.9 EHR Functional Profiles
- 1.10 Reality Check
Community-Based Collaborative Care Working Group Meeting
Meeting Information
Attendees
- Ed Coyne
- Mike Davis Security Co-chair
- Jon Farmer
- Suzanne Gonzales-Webb CBCC Co-chair
- Michelle Johnston
- Mary Ann Juurlink scribe
- Jim Kretz
- Milan Petkovic
- Diana Proud-Madruga
- Pat Pyette
- Richard Thoreson CBCC Co-chair
- Ioana Singureanu
- Cliff Thompson
- Ken Salyards
- Serafina Versaggi
- Craig Winter
Agenda
- Roll call, approve minutes December 7th, call for additional agenda items & accept agenda
SHIPPS Project Status
- December 9 Structured Documents Work Group Meeting - SDWG signs on as co-sponsor following minor adjustments to the Project Scope Statement
- Project Insight - milestones added to project plan
Upcoming Holiday Schedule
- December 21, 2010 - Informal Meeting
- December 28, 2010 - NO Meeting
- January 4, 2011 - Next Official Meeting
1. Action Items
ACTION ITEM: Re: CDA R2 Implementation Guide for Consent Directives
- Fill out the DSTU Publication request document and send it to Lynn in order to have it added to the TSC agenda (Ioana)
- Prep the document for posting on the DSTU Comments site. Ioana to provide Suzanne with a post-reconciliation Word file along with any of the other files that were updated (Ioana)
Action item: Close loop regarding outreach with MITRE Corporation (Serafina)
Action item: Follow up with Don Lloyd, confirm publishing of SHIPPS (Mary Ann)
Action item: Follow up with the EHR co-chairs, namely John Ritter, with respect to the Records Management and Evidentiary Support project (Richard)
Action Item: Touch base with Austin regarding voting period – policy to wrap it up by a certain period (Mary Ann)
Action Item: Confirm Jan 23 new project scope deadline (Mary Ann)
2. Resolution
3. Updates/Discussion
CDA R2 Implementation Guide for Consent Directives
- Submit to TSC for DSTU publication; Ioana sent an e-mail to Don Lloyd to confirm that we could submit the Consent Directive CDA Implementation Guide DSTU content. (We have all the approvals and have passed ballot, finished the content updates and ballot reconciliation)
Response received during meeting (from Don):
- The only thing I can't confirm is that you've submitted the document for approval as a DSTU to the TSC.
- Link to the site: DSTU Publication Request form
SHIPPS Project Outreach
December 9th Structured Documents Working Group Meeting – a number of CBCC people attended. During the first hour the eMeasures project was presented and SDWG declared that one of their next steps is to create a PSS for eMeasures
Richard presented the SHIPPS PSS to the SDWG
Serafina: Once SD heard about SHIPPS they thought it had a direct relationship and considerable overlap, so they were quite interested. Aside from some comments to add clarification about the deliverables and the intent of the project (e.g. we are creating a DAM and potentially providing suggestions to the EHRS functional model), only minor changes were required to the PSS.
Changes to the SHIPPS scope statement were sent back to SD to ensure we captured their comments correctly. Examples:
- Need to clarify the definition of the term data quality
- Need to clearly express in the PSS that we are going to identify metadata and minimum data quality requirements, which means we are talking about the readiness of EHR data to be used for automated quality measures, e.g. structured, encoded data
- Indicate what the deliverables will include
- SDWG was invited to send any concerns or wordsmithing they may want to see in the SHIPPS PSS
- It was determined there is an excellent opportunity to collaborate with SDWG
Richard: There is a direct relationship with the President’s Council of Advisors on Science and Technology (PCAST) Report that was released December 2010. There is alignment between what this report is trying to do and what we need done, e.g. we are focused on granularity and they are focused on a data element access service and on being able to share critical data for all sorts of reasons
Ioana: Not sure if we were planning to look at the ONC request for comment on their prioritization framework and their priorities. One of their high-level priorities is to look at quality measures; they are looking for the precise definition of what data is required and how it is used for quality measurement. This is very much in tune with what we are proposing here, e.g. to identify what data out of your EHR you need to fire up the automatic response
Richard: We need to understand what the EHR has to do to produce the kind of tagging and metadata that goes along with a temperature, for example
Ioana: What we care about is the relationship between the different objects – what medication was given in the encounter
Serafina: It is the context that is important
It was determined that we have ONC covered in that we have a close working relationship with them. We are interested in the segmentability of pieces of information in the record and in describing a standards-based way of doing it, e.g. structured, encoded data
Ioana: I just skimmed the PCAST Report; one thing they talk about is atomic information vs record-based, document-centric information. We covered this in Structured Documents: what kind of segmentation can you do with narrative or transcription data – it will be limited to the metadata in the header
Richard: We will need to use natural language processing and apply some ontology
Ioana: If you apply natural language processing you will be discovering new metadata, and you hope the metadata is correct. This is not what the current system is doing, unfortunately. We need to use ‘structured data’ vs ‘tagged data’ to describe what we are doing
Richard: They don’t want to do it this way
Ioana: Yes, it is dangerous, e.g. discovering metadata for the purposes of a Google search so you can find information later on. Do we want to rework the data at all? We need to be aware of the new discussions going on at ONC and PCAST so that as our project progresses we will understand what they are envisioning
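To make the ‘structured data’ vs ‘tagged data’ distinction concrete, here is a minimal, hypothetical sketch (not part of the discussion; the code value and keyword list are illustrative): a coded entry carries its meaning explicitly, while keyword tagging of narrative can "discover" metadata that is simply wrong, for instance on a negated finding.

```python
# Hypothetical illustration: a structured, coded entry vs metadata "discovered" from
# narrative by naive keyword tagging. The SNOMED CT code shown is illustrative only.

structured_entry = {"code": "35489007", "system": "SNOMED CT", "display": "Depressive disorder"}

narrative = "Patient denies depression; no history of depressive episodes."

def discover_tags(text, keywords=("depression", "depressive")):
    """Naive keyword tagging of narrative text -- the risky 'tagged data' approach."""
    return [kw for kw in keywords if kw in text.lower()]

print(discover_tags(narrative))   # ['depression', 'depressive'] -- wrongly tags a negated finding
print(structured_entry["code"])   # the coded entry states the finding unambiguously
```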
Serafina: Is there an update in the SHIPPS PSS to reflect that statement?
Ioana: We don’t want to spend too much time reworking the PSS. We are identifying a moving target
Serafina: I spent a considerable amount of time rewriting another project scope statement (Records Management and Evidentiary Support) as that discussion with the EHR workgroup came about; it also has a tremendous overlap with SHIPPS
- They are really talking about the need within an EHR to have a data profile to accompany the functions
- The Records Management and Evidentiary Support project needs it for all their functions; their model needs to point to some concrete data requirements
- There is a convergence of a number of groups, and that project in and of itself is designed to create awareness and collaboration across workgroups where their work has some impact on the functions of an EHR system
SHIPPS Co-Sponsorship – email exchanges between Bernd Blobel and Ioana Singureanu
(please note that for clarity I’ve included sections from the emails)
Bernd's email: I'd like to comment the project description in the way of expression my surprise about the objective to define a DAM. At least from the perspective of an architectural approach, there is the Security and the Privacy Domain, the domains of medical specialties, the Administrative Domain, etc., thereby being aware that domain can be separated into subdomains or aggregated to superdomains according to the definition of the system in question. This was also the reason for combining the Security DAM and the Privacy DAM to the Composite Security and Privacy DAM. Usually, a domain can be distinguished from the other one by a different ontology (the domain ontology). This usually doesn't mean that any project would lead to a new DAM, as a new scenario is represented by application ontology, but not by another domain ontology. In that context, I'd like to refer to Domain dimension of the Generic Component Model
Ioana’s email response: I believe that Fig. 1 in the project scope statement illustrates the contents of the domain analysis; we are not planning to rework the contents of the privacy policy but to describe the subset of data that would be extracted from an EHRS in order to support privacy and quality measures. We are also looking at the level of maturity of a EHRS that would support the needs of the enterprise in terms of privacy and real-time quality reporting. We are planning to rely on the previously defined DAMs to derive information collection and reporting requirements http://gforge.hl7.org/gf/download/docmanfileversion/6062/7902/HL7ProjectScopeStatementSemanticHealthInformationPerformanceandPrivacyStandard20101214.docx
Discussion on emails:
- Ioana: His question is: are we building another layer on the DAM? Ioana clarified that SHIPPS is using the existing DAM to qualify the maturity of the data (the structure and encoding) and the need for high-quality EHR data to do the data segmentation. In this regard we are relying on what we already learned from having done a DAM and we are layering on top of this. What we are doing net new is examining the quality measure statements from HEDIS, NQF and SAMHSA and looking at what information precisely is required to automate the evaluation of these measures
- Richard: The quality of the data means it can be referenced to a canonical format e.g. SNOMED and it has attributes that are tagged in a standardized way
- Ioana: Basically we have the concepts related to standards and, where possible, we describe this as mature canonical data applicable to automated data segmentation; anything that falls short of that will have issues, e.g. you may have to do some mapping or buy a natural language processor because the data is not ‘out of the box capable’ (see the sketch below). If you are not at the highest level of maturity the risk is you may share too much information or not enough. This was made pretty clear in the scope statement
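A minimal, hypothetical sketch of this maturity idea (the categories and terminology systems are illustrative, not taken from the PSS): data coded to a canonical terminology can feed automated segmentation directly, while less mature data first needs mapping or natural language processing.

```python
# Hypothetical sketch: classify an EHR data element by its readiness for automated
# data segmentation. The maturity categories and code systems are illustrative.

CANONICAL_SYSTEMS = {"SNOMED CT", "LOINC", "RxNorm"}

def maturity(entry):
    """Return a rough readiness category for automated data segmentation."""
    if entry.get("code") is None:
        return "narrative only: needs natural language processing"
    if entry.get("code_system") in CANONICAL_SYSTEMS:
        return "canonical: usable 'out of the box'"
    return "locally coded: needs mapping to a canonical terminology"

examples = [
    {"code": "35489007", "code_system": "SNOMED CT"},       # mature, canonical
    {"code": "DEP-01", "code_system": "local"},             # needs mapping
    {"code": None, "text": "feeling down for two weeks"},   # needs NLP
]
for entry in examples:
    print(maturity(entry))
```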
SHIPPS Project Outreach Summary
- Serafina is to close the loop with outreach to MITRE Corporation
- Mary Ann to confirm publishing of SHIPPS with Don Lloyd, e.g. expected publishing date, ensuring all timelines are met, e.g. TSC approval and the NIB timeline
- Richard: we need to talk to the EHR working group and to Crystal Kallem, who is involved in Public Health Reporting
- Serafina: Many of her comments did question whether we were including research and Public Health. After several follow-ups, and as I didn’t hear back, I took PHER off our list as an interested party for the PSS
- Moved Security from co-sponsor to interested party – a suggestion from both John Moehrke and Bernd
- Structured Documents is a co-sponsor
- Richard to speak with the EHR co-chairs on Friday, namely John Ritter, with respect to the Records Management and Evidentiary Support project. He will ask again; he mentioned that Jim Kretz was participating in the project and is a possible liaison for the project
Collecting Data
(This discussion is in relationship to the SHIPPS Project)
We are eventually going to define granularity for the clinical data we are collecting and applying SNOMED
Ioana: Yes, or in some cases ‘you are collecting a lot more information than we care about’. The information we care about in certain instances would be described in a data profile for a certain function. In other words, you may carry a lot of information in your application, but you need to make sure you carry this information for a particular function if you want it implemented
Richard: detailed clinical model
Ioana: the question is, would we focus on depth of information (granularity) first or breadth of information?
Serafina: I would hope it would be depth first, but I am not sure
Ioana: Either way we could go through multiple iterations
Richard: we are all going to have to do that. If we are talking about privacy, ONC is interested in being able to demonstrate protection of sensitive behavioral information – so we are talking about collecting that information in various settings, not just specialty care. At SAMHSA we are coming up with minimum data set ideas. Now we have to marry privacy concerns with that when that information is shared
Jon: By minimum data set are you talking about a granularity statement or breadth?
Richard: It could be both, think about LTC, what information do you need for nursing care
Jon: I think we should be doing both, not one before the other, e.g. sampling depth to understand what issues lurk and then also doing breadth to illustrate the generality of the infrastructure
Richard: In the context of what the ONC is talking about, it could be a combination of a Beacon community and the VA, and my PTSD information being treated in the community vs the VA hospital. This is a high priority given the consequences to my career in the military if my PTSD is known. This is of interest to the US realm
Jon: If there is a domain of interest that is considered low hanging fruit and is beneficial with ROI this is what we do first. This is how standards are shaken out by people having the right incentives and ROI
Richard: If we dig into clinical ontology, e.g. what pieces of information exist in the EHR in a standards-based way, we can look at how these entities can be grouped in a hierarchy and subdivided according to similarities and differences. By using an ontology approach we don’t have to get into everybody’s legacy system. We don’t need any legacy system; we just work with the standards ontology
Jon: Both (depth/breadth) are canonical: no legacy translation, central standards, e.g. everybody is using the same standard. If we demonstrate that the ontology of that standard is rich enough then people will be motivated to map to it
Richard: There is nothing else out there now
Jon: Standards-wise, people will be motivated to map local concepts
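As a hypothetical sketch of that ontology approach (the hierarchy, local codes and the ‘sensitive’ rule below are made up for illustration), local legacy codes can be mapped onto a standard is-a hierarchy so that a segmentation rule is written once against the standard rather than once per legacy system.

```python
# Hypothetical sketch: map local legacy codes to a toy standard is-a hierarchy and
# evaluate a single rule written against the standard. All codes are illustrative.

IS_A = {  # child -> parent in a toy standard hierarchy
    "PTSD": "Anxiety disorder",
    "Anxiety disorder": "Behavioral health condition",
    "Major depression": "Behavioral health condition",
}

LOCAL_TO_STANDARD = {"LOCAL-309.81": "PTSD", "LOCAL-296.2": "Major depression"}

def is_descendant(concept, ancestor):
    """Walk the is-a hierarchy to test whether concept falls under ancestor."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

def is_sensitive(local_code):
    """One rule, written against the standard, applied to mapped local codes."""
    standard = LOCAL_TO_STANDARD.get(local_code)
    return standard is not None and is_descendant(standard, "Behavioral health condition")

print(is_sensitive("LOCAL-309.81"))  # True: the local PTSD code maps under behavioral health
```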
EHR Functional Profiles
(This discussion is in relationship to the SHIPPS Project)
There is content missing in the functional profiles – we need to come up with data profiles for the EHR. They have a functional profile of EHR functional model constraints – the EHR functional model talks about functions and HL7 talks about data and content. (A sketch of such a data profile follows the list below)
- Ioana: We are talking about the data requirements based on a minimum set of business requirements derived from quality measures
- Serafina: We are talking about quality measures from data that is captured in the EHR; we are not talking about capturing data separately for the purpose of quality. This is supposed to flow directly through the workflow of the normal documentation process that is required to substantiate and to document the delivery of care. And when you talk about that, while there may be differences across organizations, across jurisdictions, etc., there are policies that govern what comprises the documentation
- Serafina: There is a firm relationship with the Records Management working group; what they are talking about is promoting their functional profile, which is inherent in the EHR functional model.
- Ioana: The requirement for record management is represented by their functional profile
- Richard: we want every piece of information that you collect for quality to have clinical use
- Ioana: or directly reused for clinical decision support or for documentation; then you take that same data and focus in on what is required by quality measures, but it is an automated process
- Richard: Through ontology or modeling you need to identify what you need to ensure quality data
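A minimal, hypothetical sketch of what a data profile accompanying an EHR function could look like (the function name, elements and coding systems are illustrative, not taken from the EHR functional model): a declarative list of the data elements a function needs, which a record can be checked against.

```python
# Hypothetical sketch of a "data profile" for an EHR function: the data elements,
# and the coding expected for each, that the function needs. Names are illustrative.

DATA_PROFILE = {
    "function": "Report behavioral health follow-up measure",
    "required_elements": [
        {"element": "diagnosis",      "coding": "SNOMED CT",     "cardinality": "1..*"},
        {"element": "medication",     "coding": "RxNorm",        "cardinality": "0..*"},
        {"element": "encounter_date", "coding": "ISO 8601 date", "cardinality": "1..1"},
    ],
}

def missing_elements(record, profile=DATA_PROFILE):
    """Return required elements (cardinality 1..) that a given EHR record does not supply."""
    required = {e["element"] for e in profile["required_elements"]
                if e["cardinality"].startswith("1")}
    return sorted(required - set(record))

print(missing_elements({"diagnosis": "35489007"}))  # ['encounter_date']
```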
Reality Check
(This discussion is in relationship to the SHIPPS Project)
Serafina: In years past, when data was submitted for quality measures, people would look at the data manually, e.g. at what is in the record, and then abstract the data out; they would not come up with new data. So what we are talking about is having the data structured and encoded in a manner so it can be pulled in an automated fashion. The data is in the record to begin with by virtue of the normal workflow of documenting the delivery of care
Pat: What I heard from previous comments is that we want to identify the quality data points or data elements; we won’t be successful that way. We need to work off the known clinical data set and identify what we can glean from it for quality measures
Ioana: It is the same data, but in the DAM we are identifying the requirements: you need this much data in order to be able to reuse the clinical data
Pat: If we come up with a slew of quality data elements that have little clinical benefit we will be lucky to get any accurate data if we get any data at all. Quality metrics must be derived from information collected because of its clinical relevance
Ioana: The data to compute the quality measure is clinical data, e.g. the quality measure itself will say how many people with this diagnosis received this medication within 30 days. All the data that you use to compute this statement is clinical data. The challenge is to declare generic criteria: what are the criteria and algorithms for measurement?
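A hypothetical sketch of computing a measure of that shape from structured clinical records (the record fields and codes are illustrative): of the people with a given diagnosis, how many received a given medication within 30 days, with both numerator and denominator coming from data already documented during care.

```python
# Hypothetical sketch: compute a 30-day follow-up rate from structured clinical records.
# The field names and codes are illustrative only.

from datetime import date

records = [
    {"patient": "A", "diagnosis": "35489007", "dx_date": date(2010, 11, 1),
     "rx": "RX-123", "rx_date": date(2010, 11, 15)},
    {"patient": "B", "diagnosis": "35489007", "dx_date": date(2010, 11, 5),
     "rx": None, "rx_date": None},
]

def measure_rate(records, diagnosis, medication, window_days=30):
    """Numerator: diagnosed patients given the medication within the window; denominator: all diagnosed."""
    denominator = [r for r in records if r["diagnosis"] == diagnosis]
    numerator = [r for r in denominator
                 if r["rx"] == medication and r["rx_date"] is not None
                 and 0 <= (r["rx_date"] - r["dx_date"]).days <= window_days]
    return len(numerator), len(denominator)

numerator, denominator = measure_rate(records, "35489007", "RX-123")
print(f"{numerator}/{denominator} patients received the medication within 30 days")  # 1/2
```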
Richard: The problem with using a diagnosis from billing is that we are using cruddy data to come up with quality measures. We need to get the data from an EHR, not a billing record; as soon as people know they are being measured on something they will fix their systems accordingly
Richard: Real or imagined consequences will play out in how we use quality measures; we need to create a whole new industry dynamic where people are concerned about how they look, and we also need to have a system dynamic enough that the measures can’t be gamed all the time
Serafina: As measures change or become more refined, if organizations can’t reflect these measures and the data is out of sync for reporting (e.g. HEDIS reporting changes from one year to the next), there is a level of scrutiny that is applied to the data. This will have to be factored into this whole approach. People are changing their systems and how they are reporting. Every organization wants to achieve as high a quality as possible
Richard: we want to move to outcome measures as soon as we can and so we are not just talking about process measures
Meeting was adjourned at 3:05 PM Eastern