October 6th 2009 Security Conference Call
Security Work Group Weekly Conference Call
Meeting Information
Attendees
- Pat Christensen
- Bernd Blobel, Security Co-chair (absent)
- Steven Connolly
- Tom Davidson
- Mike Davis, Security Co-chair
- Suzanne Gonzales-Webb, CBCC Co-chair
- Allen Hobbs
- Rob Horn
- Glen Marshall, Security Co-chair
- Rob McClure
- John Moehrke
- Pat Pyette
- Richard Thoreson, CBCC Co-chair
- Ioana Singureanu
- Serafina Versaggi, Scribe
- Tony Weida
Agenda
- (05 min) Roll Call, Approve Minutes & Accept Agenda
- (05 min) Announcements
- (75 min) Two proposed use cases discussion
- Security use case 1.10, Accounting for Disclosures: a new use case submitted by Harry Rhodes (AHIMA). Discussion was postponed because Harry was unavailable today. This use case may be important for the PASS Audit Service specification but would require some information modeling. Feel free to suggest changes/corrections/additions to the summarization.
- Security use case 1.9, Negotiate Privacy Policy: an existing use case posted prior to the Atlanta meeting but not discussed in the joint Security/CBCC session; the topic for discussion today.
- (5 min) Other Business
- Security use cases can be found on the Security Wiki
Announcements
Mike: Outcome of today’s Steering Division Meeting
- The Scope Statement for the Security DAM passed both the electronic vote and the vote in the meeting; the project is officially approved.
- Mike forwarded the notice from Lynn Laasko to Suzanne.
- The Security DAM is now a formal project. The goal is to develop a DSTU for the January cycle. Work needs to be focused on this January deadline.
Discussion
The upshot of today’s meeting was to focus the group’s attention on developing use cases only for any classes that might be missing from the Security DAM, so that we work only on use cases that provide adequate coverage for the DAM. There is not a lot of time to complete this ballot, so the set of use cases must cover the information included in the Security DAM and enable the Privacy DAM as well. While other use cases are interesting from the perspective of the mechanics of health care interfaces with the Security system, they are not helpful in creating the Security DAM. This includes the Negotiate Privacy Policy use case on today’s agenda.
The following is a summary transcript of the discussion. In particular, note the definition for negotiation as defined by Mike Davis, since this has been the topic of discussion in many calls over the past couple of months.
Accounting for Disclosures Use Case
We had not planned to go into the Accounting of Disclosures use case because Harry Rhodes (who presented this use case) was unable to make the call. But Glen had concerns about the positioning and ownership of the Accounting of Disclosures use case:
- Accounting of Disclosures use case is really two use cases
- Audit Data Collection: which is within the Security Work Group’s purview
- Disclosure Accounting, which has application aspects that are more properly expressed in the Patient Administration Work Group. This is not a tool or building block.
- Mike: does this have anything to do with the use of Audit to support Accounting of Disclosures?
- Glen: Yes, this is the Audit Data Collection part.
- Anything that hints of direct access to security (audit data) needs to be challenged. We need to deal with some services that supply data to a filter or consolidation.
- PASS needs to focus on the Services required to record audit data and to obtain audit data and pass it to applications. It should not deal with the reporting.
- HL7 Audit Data Collection cannot fork from the mature efforts and concepts in HITSP TP15, IHE ATNA, RFC 3881, and the forthcoming standard out of ISO.
- Ioana: What Harry is suggesting is not a new standard, but support for Accounting of Disclosures through a similar logging mechanism to log the disclosure of information.
- Glen: You also need to deal with the transport associated with this.
- Ioana: in the Security DAM analysis you’d specify what the Disclosure log entry would look like rather than the mechanism by which the disclosure is stored in a log file someplace.
- Glen: there's zero reason for doing anything like this other than referring people to the existing documentation.
- Ioana: As long as the documentation includes things like purpose of disclosure as part of the log entry, this makes it easier. We want to check that the data elements required by HIPAA are supported (see the sketch after this list).
- Glen: Yes, they are; see the link to HITSP TP15.
- HITSP TP15 is US-domain. Should we be referencing a US-only construct?
- Glen: All of the work that has been done relative to the original RFC by the various standards committees has been internationalized. HITSP’s TP15 starts to get into the US domain and binds it to US policies. For global references, refer to IHE ATNA, DICOM, and RFC 3881 (Security Audit and Access Accountability Message XML). But anything that references HIPAA is US domain, which is why TP15 was mentioned.
- Pat Pyette: Many jurisdictions outside of the US require this accounting, so if this use case is generalized, it will be universal.
- Mike: It may be appropriate to take the use case to the PASS Audit group to ensure they have the constructs and capability to deal with this particular interface to the Disclosure system.
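To make the preceding discussion concrete, here is a minimal sketch of what a disclosure log entry might capture, including purpose of disclosure and the other data elements HIPAA’s accounting-of-disclosures provision asks for. It is illustrative only: the class and field names (DisclosureLogEntry, to_audit_event, and so on) are hypothetical and are not drawn from HITSP TP15, RFC 3881, the Security DAM, or any PASS specification.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DisclosureLogEntry:
    """Hypothetical shape of a single accounting-of-disclosures record.

    Field names are illustrative; an actual entry would be profiled
    against TP15 / RFC 3881 / the Security DAM rather than invented here.
    """
    disclosure_time: datetime              # when the disclosure occurred
    patient_id: str                        # subject of the disclosed information
    recipient: str                         # person/organization receiving the data
    description: str                       # brief description of what was disclosed
    purpose_of_disclosure: str             # e.g. a coded purpose-of-use value
    disclosing_user_id: Optional[str] = None   # who released the information
    source_system: Optional[str] = None        # audit source (system of record)

def to_audit_event(entry: DisclosureLogEntry) -> dict:
    """Flatten the entry into a generic audit-event dict.

    This stands in for handing the record to whatever audit data
    collection service PASS ultimately defines; it is not a real API.
    """
    return {
        "event_type": "disclosure",
        "event_time": entry.disclosure_time.isoformat(),
        "patient_id": entry.patient_id,
        "recipient": entry.recipient,
        "description": entry.description,
        "purpose_of_disclosure": entry.purpose_of_disclosure,
        "disclosing_user_id": entry.disclosing_user_id,
        "source_system": entry.source_system,
    }
```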
Negotiate Privacy Policy Use Case
- Mike: This use case is very much in line with what SOA Authorization is doing. They have some workflows and Pat has been working on this. Within the scope of this particular effort for the DAM, what do we expect to get out of this use case? It seems like this is a manual process that has to accommodate the Privacy DAM and some kind of translation into the Security DAM. There is some indication for joint crossover vocabulary. But otherwise, there is nothing special in a manual process use case compared to an automated process use case with respect to the information model itself. The underlying information model doesn’t care.
- Rob McClure: What makes the manual case interesting is that there are two interfaces to the DAM. One provides the current state of the automated process to the manual process, so that when they are doing the manual negotiation they know what the automated state is or would be.
- Mike: But that seems out of the scope of the DAM.
- Rob McClure: It’s an interface to it, part of the behavioral model. Another thing might be to capture the results of the manual negotiation process because that might change the state.
- Mike: The patient is making a manual request in writing or verbally, and we have these models and workflows in SOA already. There is a negotiation over what the organization will accept in consideration of organizational and jurisdictional policies, and then a translation of the patient-facing vocabulary from the CBCC Privacy side to the run-time rules engine in the Security model. There is an interface there: a mapping of terms from one domain to another (privacy to security); this defines the thing called the Constraint Catalog. This use case informs that. Whether the process is manual or automated doesn’t affect the use case; we would have both.
- Rob McClure: It might be worth mentioning. Some unstructured human negotiations come up with bizarre policies. These may be hard to represent.
- John: If all we need to do is indicate that we will be modeling this cognizant of the fact that we have out-of-band manual processes, that may be all that is asked for here. It might not affect the Security DAM, but it may have been envisioned that this was done entirely through automated processes. This could expose the fact that we knew we would have to have manual intervention capabilities. That may be all it takes: just to recognize that the other side of the interface to the Security DAM might be a manual process.
- Pat Pyette: Is it appropriate to simply track this as an assumption for building the DAM or functional models: that there may be manual processes in the creation and negotiation of Consent Directives or any policy?
- Mike: It’s a foregone conclusion there will be manual and electronic/automated processes. The reality is that not all clients will be responsive to an electronic solution. The key thing this use case gets into is the interface between the Privacy DAM and the Security DAM. There is a project to create a Consent Directive based on CDA R2. It seems like, whether the processes are manual or automated, there will be some effort to use similar vocabularies to express those Consent Directives. We’re interested in the interface where the patient’s accepted Consent Directive is brought into the Security system for enforcement. Not sure how we can ensure that we won’t miss vocabulary unless we have a notion of what’s in a Consent Directive.
- Rob McClure: It’s not so much a difference between what would happen in an automated environment versus what would happen if two people are talking to reach some conclusion; it’s that the result of whatever that process is, is a potentially unique (to that patient) representation, and that would mean there would be something unique about the permissions that would need to be enforced. At one time, it was thought that the permission catalog could be rendered as a set, and things outside the set wouldn’t be allowed. It could be a set because 99% of the world would choose from what would be a relatively small set. But if you allow for these types of negotiations, you allow for the possibility that the result will fall outside of the existing permission catalog set. This is the most significant point: these negotiated “things” may not fit the vocabulary or structure of the DAMs.
- Pat Pyette: The question is, would this have any hope of ever coming to reality? Jurisdictional policies and organizational policies are in place. To have a patient come in and say, "I don’t like any of these policies" – there is far more to it than having the patient create a new policy with the Security Officer.
- Rob McClure: What makes it feasible is that it’s a mutual negotiation: policies that are negotiated can be constrained to ones that can be represented.
- Pat Pyette: It’s not just the patient sitting with the Privacy Officer negotiating. If the fundamental policy does not exist, the Privacy Officer would never be at liberty to make the exception. There are a number of processes that would have to take place to get to that point. I’m having trouble understanding how that initial negotiation between Privacy Officer and patient would result in something for which there is no policy.
- Rob McClure: Agreed. There’s a fundamental fork being discussed that the use case doesn’t adequately express. This is not a perfect example: the rule/policy states: all physicians directly involved in care have access to all a patient’s records. Now the patient requests, I don’t want this one physician to have access to my records. This is a restriction on the All Physicians rule, to say something specific for a particular physician. If this is allowed within the policies, you can restrict to an individual physician. That would be unique, but it would only be unique in that it’s just a restriction on an allowed policy. What this use case suggests is that if one could resolve both of these individuals satisfactorily (Privacy Officer and patient), the physician gets access to some but not all of the patient’s records. But that’s not in the allowed policy, because the first sentence of the use case states that the rule is any direct patient care physician has access to all the patient’s records. So what’s different about this use case is that one could resolve both individuals satisfactorily, but that resolution isn’t allowed in existing policy. The major question brought up by this use case: this isn’t simply a restriction on a type. The use case rule says all and you can’t restrict all without changing something. If that is what this issue is about, and it doesn’t matter whether the negotiation is manual or automated, it would be important to identify this as a key element of this use case and figure out how to support it.
- Mike: I don’t know exactly where this is going. The interesting part would be what portion of the Privacy part maps to the Security part. It doesn’t make sense to spend time on something that is very unlikely to present (less than 1%). We’re trying to build vocabulary that is useful for interoperability purposes. That is going to rely on the two DAMs and the underlying vocabularies that support them, including things like the HL7 Permissions Catalog. The security system cannot enforce things that it doesn’t know about. Theoretically those things are defined in the HL7 RBAC Permission Catalog, at least for health information objects related to the EHR, so we’ve hitched our wagon to this. That doesn’t mean someone can’t create proprietary objects on their own; our efforts are going to get bogged down if we try to solve that problem for the world. We should spend our time on the 99%: use the vocabulary and information models that make sense, that are going to be enforceable, and that are going to provide interoperability support. We’re parsing this very tightly. All security people would say we’re not writing the policies here. The security system can enforce any reasonable policy that it can write a rule for to control the objects it is protecting. It is irrelevant what policy is written. We’re only trying to define the vocabulary to support the realm of possible policies regarding patient preferences and the objects that the security system protects. I’d like to take this discussion off the table. It won’t help us get the ballot out. I’m looking for things that contribute to an understanding of whether the information model is complete or whether we’re missing components. Let’s keep this focus. We don’t have a lot of time.
- Steve Connolly: The automated policy resolution use case deals with conditions where all of the attributes of the classes are known. In the negotiation case, not all the attributes are known. This is the clear difference.
- Mike: I define negotiation differently. I define negotiation as the discussion of the patient’s presented draft (a proposed policy, not yet accepted) against the organization’s and jurisdiction’s policies. The negotiation is to determine the contract: what the organization decides to enforce. There are two sets of policies: the patient’s proposed policy preferences, and the harmonized one that becomes part of organizational policy because the organization accepted it. They may have rejected parts of it. How that happens doesn’t matter. In the end, there is a stored policy that is authoritative (see the sketch at the end of this list).
- Glen: If there is a way to say this one is special in the DAM, you can deal with the 1-in-1000 cases by saying those don’t interoperate (don’t realize them; resolve them manually). The other case seen most often is negotiation over court orders. In those negotiations, the Privacy Officer is in a weak negotiating position. About all they can do is tell the judge, "if you do that, I can’t treat the patient and they’ll die." There isn’t much time to negotiate and you need to accept what the judge tells you to do. That’s a 1% case, and if we have a mechanism to deal with this 1%, that’s acceptable.
- Mike: We are going to have to deal with this in an automated system regardless. This court negotiation doesn’t add new information; it’s just another way of presenting an authority that has a say and that potentially modifies a patient’s request. What has to happen is that, if it’s going to be supported by an automated system, the policies, wherever they come from, if they are deemed authoritative, will have to go into the store that the Security system draws from, and the vocabulary comes from the information model. There is a harmonization. There are three elements in this use case that are useful. First, the patient presents something: their preferences. The request is not authoritative. Second, it is received by the organization, and they consider it within the context of the patient, the care, their policies, and the jurisdiction. Third, they create the policies in the Security system that is going to enforce whatever the deemed policy is. We don’t need the details of the policies, but we have to examine these use cases from the point of view of whether the DAM adequately represents the information needed to express those policies. For this use case, it’s simply a matter of mapping Privacy concepts into Security concepts. Nothing in this use case implies new vocabulary or that the DAM is insufficient. At the Working Group Meeting we decided that we would not continually iterate interesting use cases; we would only create new use cases that provide adequate coverage for the Security DAM. Come up with a set of use cases that covers the information in the DAM. These are interesting discussions about the mechanics of the health care interfaces with the system side, but they are not helpful in creating the Security DAM.
- Steve: There is a use case in the Privacy DAM for Break Glass. Break glass fits somewhere in between these two use cases, in that it is a symptom of a situation where there is no chance for automated negotiation and no time for manual negotiation to occur. So what you do is override existing policies to be able to access the information that you need. This use case is already included in the Privacy DAM.
- Mike: We have a set of use cases and we should map these use cases to the existing Security DAM. For the classes that are not covered by a use case, we need to write a use case. Any new use cases brought forward should make it clear what is different about this use case; what is not currently covered by the current vocabulary and information model. What I don’t know is whether analysis has been done to show coverage of the use cases to the DAM. We only need to develop use cases for any classes that we might have missed.
- Pat Pyette: In addition to vocabulary, if it calls out new behavior or functionality, it would be valuable as well since the Security DAM feeds the PASS project.
- Mike: I don’t know if it’s the responsibility of the Security DAM project to develop those use cases. It’s the responsibility of SOA, of Authorization Services, to develop them. For further discussion.
- Pat Christensen: Had been unable to comment during the call because her audio was not working. From the Privacy perspective, Privacy Officers are not available in hospitals at all times to accommodate this use case. There is no option other than to abide by a court order. If the patient doesn’t want the organization to release information to law enforcement, we can’t abide by the patient’s request.
- Steve: some of the policies have been negotiated in advance. The Purpose of Use is normal treatment.
- Pat Christensen: Scenario: a patient presents in the emergency room and is admitted to the hospital. Steve’s response: this use case refers to “normal treatment”, not an emergency. Pat’s response: but now the patient is admitted and is no longer considered an “emergency”.
- Glen: This scenario speaks to a number of legitimate policy variants. Do these policy variants add anything to the conversation if we are constructing the work so that it’s policy agnostic?
- Mike: Goal is to develop vocabulary, not to solve the issues related to these scenarios. If the patient comes in and there’s information at another organization, it may be important to get those authorizations if they exist.
- Pat Christensen: The second and third paragraphs of the use case are not acceptable and need to be clarified.
- Mike: On the next call we need a milestone that we can achieve. One of those milestones is a complete listing of the use cases that we have and what is covered in the DAM for those use cases. Perform a process similar to what was done for the RBAC Permissions Catalog. Identify any gaps where additional use cases are needed. I applaud those who have been developing these use cases, but for any new use cases, we need to identify the part of the DAM where a use case is adding vocabulary that is not already covered. All the use cases definitely belong in the SOA realm, and they should all be provided to SOA because they may represent a capability that SOA needs. We can’t anticipate what SOA needs, so turn those use cases over to them and let them use them or not.
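To restate Mike’s definition of negotiation in concrete terms (a proposed, not-yet-authoritative patient policy evaluated against what the organization and jurisdiction can enforce, with the accepted portion stored as the authoritative policy), here is a minimal sketch. All names in it (ProposedConstraint, ENFORCEABLE, negotiate) are hypothetical and do not come from the Security DAM, the Constraint Catalog work, or PASS; it only illustrates the accept/reject split discussed above.

```python
from dataclasses import dataclass
from typing import Iterable, List, Tuple

@dataclass(frozen=True)
class ProposedConstraint:
    """One patient-proposed restriction, expressed in privacy-side vocabulary."""
    action: str   # e.g. "deny-access"
    target: str   # e.g. "psychotherapy-notes"
    subject: str  # e.g. "all-physicians" or a specific provider id

# Hypothetical stand-in for the organization's permitted policy space,
# i.e. the constraint "shapes" the security system can actually enforce.
ENFORCEABLE = {
    ("deny-access", "psychotherapy-notes"),
    ("deny-access", "substance-abuse-records"),
    ("deny-disclosure", "all-records"),
}

def negotiate(proposed: Iterable[ProposedConstraint]
              ) -> Tuple[List[ProposedConstraint], List[ProposedConstraint]]:
    """Split the patient's proposal into accepted and rejected constraints.

    Accepted constraints become part of the authoritative, stored policy;
    rejected ones fall outside what the organization's catalog can represent
    (the "1%" cases discussed above, which would be resolved manually).
    """
    accepted, rejected = [], []
    for constraint in proposed:
        if (constraint.action, constraint.target) in ENFORCEABLE:
            accepted.append(constraint)
        else:
            rejected.append(constraint)
    return accepted, rejected

if __name__ == "__main__":
    draft = [
        ProposedConstraint("deny-access", "psychotherapy-notes", "Dr. X"),
        ProposedConstraint("deny-access", "billing-records", "all-physicians"),
    ]
    accepted, rejected = negotiate(draft)
    print("accepted (stored as authoritative policy):", accepted)
    print("rejected (manual follow-up):", rejected)
```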
Action Items
- Suzanne & Pat Christensen will go over the Negotiate Privacy Policy use case offline. Security can analyze Pat’s comments and determine whether the updated use case introduces new vocabulary.
The meeting was adjourned at 3 pm EST. No significant decisions or motions were made.