October 13th 2009 Security Conference Call
Security Work Group Weekly Conference Call
Meeting Information
Attendees
- Bernd Blobel, Security Co-chair (absent)
- Steven Connolly
- Tom Davidson
- Mike Davis, Security Co-chair
- Suzanne Gonzales-Webb, CBCC Co-chair (absent)
- Allen Hobbs
- Don Jorgenson
- Glen Marshall, Security Co-chair
- Rob McClure
- John Moehrke
- Milan Petkovic
- Pat Pyette
- Scott Robertson
- Ioana Singureanu
- Serafina Versaggi, Scribe
- Tony Weida
Agenda
- (05 min) Roll Call, Approve Minutes & Agenda
- (85 min) Deadlines for January Ballot: Milestones
Upcoming deadlines for the January Ballot
- Notification of intent to ballot: Sunday, November 1
- Final deadline for draft submission: Monday, November 30
- January Ballot 2010 opens for voting December 7
Action Items
1. Please review the Use Cases on the Security wiki site.
- Ioana has updated the Use Cases on the Security wiki site http://wiki.hl7.org/index.php?title=Security_Use_Cases
- Changes include additional diagrams and a new "Information" heading for each use case. The Information section shows (graphically, using a class diagram) which classes and associations are required to support that use case.
- There are two use cases that do not appear to have explicit information requirements:
- 1.5 Enforce authenticity of legal healthcare documents http://wiki.hl7.org/index.php?title=Security_Use_Cases#Enforce_authenticity_of_legal_healthcare_documents
- 1.6 Enforce secure exchange of health records http://wiki.hl7.org/index.php?title=Security_Use_Cases#Enforce_secure_exchange_of_health_records
- Analyze the use cases to ensure the information model is fully covered (every class is associated with one or more use cases, and every use case references classes found in the information model).
- Cross-reference the Security DAM use cases to classes; a sketch of such a coverage check appears after these action items.
2. For those who would like more time to discuss the DAM and ballot process each week, Mike will schedule an additional meeting for Fridays, 11 AM EST. This meeting is optional. Tuesday’s meeting will continue to be the primary forum for discussion and Friday’s meeting will be used only to elaborate on Tuesday’s discussion if necessary.
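To make the coverage check in action item 1 concrete, the sketch below shows one way the cross-reference could be tabulated. This is a minimal, hypothetical illustration: the class and use-case names are placeholders, not the actual contents of the Security DAM or of the wiki use-case inventory.

```python
# Hypothetical coverage check for action item 1: which DAM classes are
# exercised by which use cases, and where the gaps are. All names below
# are illustrative placeholders, not the actual Security DAM contents.

# Which information-model classes each use case references.
use_case_classes = {
    "Enforce privacy policy and consent directives": {"ConstraintPolicy", "StructuralRole"},
    "Enforce authenticity of legal healthcare documents": {"DigitalSignature", "Resource"},
}

# Classes declared in the draft information model (placeholder set).
dam_classes = {"ConstraintPolicy", "StructuralRole", "DigitalSignature", "Resource", "SecurityAudit"}

# Every class referenced by at least one use case.
referenced = set().union(*use_case_classes.values())

# Gap 1: DAM classes not supported by any use case.
uncovered = sorted(dam_classes - referenced)

# Gap 2: classes cited by use cases but missing from the DAM.
unknown = sorted(referenced - dam_classes)

print("Classes with no supporting use case:", uncovered)
print("Referenced classes missing from the DAM:", unknown)
```

Either list being non-empty points to a gap to resolve, by adding or elaborating a use case or by revisiting the information model, before the draft submission deadline.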
Minutes
Today’s call was used to review milestones, agree on a deadline for completing the use cases, and highlight any additional work needed for the Security DAM. Outstanding items include:
- an inventory of use cases
- any new use cases that address classes in the model that have not yet been covered or that contribute new vocabulary required to support health care security and privacy
- identification of gaps: classes within the DAM that are not associated with use cases, and significant use cases for which there is no class in the DAM
- The DAM will include commentary in the introduction alerting readers that the use cases contained in the document are specific to health care and illustrate the classes contained in the information model. Security use cases that are applicable across industries/domains have not been included in the DAM.
Discussion:
- Glen: Some of the use cases are very generic – e.g., security administration is the same idea across industries. Do we want to focus on only those use cases that are health care relevant?
- Yes – focus should be on only those use cases that are health care specific. We do not want to reinvent the wheel
- Glen: We need to be prepared to respond to public comments from people who are not aware of the generic issues, so that we are not hit during ballot reconciliation with a number of generic use cases to disposition.
- Ioana: The reason for calling out specific use cases is to have a basis for the domain analysis. We can then point commenters to the use cases from which we drew our information and terminology analysis.
- Mike: Scope: we can’t create an exhaustive list of use cases. What we need to do is just what we did for the RBAC Permissions Catalog: identify use cases that support the permissions being presented, that is, use cases that cover all the classes in the information model and no more. Using the RBAC Permissions Catalog process, a use case either cites the class or the class is apparent from the object being referenced; we identify the class name in brackets and highlight it. That lets us look up a use case from a class or a class from a use case.
- John M: When we go out to ballot and start to engage the wider audience, how do we express to them that everything else has already been dealt with? How do we quickly disposition comments to indicate that, yes, this issue has already been addressed? This is not something we have to deal with now, but we have to be ready to scope and present the ballot material carefully.
- Mike: What we’re missing here is a process document, like the one that was done for RBAC, but we don’t have time for that here. This is an informative part of the ballot; the use cases are not normative. We’ll just have to qualify them, to say this is not exhaustive, it is supposed to be representative, etc.
- Ioana: This will follow a process similar to the one we used for the Privacy Consent DAM. We included the use cases, and when comments came in during ballot review that we hadn’t accounted for a specific use case, we referred commenters to the use cases that were taken into account, so I suspect this will work here too. The question is how much time is needed to review the use cases already on the wiki and to provide additional use cases if necessary. Another week? Two weeks?
- Steve: Mike just submitted three additional use cases.
- Mike: We need to make sure we have the use cases that we require. What is the coverage of the classes in the information model by the use cases that we currently have? We don’t know what the gap is. For instance, use case #4 covers the Structural Role class… the objects in the information model. When we are done, we should have use cases that cover all the classes.
- Ioana: There are two steps: 1) allow participants in the Security WG to review and provide more input if necessary (this is where we want to limit the time); 2) once we have identified and prioritized the use cases in scope, elaborate those use cases, including identifying gaps in the information model, adding process flows, etc.
- Mike: We don’t need to solicit new use cases until we know whether there are gaps. So the first step is to analyze the use cases we have to see what classes are covered, and then create use cases to fit the remaining classes if there are any.
- John M: The real purpose of a use case is to identify gaps. To say “this is a gap, therefore what’s the use case?”… that happens, but that’s reverse engineering.
- Mike: we have a draft information model. We need to have use cases for the classes already identified. The gap I’m talking about is the gap between the use cases and the classes that are covered in terms of the current model.
- John: I was reacting to excluding use cases that do not have an already identified gap
- Mike: I’m actually not too interested in those use cases. We have to have a cut off date. I propose that we set a cut off for new things, and put those new things, if they occur, into the new action pile for next time. It’s more important to focus on getting this ballot together. The leadership has to make the decision. Focus on getting the work done, stop the development. Here’s what we need by a particular date. I would be happy if we could see that we have use cases that cover each of the classes represented in the information model. Not one use case per class, but a use case that bundles several classes together.
- Ioana: One thing raised some questions in Atlanta: can a constraint policy be considered logically equivalent to a user’s data consent directive? You and Bernd seemed to have different views. Have you had the chance to discuss this with Bernd? If we’re going to look for coverage in this information model, can we assume that the constraint policy, while it might be an organizational policy, might also be used to express a user’s consent directive?
- Mike: This is a tough question; I’ll ask Glen and John to weigh in on it. The meaning of Consent Directive floats around a lot. What we’re talking about here in Security policy is the authoritative policy that has been agreed to by the organization, the policy that they are going to enforce. This might be different from what the Privacy DAM might have for what a patient is requesting. So if consent directive means the authoritative policy, and that’s what’s in the constraint catalog, then the terms are equivalent; otherwise they’re not.
- John: I would caution against trying to pigeonhole consent policies in any particular area. In certain organizational constructs, a consent policy can do far more than what it would do at an organization such as the VA.
- Mike: We have to be careful with this vocabulary. We’ve been using the constraint catalog as the mechanism for expressing what these policies are in terms of the Security system. Unless we’re in agreement that it means the same as a consent directive...
- Glen: could we qualify the term policy here as Security Policy?
- Mike: this is the Composite Security and Privacy policy. It is the composite organizational/jurisdictional and privacy policy that is presented to the engine for enforcement.
- Ioana: would it be fair to say that in some situations, a specific patient’s consent directive could be considered, and if the organization allows it, they would issue another type of constraint policy? So from an enforcement standpoint, if I allow my information to be exchanged between VA and KP, and I express that consent, then it would be enforced like a constraint policy would be enforced?
- Mike: The representation of that would be a constraint policy, and then it would be combined. The overall jurisdictional, organizational, privacy, and security policy represented here becomes part of the composite policy. On the privacy side you may have a different view…
- Glen: I see there is another domain that emerges when you want to talk about convergence of privacy policies and security policies because of the different subjects that are involved. They do wind up being a composite policy. But do we need to break these up to show the interactions and necessary messaging that leads to the construction of a unified policy?
- Mike: I hope not; what we’re showing is the intersection of those things, not how they are created or the management processes that put them together. As we’ve been describing in the SOA models, a patient expresses a preference to the privacy management capability, that preference is organized and accepted, and then you have the accepted policy. That then has to be passed to the security management service, which is already dealing with organizational and jurisdictional security and privacy policy; it is then harmonized and incorporated into that composite set. We’re covering the description of that more in the SOA work. SOA is also creating their own DAM; they’re basically taking this in as the authoritative overall picture and then providing the context that you’re looking for there.
- Glen: As long as the way they’re constructed is described somewhere, you’ve satisfied my concerns.
- Mike: this gets that kind of thinking into the right group. Many of the same people in this work group are also participating in the SOA work group
- Ioana: PASS-Alpha will also be using this work as well as the Composite Privacy DAM as they are responsible for putting together an entire conceptual model of these areas.
- Mike: Timeline: the timeline was established in the project scope statement that was just approved by the steering committee in September, but we had the scope statement approved a long time ago, and we’ve been working against those deadlines. What we have to recognize is that CBCC and their information model are in the second phase; they just balloted in September. We’re trying to get to a harmonized information model, but we can’t get there in one step. We need to take the information model as soon as possible and start looking at some domain profiles for what value sets need to be put into them; those are going to have immediate use in other activities. Recognizing that the May meeting will likely have limited participation (because it is out of the country and many won’t be able to travel), this basically leaves us with September for a harmonization ballot, where we either formally bring Security and Privacy together into one model or we clearly identify and incorporate any new work that comes out of the January ballot process. Whatever road we choose – identified, abridged classes or a single unifying model – we ballot it in September. The urgency comes from the fact that this material is needed yesterday; even if the ballot is rejected, we get the information necessary to carry it forward and potentially re-ballot in May. All of the SDOs are being pressured to produce something quickly. This DAM is very constrained and is based on international standards; many of the classes come from existing standards, and we can argue that these are incontrovertible because they have been brought in from ISO, while others are influenced by the Privacy work. If we focus our attention on getting through the obstacles that we have, we can make a January deadline for DSTU. It’s an aggressive goal, but we’re not going to get there if we push the deadlines out. The ballot is in January, and the HL7 process is such that we have to submit a notice of intent to ballot and the draft material; November 30 is the final deadline for the draft submission. Ioana is using the Privacy DAM ballot as a template for this work. We have the information model and the use cases; I don’t know what more we need.
- Ioana: We will be elaborating the use cases and will include the interactions that may be required to support them; this will be background information. We’ll have detailed documentation for all the classes and attributes in the information model so there will be no ambiguity. We’ll also have an analysis of the terminology that is required to represent the coded concepts, and sample value sets in user-friendly terms. We will not do the profiling that Mike is referring to until we have identified a target locale for localization.
- Mike: the work that needs to be done is known and scoped and is not that much. We have a significant amount of time to continue to work on it. November 30, we make it or we don’t make it.
- John: that’s 4 or 5 weeks and with a weekly call, I doubt we can hit it. More than likely the useful comments are going to come from the same committee members who would be participating
- Mike: I’m more than happy to have another call on Fridays, 11 AM EST if that would be useful
- John: My request was not for another meeting. It was: why not shoot for the next ballot pool as opposed to the January ballot pool?
- Mike: Does it make sense to include some discussion of the Security model in the SOA Work Group? SOA is also going to ballot in January which to a certain extent overlaps with this effort, leverages it, etc.
- John: I thought based on discussions in the ArB that DAMs were not required to go to DSTU. They were surprised when we pushed the Privacy DAM through the ballot process.
- Ioana: You are correct, it is not required. The main applicability of DAMs is that they allow us to constrain other standards. So, for instance, if XACML becomes the way we exchange access consent, it is important to have a basis upon which we constrain XACML to be used in a certain way.
- John: The DAM by itself does that; it does not have to be balloted to do this.
- Mike: Say, for example, that in future versions we would like the XPA profiles for the various OASIS standards to refer to and build on this model, and to constrain other standards to it. Unless it’s balloted, that’s not going to happen.
- Ioana: If it’s balloted informative, it does not have the same status outside of HL7 as you’d want. So if OASIS is to base their work on an HL7 standard, they would probably be very reluctant to base it on one that has only been balloted informative.
- Mike: I wish we had had an information model in XPA; we winged it as best we could. This will put that kind of work on a much more solid footing.
- Ioana: The January deadline allows us to make progress. If we don’t make this deadline, there is another opportunity to ballot. But there is another value to a ballot: a lot of people who have an interest in Security do not attend the work group calls, and a ballot gives those who cannot attend the chance to weigh in with comments. Without balloting we leave a lot of people out of the process. The value of this process is to leverage the expertise of the people who can call in as well as those who cannot. Whether we make the January deadline or a later one matters, but the idea of skipping the ballot entirely doesn’t make sense. There is a significant amount of new work here.
- Pat: I think John’s question was not whether to ballot but the value of a DSTU ballot. I have some issues with this as well, because at the conceptual level it will be relatively unstable for a few years at least. Where you want stability and normative ballots is when you can apply constraints to it at the platform-independent layer.
- Ioana: I think that the ArB considered the question quickly and dismissed it. The PASS project is supposed to create a conceptual model from which to derive a platform-independent model... they are all interdependent models, and if the earliest one of them is only draft for comment, then all the subsequent models are considered invalid. We’re still discovering where things are going with the SAEAF-based project, and considering the domain analysis as the basis for profiling other standards gives it sufficient grounds for a DSTU level. If you are committed to model-driven architecture, it also makes sense for an alpha project for SAEAF to need a stronger conceptual model on which to base the platform-independent model. If everything in the conceptual model is hand waving, how do you know that your PIM is solid?
- Pat: There’s a significant difference between hand waving and a ballot for comment that has gone through peer review and received comments.
- Ioana: In a draft for comment you don’t have to address any of the comments you receive, and with an informative ballot you can approve it with 40% negative.
- John: I don’t mind going through DSTU; I agree with the comments. It’s just the timing that has me concerned.
- Ioana: It is true, this is the shortest cycle we’ve ever had, and everyone is coping with the same kind of time crunch.
- Mike: we do have an active convergence of experts working on this now and there is no guarantee they will be available later
- Steve: In the last few minutes I could spend a minute explaining the two use cases. I just wanted to point out that this is a graphic representation of the two core use cases for the Security DAM. The first use case goes from requester to resource, and the second proceeds in the opposite direction and assures the authenticity of the resource that is provided to the requester. A lot of people focus on the first use case, but the second doesn’t get as much attention.
- Pat: Does this still apply for a put, for the creation of a resource? Would it not flow both ways?
- Steve: it does flow both ways
- Pat: Enforcing authenticity – you would always want authenticity from the source to the destination; where that’s enforced is the question. If, as a requester, I say “please store this, and here’s my signature to prove that it’s from me,” is the enforcement on my side, or on the side where the resource gets stored, so that it can attest that, yes, it got this from Dr. X, and that when it delivers it to someone else it has not been changed in any way?
- Steve: This will be implemented based on the Security DAM, but the Security DAM is not where this will be spelled out.
- Pat: I was just reacting to your assertion that enforcement of authenticity flows from resource to requester. I’m just saying that it probably goes both ways depending on the operation you’re going for.
- Steve: there is some symmetry here. The first use case authenticates the requester; the second use case authenticates the resource in very high-level terms. I am making the assertion that the two primary use cases that scope the Security DAM are #3 (Enforce privacy policy and consent directives using access control) and #4 (Enforce authenticity of legal healthcare documents). Are these two use cases the appropriate use cases that constrain this DAM?
- Mike: we should leave it at that question for review of the use cases so CBCC can proceed with their call. I have a question as to what document assurance has to do with a security policy for access control, but let’s hold that.
Meeting was adjourned at 2:05 EDT. No significant motions or decisions were made.