FHIR QA Approach

From HL7Wiki
Revision as of 23:36, 12 April 2013

This page discusses how QA will be handled leading up to the first DSTU.

May QA

The following is the set of instructions for performing QA for May. Each reviewer will be assigned a single topic (see list of sections below) and will be asked to review all resources keeping in mind the criteria for that one topic.

General considerations

  • Try hard to look at every resource from the perspective you've been assigned. If you're not going to be able to do that, coordinate with the other reviewers for your "category" and try to ensure that each resource gets reviewed at least twice. (Coordinating early is wise.)
  • Review one or two resources in the first 24 hours so that you have a sense of what you need to do and how long it will take, and so we can resolve any questions early in the process
  • Think about anything that would make the review process easier when we need to do the same thing again in July. (You're not on the hook for July, though we'd certainly welcome your participation again.)
  • If you come up with other review criteria ideas, pass them back to one of the FMG co-chairs

Resource descriptions

This is the text prior to the UML diagram. (Ignore the stuff in pink that describes "status".) It also includes the text following the Bindings and Constraints section (and preceding the Search Criteria section). In other words, it's all the descriptive text presented in paragraphs.

Things to evaluate:

  • The resource boundaries should be clearly described in the first paragraph or two - it should be obvious what's in and out of scope for the resource
    • Are related resources identified, with boundaries clearly delineated such that there's no overlap? (It may not be clear what potential overlaps exist until you've had a chance to look at other resources, so keep previous resources in the back of your mind as you're reviewing new ones.)
  • Is the scope unnecessarily restrictive? (e.g. type of care, animal vs. human, etc.)
  • Is content appropriately targeted as "what do implementers need to know?" (We don't want information targeted at modelers, clinicians, etc. that's not relevant to implementers.)
  • Is content organized consistently across resources? E.g. Are sections that cover the same sorts of things named the same way? Are some resources missing sections they probably should have? Is content presented in roughly the same order?
  • Anything else about the organization or presentation that would tend to lead to a negative ballot?

Resource elements

Look at the XML representation (with the short descriptions), not the UML diagrams. If necessary, click on an element name to see the full definition.

Things to evaluate:

  • Are names clear (intuitive) and consistent (following same patterns both within and across resources)?
  • Does the choice of types allow for null flavors if appropriate/relevant?
  • Is nesting appropriate (no unnecessary levels, appropriate grouping of repeating/optional elements)?
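
The naming-consistency check lends itself to a quick mechanical first pass. A minimal sketch, assuming a lowerCamelCase convention for element names; the element paths below are invented for illustration, not drawn from any actual resource definition:

```python
import re

# Hypothetical element paths as they might appear in a resource's XML
# representation (illustrative only, not from a real FHIR resource).
element_paths = [
    "Patient.birthDate",
    "Patient.contact.relationship",
    "Patient.Contact_Name",  # should be flagged: not lowerCamelCase
]

# Assumed convention: element names (after the resource name) use lowerCamelCase.
LOWER_CAMEL = re.compile(r"^[a-z][a-zA-Z0-9]*$")

def inconsistent_names(paths):
    """Return paths whose non-root segments break the lowerCamelCase pattern."""
    bad = []
    for path in paths:
        segments = path.split(".")[1:]  # skip the resource (root) name
        if any(not LOWER_CAMEL.match(seg) for seg in segments):
            bad.append(path)
    return bad

print(inconsistent_names(element_paths))  # → ['Patient.Contact_Name']
```

A check like this only catches pattern violations; whether a name is *intuitive* still needs a human reviewer.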

80%

Look at the XML representation (with the short descriptions), not the UML diagrams. If necessary, click on an element name to see the full definition.

Things to evaluate:

  • Does the list of elements seem intuitively appropriate as "part of the 80%" for the broad scope of the resource? I.e. would most implementers (taking into account the breadth of scope for the resource) use this element?
  • Where an element doesn't pass the "intuitive" sniff-test, are there clearly documented requirements and mappings supporting the element's inclusion?
  • Anything that seems missing?
  • Do the provided mappings demonstrate validation against a variety of existing specifications?

Constraints

For background, read the "Cardinality" and "Constraints" sub-sections of the "Resource Format" section.

Look at the XML representation (with the short descriptions).

Things to evaluate:

  • Are minimum cardinality 1 elements truly necessary? I.e. is it not possible to have a sensible instance in any circumstance with the element missing?
  • Is mustUnderstand set properly? (It should be set where the element influences the interpretation of other elements.)
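
The minimum-cardinality question can be probed by testing candidate "partial knowledge" instances against the declared cardinalities. A hedged sketch with invented element names and minimums, not taken from any real FHIR definition:

```python
# Hypothetical minimum cardinalities for a resource's elements.
min_cardinality = {
    "status": 1,   # required - ask: is an instance without a status ever sensible?
    "subject": 1,  # required - what about anonymous or partially known cases?
    "note": 0,     # optional
}

def missing_required(instance, cardinalities):
    """Return required (min >= 1) elements that are absent from the instance."""
    return [name for name, minimum in cardinalities.items()
            if minimum >= 1 and name not in instance]

# An instance captured under "partial knowledge": the subject is not yet known.
partial_instance = {"status": "registered"}
print(missing_required(partial_instance, min_cardinality))  # → ['subject']
```

If a plausible real-world instance lands in the "missing required" bucket, that is evidence the minimum cardinality of 1 is too strict.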

Look at the "Constraints" section underneath the terminology bindings section

Things to evaluate:

  • Are any constraints missing that ought to be enforced?
  • Is the constraint language clear? Is the rationale obvious?
  • Is the constraint too tight? Will it work in all implementation environments, in situations of "partial knowledge", and in edge cases?

Vocabulary

For background, read the Codes and Terminology section

Look at the terminology bindings section. (You'll need to click on the links to see the lists of codes.)

Things to evaluate:

  • If a binding is identified as "required", is it reasonable to expect *all* implementations to manage with only the provided set of codes and to not have issues mapping their existing content?
  • Where value lists have been provided, are all concepts clearly mutually exclusive, or if not, defined in a hierarchy in which all siblings are mutually exclusive?
  • Are we inventing new codes where appropriate codes are already defined in existing code systems?
  • Are code descriptions clear? Will implementers understand what the codes mean?
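
For the mutual-exclusivity check on hierarchical value lists, it can help to group the codes by parent so each sibling set can be reviewed together. A small sketch using invented codes (the exclusivity judgment itself remains a human call):

```python
from collections import defaultdict

# Invented (code, parent) pairs standing in for a hierarchical value list;
# None marks a top-level code.
codes = [
    ("medication", None),
    ("device", None),
    ("oral", "medication"),
    ("injectable", "medication"),
]

def sibling_groups(code_pairs):
    """Map each parent (None = top level) to the list of its child codes."""
    groups = defaultdict(list)
    for code, parent in code_pairs:
        groups[parent].append(code)
    return dict(groups)

# Each printed group is one sibling set to check for mutual exclusivity.
for parent, children in sibling_groups(codes).items():
    print(parent, "->", children)
```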

Search Criteria

For background, read the "Search Parameters" sub-section of the REST section

Look at the "Search Criteria" section at the bottom of the resource

Things to evaluate:

  • Are search criteria consistently named and defined across resources?
    • E.g. are the same criteria named differently in different resources?
  • Do search criteria fit the 80%? I.e. will most systems (taking into account the breadth of use of the resource) want to search by/support searching by this criterion?
  • Is the expected behavior of the search criteria clear or, where multiple behaviors are possible, is the range of behavior obvious? E.g. Do we know which element will be searched on, how the match will be done, etc.
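
One way to surface naming inconsistency is to write out the search requests side by side. A sketch assuming a hypothetical server base URL and invented parameter names (the point is the mismatch, not the specific parameters):

```python
from urllib.parse import urlencode

BASE = "http://example.org/fhir"  # hypothetical server, for illustration only

def search_url(resource, **params):
    """Build a REST search URL of the general form [base]/[type]?param=value."""
    return f"{BASE}/{resource}?{urlencode(params)}"

# If one resource uses "name" and another uses "practitionerName" for the same
# kind of criterion, seeing the URLs together makes the inconsistency obvious.
print(search_url("Patient", name="smith"))
print(search_url("Practitioner", practitionerName="smith"))
```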

Examples

For background, read the Formats section

Look at the examples tab. You can review either XML or JSON, but there's no need to review both (they're transforms of each other)

Things to evaluate:

  • Do examples cover the breadth and depth of the complete scope of use of the resource?
  • Do examples adequately exercise the elements (use all elements, show use of repetition, optionality, use of different types)?
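
The "use all elements" check can be partially automated by collecting which elements the examples, taken together, actually exercise. A sketch over invented stand-in examples; real FHIR instances would be richer, but the idea is the same:

```python
import xml.etree.ElementTree as ET

# Invented stand-ins for a resource's example instances.
examples = [
    "<Thing><status value='active'/><note value='first'/></Thing>",
    "<Thing><status value='inactive'/></Thing>",
]

# Hypothetical set of elements the resource definition declares.
defined_elements = {"status", "note", "category"}

def covered_elements(xml_examples):
    """Collect the set of top-level child-element names used across examples."""
    used = set()
    for doc in xml_examples:
        root = ET.fromstring(doc)
        used.update(child.tag for child in root)
    return used

# Elements no example exercises - candidates for a new or extended example.
print(sorted(defined_elements - covered_elements(examples)))  # → ['category']
```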

July QA

  • Everything we did in May, plus check for grammar, spelling, consistent language, valid links, etc.

Pass DSTU

Before a resource can be approved as DSTU:

  • At least one of the reference implementations must fully support the resource
  • The test suite must fully support the resource