FHIR QA Guidelines
Revision as of 20:29, 12 November 2012
This page identifies the guidelines used as part of the QA process. Not all of these will necessarily be evaluated as part of the QA process, and we still need to define who is responsible for verifying each one. Ideally, most QA would be performed by the authoring committees, with a smaller set of criteria, plus spot-checks of other criteria, performed by the FMG. We may also want to differentiate what gets done for different levels of ballot, and QA can be focused on areas that have changed.
QA Steps
Automated
These processes are currently handled by the build process (though someone still needs to review the build log and ensure it is clean).
We need to confirm that the build process actually does all of these.
- All examples & fragments are schema-valid and schematron valid
- All resource definitions and profiles are valid against their schemas & schematrons + additional rules
- All links resolve in the HTML
- All coded datatypes have bindings
- Fixed values only exist for simple types
- All OCL constraints compile
- Definitions, etc. only end with periods when they ought to
- UML views of everything (including data types) agree with definitions
- sid values are legal. All non-sid/guid/oid ids are flagged as warnings unless part of a pre-defined example space
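As an illustration of the kind of automated check listed above, a link-resolution pass over the generated HTML ("All links resolve in the HTML") might look like the following minimal sketch. The file layout and function names are assumptions for illustration, not part of the actual FHIR build tooling:

```python
import os
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags in a generated HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def unresolved_links(html_text, page_dir):
    """Return relative links that do not resolve to a file on disk.

    External (http/https/mailto) links and same-page anchors are
    skipped here; checking those would need a separate network pass.
    """
    collector = LinkCollector()
    collector.feed(html_text)
    broken = []
    for link in collector.links:
        if link.startswith(("http:", "https:", "mailto:", "#")):
            continue
        target = link.split("#")[0]  # drop any fragment
        if target and not os.path.exists(os.path.join(page_dir, target)):
            broken.append(link)
    return broken
```

A real pass would walk every generated page and also verify that in-page anchors (`#fragment`) exist in the target document.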
Automatable
These are processes that must be reviewed manually at the moment but could be handled in an automated fashion with appropriate enhancements to the build process
- Ensure all RIM mappings are "legal"
- Ensure RIM mappings don't collide/overlap
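The collision/overlap check could be scripted roughly as follows. The mapping representation here (pairs of element path and RIM target expression) is a hypothetical simplification of however the build tooling actually stores mappings:

```python
from collections import defaultdict

def find_rim_collisions(mappings):
    """Flag RIM target expressions claimed by more than one element.

    `mappings` is assumed to be an iterable of (element_path, rim_target)
    pairs, e.g. ("Patient.name", "EN"). Two distinct element paths mapped
    to the identical RIM expression are reported as a potential collision
    for manual review (they may still be legitimate).
    """
    by_target = defaultdict(set)
    for element_path, rim_target in mappings:
        by_target[rim_target].add(element_path)
    return {target: sorted(paths)
            for target, paths in by_target.items()
            if len(paths) > 1}
```

Anything this reports is a candidate for review rather than an automatic failure, since some RIM expressions can validly back more than one element.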
Manual
(Some of these can be focused only on those resources & sections that have changed from prior release)
- Tooling validation (validates that the build tooling is working correctly - only needs checking when build process changes)
- Ensure all content that's supposed to make it into the book form actually does
- Ensure all content from the website that doesn't appear in the book form appears in a secondary form for review
- Content validation
- Ensure the build runs successfully with no warnings
- Test that the XPath assertions for the Schematrons are valid using Saxon SA
- Formal process
- Do we have a PSS and resource request in place for all resources?
- Do we have mappings for the "source" specifications used to determine/validate 80%?
- Technical review
- Place both forms into MS Word and run grammar & spelling checks (U.S. English)
- Ensure style guide is followed for use of formatting
- To be defined. Includes: when to use bold, italics, capitalization, hyperlinks, color, ordered vs. unordered lists, sections
- Can we steal from W3C or someone?
- Text content review
- Ensure all definitions for code sets are mutually exclusive (and comprehensive)
- Ensure statuses on resources & profiles are accurate for ballot
- Ensure definitions are non-tautological and clear
- Ensure definition, rationale & notes are properly split
- Ensure definitions include examples when appropriate
- Ensure text is clear and reads well, with references to other topics when appropriate
- Ensure that constraints (cardinality, vocabulary, invariants, etc.) do not constrain extensibility more than necessary to allow safe base interoperability
- Check that wherever Conformance = Required, minimum cardinality = 1
- Are mappings valid against source specification
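The conformance/cardinality cross-check above lends itself to simple scripting once element definitions are available in machine-readable form. A sketch, where each element definition is assumed to be a dict with `path`, `conformance`, and `min` keys (a simplified stand-in for the real tooling's model):

```python
def conformance_cardinality_issues(elements):
    """Report elements whose conformance flag and minimum cardinality
    disagree: an element marked 'Required' should have minimum
    cardinality >= 1, and an element with min >= 1 should be marked
    'Required'.

    Each element is assumed to be a dict with 'path', 'conformance',
    and 'min' keys -- an illustrative shape, not the actual build model.
    """
    issues = []
    for e in elements:
        required = e.get("conformance") == "Required"
        min_card = e.get("min", 0)
        if required and min_card < 1:
            issues.append(f"{e['path']}: Required but min cardinality is {min_card}")
        if not required and min_card >= 1:
            issues.append(f"{e['path']}: min cardinality {min_card} but not marked Required")
    return issues
```

Running this over all resource and profile definitions during the build would move this item from the manual list into the automated one.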