
FHIR Collaborative Review Implementation Guide

From HL7Wiki

Introduction

This Implementation Guide describes how a FHIR server can provide a set of services that allows a team to review a resource and comment on its contents, after which a collator provides a response and the original resource is updated.

This Implementation Guide is intended to support the collaborative review process for conformance resources (profiles, extensions, value sets, translations, conformance statements, etc.).

Note: The process that this IG supports may also be useful and applicable to other kinds of FHIR resources (e.g. clinical review of a medications list), or some variant of the process may be needed, but this is out of scope of the first round of this implementation guide.

Overview

Fundamentally, the review process follows this general pattern:

  1. a user (the 'editor') initiates a review cycle on a 'target resource'
  2. other users ('reviewers') are invited to participate in the review
  3. reviewers are presented with a form that allows them to comment on the contents of the resource
  4. the editor is notified of the reviewers' progress in the review
  5. the reviewers and the editor are able to see a summary of user comments on the review, and respond with comments of their own
  6. reviewers are notified of the editor's comments, and all users are able to see the final outcome
  7. the editor updates the target resource
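As a rough illustration of step 1, a review cycle could be opened by POSTing an Encounter that links the editor, the reviewers, and the target resource. This is only a sketch: the participant role wordings, ids, and the use of `subject` to point at the target resource are assumptions, not something this guide defines.

```python
import json

def start_review_cycle(editor_id, target_ref, reviewer_ids):
    """Build a hypothetical Encounter representing a new review cycle.

    editor_id / reviewer_ids are Practitioner logical ids; target_ref is a
    reference to the 'target resource' under review (all illustrative).
    """
    participants = [
        {"individual": {"reference": f"Practitioner/{editor_id}"},
         "type": [{"text": "editor"}]},
    ]
    for rid in reviewer_ids:
        participants.append(
            {"individual": {"reference": f"Practitioner/{rid}"},
             "type": [{"text": "reviewer"}]})
    return {
        "resourceType": "Encounter",
        "status": "in-progress",
        # Assumption: the target resource is referenced via subject
        "subject": {"reference": target_ref},
        "participant": participants,
    }

cycle = start_review_cycle("ed1", "StructureDefinition/my-profile", ["rev1", "rev2"])
body = json.dumps(cycle)  # payload for POST [base]/Encounter
```

The resulting JSON would be POSTed to the server's `Encounter` endpoint; the server's response id then identifies the review cycle in later steps.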

FHIR Resources

The following FHIR resources are used in the review process:

  • Practitioner - all the registered users (see below) involved in the process are exposed as Practitioner resources that represent their possible roles in the system
  • Group - the Group resource allows users to identify their review interests
  • Encounter - each review cycle is represented by an encounter
  • Questionnaire - generated to collect comments from the reviewer or editor
  • QuestionnaireResponse - records the input made by the reviewer or editor
  • StructureDefinition & ValueSet - used by the server when generating questionnaires for steps 3 and 5
  • Communication/CommunicationResponse - used to track invitations and notifications through the process
  • AuditEvent - used to record all the steps in the process
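To make the Communication usage concrete, the invitation sent to each reviewer in step 2 might look like the sketch below. The payload text, status, and id values are illustrative assumptions; only the resource type and its references back to the Encounter and Practitioners come from the list above.

```python
def make_invitation(encounter_id, editor_id, reviewer_id):
    """Hypothetical invitation Communication for one reviewer.

    Links the review cycle (Encounter) to sender (editor) and
    recipient (reviewer); all ids and wording are illustrative.
    """
    return {
        "resourceType": "Communication",
        "status": "completed",
        "encounter": {"reference": f"Encounter/{encounter_id}"},
        "sender": {"reference": f"Practitioner/{editor_id}"},
        "recipient": [{"reference": f"Practitioner/{reviewer_id}"}],
        "payload": [{"contentString": "You are invited to review this resource."}],
    }

invite = make_invitation("review-1", "ed1", "rev1")
```

One Communication per reviewer keeps notification tracking simple: each reviewer's response (or lack of one) can be followed individually.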

The heart of a good Collaborative Review server is its ability to generate good questionnaires from the underlying resource definitions and the target resource.
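One simple way a server might derive such a questionnaire is to walk the element paths in the StructureDefinition's snapshot and emit one free-text comment item per element. This is a minimal sketch of that idea, not the generation algorithm the guide prescribes; the `linkId` convention and item wording are assumptions.

```python
def questionnaire_from_structure(structure_definition):
    """Sketch: one free-text comment item per element path in the snapshot."""
    items = [
        {"linkId": el["path"],              # assumption: path doubles as linkId
         "text": f"Comments on {el['path']}",
         "type": "text"}
        for el in structure_definition.get("snapshot", {}).get("element", [])
    ]
    return {
        "resourceType": "Questionnaire",
        "status": "draft",
        "item": items,
    }

# Tiny illustrative StructureDefinition fragment
sd = {
    "resourceType": "StructureDefinition",
    "snapshot": {"element": [{"path": "Patient"}, {"path": "Patient.name"}]},
}
q = questionnaire_from_structure(sd)
```

A real generator would also fold in value set bindings and cardinalities so reviewers see context for each element, which is where the StructureDefinition and ValueSet resources listed above come into play.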