201901 Bulk Data
Track Name
Bulk Data Access
Track Overview
Bulk Data API reference implementations:
- SMART Bulk Data Server (Node.js) (online demo)
- SMART sample command-line client (Node.js)
- Other implementations
Submitting WG/Project/Implementer Group
FHIR-I
Justification
Argonaut is taking up Bulk Data and Backend Services Authorization as a core project this year. Goals include:
- Ecosystem outcome expected to enable many specific use case/business needs: Providers and organizations accountable for managing the health of populations can efficiently access large volumes of information on a specified group of individuals without having to access one record at a time. This population-level access would enable these stakeholders to assess the value of the care provided, conduct population analyses, identify at-risk populations, and track progress on quality improvement.
- Technical Expectations: A standardized method, built into the FHIR standard, would support access to and transfer of large amounts of data on a specified group of patients, and that method could be reused for any number of specific business purposes.
- Policy Expectations: All existing legal requirements for accessing identifiable patient information via other bulk methods (e.g., ETL) used today would continue to apply (e.g., through HIPAA BAAs/contracts, Data Use Agreements, etc.).
Proposed Track Lead
Dan Gottlieb and Josh Mandel (Connectathon_Track_Lead_Responsibilities)
Expected participants
- Dan Gottlieb
- Grahame Grieve
- Josh Mandel
- Cerner
- CARIN health alliance
- Ken Kawamoto (along with CDS Hooks)
Roles
- Data Provider: provides data in the manner specified by the bulk data API
- Data Consumer: consumes data in the manner specified by the bulk data API and displays/processes the data
Scenarios
The bulk data track is divided into the following scenarios:
- Targeted bulk data export, open server without security
- Full bulk data export, open server without security
- Secured bulk data export using the SMART Backend Services specification - this is a focus area for this Connectathon
Scenario 1: Targeted Bulk Data Export (Open Endpoint)
See https://github.com/smart-on-fhir/fhir-bulk-data-docs for a description of the workflow.
Export targeted by Group, with optional start-date and resource-type filters.
Action
1. Data Consumer issues one or more of the following requests (see the client sketch after this list):
GET [base]/Group/[id]/$export?_outputFormat=ndjson
Accept: application/fhir+json
Prefer: respond-async

GET [base]/Group/[id]/$export?_outputFormat=ndjson&_since=[date-time]&_type=[FHIR Resource Type],[FHIR Resource Type]
Accept: application/fhir+json
Prefer: respond-async
2. Data Provider responds with a location for progress updates
3. Data Consumer requests a progress update
4. Data Provider responds with the operation's interim status (optional)
5. Data Provider responds with links to the generated data files
6. Data Consumer requests each of the generated files
7. Optionally, Data Consumer may ETL and process these files.
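A minimal TypeScript client sketch of this workflow, assuming a hypothetical open Data Provider at https://bulk-data.example.org/fhir, an example Group id, and Node.js 18+ (for the global fetch API); this is illustrative only, not a reference implementation:

// Scenario 1 sketch: kick off a Group-level export, poll for completion, download the NDJSON files.
// The base URL and Group id below are placeholders, not values defined by this track.
const base = "https://bulk-data.example.org/fhir";
const groupId = "example-group";

// Steps 3-5: poll the status URL until the server returns the completed manifest (HTTP 200).
async function pollUntilComplete(statusUrl: string): Promise<{ output: { type: string; url: string }[] }> {
  while (true) {
    const res = await fetch(statusUrl);
    if (res.status === 200) return res.json();                          // export complete; body is the manifest
    console.log("in progress:", res.headers.get("x-progress") ?? "");   // optional interim status from the server
    await new Promise((resolve) => setTimeout(resolve, 5000));          // wait before polling again
  }
}

async function exportGroup(): Promise<void> {
  // Step 1: kick-off request
  const kickOff = await fetch(`${base}/Group/${groupId}/$export?_outputFormat=ndjson`, {
    headers: { Accept: "application/fhir+json", Prefer: "respond-async" },
  });
  // Step 2: the Data Provider returns the progress location in the Content-Location header
  const statusUrl = kickOff.headers.get("content-location");
  if (kickOff.status !== 202 || !statusUrl) throw new Error(`kick-off failed: ${kickOff.status}`);

  const manifest = await pollUntilComplete(statusUrl);

  // Step 6: download each generated file; each NDJSON line is one FHIR resource
  for (const file of manifest.output) {
    const ndjson = await (await fetch(file.url)).text();
    const resources = ndjson.split("\n").filter(Boolean).map((line) => JSON.parse(line));
    console.log(`${file.type}: ${resources.length} resources`);
  }
}

exportGroup().catch(console.error);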
Scenario 2: Full Bulk Data Export
This request is for export across the entire Patient population, rather than within a specific group.
Action
1. Data Consumer requests a bulk data export for all patient data (see the request-building sketch after this list)
GET [base]/Patient/$export?_outputFormat=ndjson&_since=[date-time]&_type=[FHIR Resource Type],[FHIR Resource Type]
Accept: application/fhir+json
Prefer: respond-async
2. Data Consumer requests a bulk data export for all data
GET [base]/$export?_outputFormat=ndjson&_since=[date-time]&_type=[FHIR Resource Type],[FHIR Resource Type]
Accept: application/fhir+json
Prefer: respond-async
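A small sketch of building these two kick-off URLs in TypeScript, reusing the base constant from the Scenario 1 sketch; the _since and _type values are examples only, and the Accept/Prefer headers and polling/download steps are the same as in Scenario 1:

// Example kick-off URLs for Scenario 2; parameter values below are illustrative.
// URLSearchParams percent-encodes the colons in the _since instant and the comma in _type.
const params = new URLSearchParams({
  _outputFormat: "ndjson",
  _since: "2018-06-01T00:00:00-05:00",   // example start date
  _type: "Patient,Observation",          // example resource types
});

// Patient-level export: all data for all patients
const patientLevelUrl = `${base}/Patient/$export?${params}`;

// System-level export: all data the server holds
const systemLevelUrl = `${base}/$export?${params}`;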
Scenario 3: Secured Bulk Data Export (SMART Backend Services Protected Endpoint)
Action
1. Data Consumer registers itself with the Data Provider and obtains an access token as described in the SMART Backend Services specification (see the authorization sketch after this list)
2. Data Consumer and Data Provider follow the workflows described in Scenario 2, with the addition of an Authorization header on each request. If the requiresAccessToken key in the final async response is not set to true, the Data Consumer should not include the authorization token in the file download requests.
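A hedged TypeScript sketch of the authorization step, following the SMART Backend Services flow; the token endpoint, client id, and key file are placeholders for values established during registration, and the npm jsonwebtoken package is used to sign the client assertion:

import * as jwt from "jsonwebtoken";        // npm "jsonwebtoken", signs the client assertion
import { randomUUID } from "crypto";
import { readFileSync } from "fs";

const tokenUrl = "https://bulk-data.example.org/auth/token";   // placeholder token endpoint
const clientId = "my-bulk-client";                              // placeholder client id from registration
const privateKey = readFileSync("private-key.pem", "utf8");     // key registered with the Data Provider

async function getAccessToken(): Promise<string> {
  // Client assertion: a short-lived JWT signed with the registered private key
  const assertion = jwt.sign(
    { iss: clientId, sub: clientId, aud: tokenUrl, jti: randomUUID() },
    privateKey,
    { algorithm: "RS384", expiresIn: "5m" }
  );
  const res = await fetch(tokenUrl, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      scope: "system/*.read",
      client_assertion_type: "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
      client_assertion: assertion,
    }),
  });
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);
  return (await res.json()).access_token;
}

// The kick-off and status requests from Scenario 2 then carry:
//   Authorization: Bearer <access_token>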
TestScript(s)
This track exercises an API extension and will require extensions to the TestScript resource in order to be tested.
Security and Privacy Considerations
- Obviously, access to APIs like this in production requires both authentication and consent
- Scenario 3 tests out application authentication
- For now, it is assumed that consent is managed elsewhere, though extensions may be added to the stream for this (see [[1]])
- Audit: For now, it is assumed that applications will audit the initial FHIR retrieval and a SMART on FHIR login, but there are no rules about that
- The requiresAccessToken key is a proposal to support both servers that use SMART authentication to secure the generated files and those that leverage other techniques (e.g., S3 signed URLs) - see the sketch below.
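A short TypeScript sketch of how a client might honor this proposed key; the manifest shape follows the draft Bulk Data documentation, and downloadFiles and its arguments are illustrative helpers, not part of the track definition:

// Attach the bearer token to file downloads only when the manifest indicates the
// generated files are protected by the FHIR server's own authorization.
interface ExportManifest {
  requiresAccessToken?: boolean;
  output: { type: string; url: string }[];
}

async function downloadFiles(manifest: ExportManifest, token: string): Promise<void> {
  const headers: Record<string, string> = manifest.requiresAccessToken
    ? { Authorization: `Bearer ${token}` }   // files served from the secured endpoint
    : {};                                    // files served by other means, e.g. pre-signed S3 URLs
  for (const file of manifest.output) {
    const body = await (await fetch(file.url, { headers })).text();
    console.log(`${file.type}: ${body.split("\n").filter(Boolean).length} resources`);
  }
}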