
Proposal for analyzing reported implementation problems

From HL7Wiki
Revision as of 19:07, 16 December 2009 by Danelson

Proposed method for analysing reported implementation problems with the HL7v3 XML ITS

Introduction

The ITS WG at the Sept 09 WGM initiated a project linked to SAEAF to analyse problems reported by implementers concerning the HL7v3 XML ITS.

A range of implementation problems have been reported to the ITS WG, from different projects that have different aims, scope and approaches to implementation. In advance of analysis it is not clear what methods, processes or artefacts in HL7 may be contributing to the reported problems, or indeed whether the various problems reported are problems of implementability in principle or arise more from particular approaches to implementation.

This document describes a candidate method for analysing the reported problems to determine causal factors, appropriate mitigations, and associated costs, benefits & drawbacks.

Note on sources: The method outlined below draws on two techniques from software and systems safety: Hazards and operability analysis (known as HAZOP) for analysing outcomes, and the “Why-because analysis”(WBA) method of assessing causal factors. HAZOP is concisely described in Nancy Leveson’s standard text “Safeware: System Safety and Computers” (Addison-Wesley 1995); see Peter Ladkin’s research group pages http://www.rvs.uni-bielefeld.de/ for WBA.

Method

In order to disentangle the reported problems into a structured set of causal factors and mitigations, a four-stage analysis method linked to SAEAF is proposed, preceded by an input-gathering step:

a) Gather input: List the problems as reported, together with any measures taken to counter each problem (mitigations). In addition, elicit from the ITS WG and other HL7 sources any additional problems envisaged or encountered with the ITS.

b) Stage 1 analysis: Analyse the reported and envisaged problems to identify specific problematic outcomes arising in the context of planned or actual implementation.

c) Stage 2 analysis: For each outcome, determine its context in the SAEAF, and identify candidate causal factors.

d) Stage 3 analysis: For each causal factor, map it into a “box” in the SAEAF, and identify candidate mitigations, from problem reports and other sources.

e) Stage 4 analysis: For each candidate mitigation, map it into a “box” in the SAEAF, and determine who needs to do what to implement the mitigation. State its intended benefit (to whom) and its expected costs and drawbacks (to whom).
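As a purely hypothetical illustration (the class names, field names and SAEAF “box” labels below are ours, not part of the proposal), the artefacts produced at each stage can be thought of as a chain of records, each stage elaborating the output of the previous one:

```python
from dataclasses import dataclass, field

# Illustrative data model for the analysis chain:
# reported problem -> outcome -> causal factor -> mitigation.
# All names and SAEAF "box" labels here are invented for the sketch.

@dataclass
class Mitigation:
    description: str
    saeaf_box: str          # where in the SAEAF the mitigation applies
    actor: str              # who needs to do what (Stage 4)
    benefits: list = field(default_factory=list)
    costs: list = field(default_factory=list)

@dataclass
class CausalFactor:
    description: str
    saeaf_box: str          # Stage 3 maps each factor into a SAEAF "box"
    mitigations: list = field(default_factory=list)

@dataclass
class Outcome:
    description: str        # stated at face value, no implied cause (Stage 1)
    causal_factors: list = field(default_factory=list)

@dataclass
class Problem:
    report: str             # the problem as reported by the implementer
    outcomes: list = field(default_factory=list)

# Stages 1-4 walk the chain outward from each reported problem.
p = Problem("Schema too complex for standard tooling")
o = Outcome("poor performance from industry-standard integration tooling")
cf = CausalFactor("deeply nested XML structures", saeaf_box="Information Framework")
m = Mitigation("flatter wire-format schema", saeaf_box="Information Framework",
               actor="ITS WG", benefits=["simpler middleware"],
               costs=["migration effort"])
cf.mitigations.append(m)
o.causal_factors.append(cf)
p.outcomes.append(o)
```

The point of the sketch is only that each stage attaches new structure to the previous stage's output, so the completed analysis can be traversed from any reported problem down to the costs and benefits of its candidate mitigations.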

Gathering input

The principal and motivating source of problems for analysis is the set of problems reported by implementers. However, it is also important to elicit actual and potential problems from the designers of the XML ITS, and from other potentially useful contributors in HL7. HAZOP has a good track record as a structured method for thinking through problems emerging in operational systems. The following table adapts the standard HAZOP guiding questions (as used in engineering design analysis for system safety) to a healthcare information systems design context.

Guiding question   Meaning
None?              The intended result is not achieved and nothing else happens.
More?              More of any relevant quantity than there should be.
Less?              Less of any relevant quantity than there should be.
As well as?        The actual result includes the intended result but contains additional unintended material or effects.
Part of?           Only some of the design intention is achieved.
Reverse?           The logical opposite of what was intended occurs.
Other than?        The intended result is not achieved and something completely different happens.
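A minimal sketch (our own framing, not part of HAZOP itself) of using the guide words above as a prompt list when reviewing a stated design intention:

```python
# HAZOP guide words, paired with their meanings from the table above.
GUIDE_WORDS = {
    "None?": "The intended result is not achieved and nothing else happens",
    "More?": "More of any relevant quantity than there should be",
    "Less?": "Less of any relevant quantity than there should be",
    "As well as?": "The intended result plus unintended material or effects",
    "Part of?": "Only some of the design intention is achieved",
    "Reverse?": "The logical opposite of what was intended occurs",
    "Other than?": "The intended result is not achieved and "
                   "something completely different happens",
}

def prompts(design_intention):
    """Yield one review question per guide word for a given design intention."""
    for word, meaning in GUIDE_WORDS.items():
        yield f"{word} {design_intention} -- {meaning}"

# Example design intention (invented for illustration):
for q in prompts("the receiver parses every mandatory element"):
    print(q)
```

Each printed line is a structured prompt for the review session; the value of the method lies in forcing every guide word to be considered against every design intention, not just the failure modes that first come to mind.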

Stage 1 analysis: Classify the reported problems into types of outcomes

The objective of Stage 1 analysis is to analyse out a list of specific outcomes from problems reported and envisaged from many different contexts and points of view.

For each reported and envisaged problem, identify situations (outcomes) arising in the context of planned or actual implementation. Example outcomes could be: “poor performance from industry-standard integration tooling”; “developers find the XML structure is difficult to understand and/or cumbersome to use”. At this stage it is important to describe the outcome at face value and not in terms of implied causes or reasons.

The expectation is that each reported or envisaged problem will yield one or more outcomes, and that common outcomes will emerge across a range of reported and envisaged problems. It is quite likely that some outcomes will appear to be in conflict. It is important NOT to try to resolve any apparent opposites or contradictions at this stage, since conflicting outcomes can be very revealing in causal factor analysis.
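A hypothetical way to surface common outcomes across many reports is simply to tally how often each face-value outcome recurs (the report names and outcome strings below are invented):

```python
from collections import Counter

# Map each problem report to the outcomes analysed out of it.
report_outcomes = {
    "report A": ["poor performance from integration tooling",
                 "XML structure cumbersome to use"],
    "report B": ["XML structure cumbersome to use"],
    "report C": ["poor performance from integration tooling",
                 "schema validation rejects valid messages"],
}

# Common outcomes are those analysed out of more than one report.
tally = Counter(o for outcomes in report_outcomes.values() for o in outcomes)
common = sorted(o for o, n in tally.items() if n > 1)
```

Conflicting outcomes would simply appear as distinct entries in the tally; nothing in this step attempts to reconcile them, in line with the guidance above.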

Stage 2 analysis: For each outcome, identify candidate causal factors.

The objective of Stage 2 analysis is to analyse out a list of specific causal factors for each outcome emerging from Stage 1 analysis. To help in this process, for each outcome, determine its context in the SAEAF, and look for candidate causal factors in particular that belong to SAEAF “boxes” that are neighbouring, or predecessors in the usual flow of development and implementation.

The key question for causal factor analysis is “Why did this occur? … because …”. It is important to keep an open mind when looking for causal factors, and in particular not to rule out any particular kind of causal factor before seeing whether it is actually relevant. Some outcomes will yield chains or trees of causal factors, and common sense is needed to judge how much structure and dependency to capture for the analysis, and where to stop in chasing down a particular sequence. It is also important to distinguish between errors committed by people and errors committed by computers: for example, the tendency of complex record component data formats to introduce errors in middleware XML processing, and the error-proneness of the human developers working with those formats, should be noted as two separate causal factors.
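The repeated “Why did this occur? … because …” questioning naturally produces a tree of factors. A hypothetical sketch (the structure and the example statements are ours) of recording such a why-because tree:

```python
# A why-because tree: each child node answers "why?" for its parent.
class WhyNode:
    def __init__(self, statement, because=None):
        self.statement = statement
        self.because = because or []   # child factors answering "why?"

    def depth(self):
        """Length of the longest chain of causes below this node."""
        return 1 + max((c.depth() for c in self.because), default=0)

# Example tree; note the machine factor and the human factor are
# deliberately kept as two separate branches.
root = WhyNode(
    "developers find the XML structure cumbersome to use",
    because=[
        WhyNode("complex data formats introduce errors in "
                "middleware XML processing"),
        WhyNode("human developers make mistakes hand-writing deep element paths",
                because=[WhyNode("element names repeat at "
                                 "multiple nesting levels")]),
    ],
)
```

The `depth` helper is one simple way to apply the “where to stop” judgement: a chain that grows very deep is a signal to check whether the later links still add anything useful to the analysis.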

The expectation is that each outcome will yield several causal factors, at different places in the SAEAF, and that common causal factors will emerge across a range of outcomes. It is quite likely that outcomes that initially look similar will yield different patterns of causal factors.

Stage 3 analysis: For each causal factor, identify candidate mitigations.

The objective of Stage 3 analysis is to collect and devise candidate mitigations for each causal factor emerging from Stage 2 analysis.

The initial gathering of input will include some accounts of actual or suggested corrective action, and these will suggest some mitigations (though these should not be adopted without close examination to determine which causal factor each one addresses). The mapping of causal factors onto the SAEAF framework should also help in thinking through suitable candidate mitigations.

Stage 4 analysis: For each candidate mitigation, determine who needs to do what and identify benefits, costs and drawbacks

The objective of Stage 4 analysis is to gain a good understanding of the benefits, costs and drawbacks of the candidate mitigations emerging from Stage 3 analysis. In particular, it is likely that some candidate mitigations cause costs and drawbacks related to one part of the SAEAF in order to deliver benefits in another, & this needs to be brought out clearly. For each candidate mitigation, the kinds of people and organizations experiencing the respective benefits, costs and drawbacks need to be identified with particular care.
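One hypothetical way to make the cross-SAEAF trade-offs explicit is to tabulate, per candidate mitigation, who gains and who pays (the stakeholders and effects below are invented for illustration):

```python
# Each entry: (mitigation, stakeholder, effect_kind, description).
# The data is illustrative, not drawn from the actual analysis.
ledger = [
    ("flatter schema", "implementers", "benefit",
     "simpler middleware mapping"),
    ("flatter schema", "HL7 tooling maintainers", "cost",
     "regenerate code generators"),
    ("flatter schema", "existing adopters", "drawback",
     "breaking change to wire format"),
]

def by_stakeholder(entries):
    """Group effects so each stakeholder's gains and costs sit side by side."""
    out = {}
    for mitigation, who, kind, desc in entries:
        out.setdefault(who, []).append((mitigation, kind, desc))
    return out

grouped = by_stakeholder(ledger)
```

Viewing the ledger grouped by stakeholder makes it immediately visible when a mitigation delivers its benefit to one group while its costs and drawbacks fall on another, which is exactly the situation Stage 4 is meant to bring out.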

Results

The result of completing this method is a well-founded and thorough understanding of the kinds of problems arising or likely to arise, together with how they arise, how they might be addressed, and the costs, benefits and drawbacks of various possible tactics. The objective of the completed analysis is to provide a good basis for decision making, rather than to make decisions.

AMW 2009-09-28