In an elegant study by de Leval et al. [7], a table of hypothetical perioperative scenarios involving various types of errors in the cardiothoracic operating environment is presented in a didactic way. The authors outline that error detection should be the first step in error handling; moreover, there is suggestive evidence of a direct relationship between the number of minor events per case and adverse outcomes.
The Northern New England Cardiovascular Disease Study Group is a learning collaborative devoted entirely to improving the performance of cardiac surgery. By using the tools that we have alluded to, albeit focusing on quantitative outcome data, they have been able to produce the best outcomes across their entire 12-hospital network (full publication list at: http://www.nnecdsg.org/pub_lit_2.htm).
Furthermore, a number of publications document the extraordinary amount of work that various authorities have done on improving adult and paediatric cardiac surgery performance, and they merit citation here [8, 9]. These authors summarize the lessons learned about critical incident and near-miss reporting in other high-technology industries that are pertinent to cardiac surgery.
Critical incident and near-miss reporting based on human error taxonomies is in its infancy in the field of cardiac surgery [8]. However, monitoring near misses [7] can provide an early indication of deterioration in surgical performance. Furthermore, it is important to note that the hospital with the highest mortality rate [8] also had a high failure-to-rescue rate, suggesting that there were problems in the management of difficult complications.
Failure to rescue a patient might therefore be an appropriate measure of “inadequate organizational performance”. A “centralized cardiac registry” of major events and near misses, to which incidents in cardiac theatres would be reported and which could serve as a source of “learning examples for avoidance”, may become a helpful tool.
In a series of 24 successful operations, Catchpole et al. [10] identified 366 failures when checklists, notes and video recordings were employed. Interestingly, skill, knowledge and decision-making failures accounted for only a small percentage of the failures; furthermore, longer and riskier operations were likely to generate a greater number of minor failures than shorter, lower-risk operations.
The same authors [11], using a validated scale adapted from research in aviation, looked at the ability of a team to work safely; they concluded that decreasing the number of minor problems can lead to a smoother, safer and shorter operation.
Schraagen et al. [12], in an elegant and rigorous report, trained human factors observers to observe and code non-routine events and teamwork from the patient's arrival in the operating room to handover in the intensive care unit. The authors concluded [13] that their trained-observer model is ideal for exploring team performance.
Using trained human factors observers [14], 40 paediatric cardiac cases were observed with both quantitative and qualitative measures. The important results of this study showed that surgeons displayed better teamwork during complicated procedures, and that more procedural non-routine events were associated with a more complicated postoperative course.
Bognar et al. [15], using a survey amongst paediatric cardiac surgery team members, found that staffing levels, equipment availability, production pressures and hectic schedules were concerns. More interestingly, respondents confessed that guidelines and policies were often disregarded.
It has wrongly been perceived by a few [16] that surgical skills are innate aspects of one's personality and can neither be taught nor acquired. In this context, we should differentiate between ‘innate ability or aptitude’, which an individual is born with and brings to particular tasks, and ‘skill’ in execution, which is acquired by training and reinforcement. Furthermore, whilst some individuals seem able to acquire these skills more easily, many others could improve them with lengthier training. On their own, however, knowledge or skills are not enough. For example, there are trainee surgeons who are well informed on the surgical literature but are less than adequate in the operating room. Conversely, there are surgeons who are technically expert but consistently fail to achieve the results that would be expected from such expertise.
Surgeons with the “right knowledge and technical expertise” get better outcomes because they operate on the right patients at the right time, continue to perform under stress and successfully harness the support of a multidisciplinary team. It is well known that crew resource management skills (how to manage and guide the theatre team) are particularly important for cardiac surgeons. This is one of the emphasis points of the WHO checklist.
It has been hypothesized [17] that only 25% of the important events that occur during a surgical procedure are related to manual-technical skills, and that 75% relate to decision-making (especially during crises), communication, teamwork and leadership. Other human factors that are important in surgical practice include self-awareness (i.e. insight), conflict resolution and error management. For example, ‘task conflicts’ (the “who, what, when and why” between the various theatre teams) should be settled easily, without escalation to an argument.
Causes of accidents in the aviation industry are primarily related to deficiencies in non-technical skills rather than a lack of technical expertise. Flin et al. [18], in an elegant review, elaborate on Crew Resource Management (CRM) courses as an educational tool to improve aviation safety; the four primary categories subdivide into two social skills (co-operation; leadership and management) and two cognitive skills (situation awareness; decision making). The lack of non-technical skills has also been studied with great interest during surgical procedures [13].
In this editorial we project an idea grounded in a substantial body of important research on human factors. We propose a local and also a national registry of “events”, subdivided by the severity of those events. In addition, we envisage simple reporting mechanisms (questionnaires) together with real perioperative data acquisition. These may enable us to scrutinize surgical practice and to test the implementation and validity of cardiac surgical protocols.
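To make the registry concept concrete, the following is a minimal illustrative sketch, not the authors' actual system: all class names, severity tiers and fields are assumptions introduced only to show how events might be recorded locally and retrieved by severity for review.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List

# Hypothetical severity tiers; the real taxonomy would need clinical validation.
class Severity(Enum):
    NEAR_MISS = 1
    MINOR_EVENT = 2
    MAJOR_EVENT = 3

@dataclass
class Event:
    case_id: str          # anonymized identifier, supporting confidential reporting
    description: str
    severity: Severity
    reported_on: date

@dataclass
class EventRegistry:
    """A local registry; a national registry could aggregate several of these."""
    events: List[Event] = field(default_factory=list)

    def report(self, event: Event) -> None:
        self.events.append(event)

    def by_severity(self, severity: Severity) -> List[Event]:
        # Retrieve all events of a given severity for team review.
        return [e for e in self.events if e.severity == severity]

# Usage: record a near miss, then retrieve all near misses for debriefing.
registry = EventRegistry()
registry.report(Event("case-001", "mislabelled syringe noticed before use",
                      Severity.NEAR_MISS, date(2024, 1, 15)))
near_misses = registry.by_severity(Severity.NEAR_MISS)
```

Subdividing events by severity in this way would allow clusters of minor events and near misses to be reviewed before they contribute to an adverse outcome.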
Limitations of applying “various systems” in clinical practice
Under-reporting [19] of incidents is a limiting factor that unfortunately relates to the old notion of a “blame culture” in medicine; it is inappropriate when junior doctors do not report incidents [20] because they fear they will be blamed. It is equally wrong when they are blamed by senior colleagues; under these circumstances the team should be united in investigating and learning from the errors.
The obligation to name and define the role of the person reporting an incident can be a restrictive factor for reporting; confidentiality evidently encourages easier reporting and should probably be implemented where possible. Moreover, the retrospective nature of incident reporting further necessitates validation of reports for accuracy.
Clear definition of near misses in cardiac surgery [8] is complicated by the need to distinguish those events from serious peri-operative and post-operative complications. This can only be achieved when the human factors observers are trained and familiar with the procedures under investigation. In other words, standardized training and calibration of observers would improve data collection.
The capture of adverse actions and events is subjective and depends on the observers' training and education; therefore, accurate depiction of data in this respect could be limited by inter-rater reliability. Moreover, there are many technical challenges in video-recording surgical teams, such as logistics, ethics and interpretation [12]. Weaknesses in using video for data collection include lengthy video review processes, poor audio, and the inability to adequately analyse events outside the field of view [21].
The development and implementation of a system for measuring technical performance in the operating theatre is difficult. The challenges of grading technical expertise have been addressed by Karamichalis et al. [22]. An individual practitioner's surgical performance includes the technical domain as well as non-technical skills such as cognitive flexibility and decision-making.
Technical performance is defined as “the adequacy of the intended surgical anatomic repair”, and, as per Larrazabal et al. [23], intraoperative technical performance is one of the most important, if not the most important, parts of the therapeutic process and determines postoperative outcomes. None of the currently available quality monitoring tools measures technical performance; the authors created and validated a scoring tool for this purpose. The limitation of their tool, however, is that its “rating of surgical adequacy” is based on echocardiography, which introduces the bias of an operator-dependent technique.
Lastly, using models for reducing errors may be part of the solution; as per Auroy et al. [24], risk assessment and control require analysis of both outcomes and the process of care.
We acknowledge several limitations of this report. The model reported here is a zealous and enthusiastic project, attempting to capture more accurately the many well-defined aspects of human factors in the cardiothoracic environment.
Our first attempt at implementing a robust model was conceived after a few years of using briefing and debriefing in the cardiac operating room, in a manner similar to the model reported by Papaspyros et al. [25], with satisfactory outcomes (unpublished results).
We should clarify that we conceived the idea of the RECORD model in an attempt to bring more variables together in a continuous fashion. In other words, we would compile, in a stepwise fashion, a “feedback approach” from the staff involved in the theatre environment together with the “specialist reporting approach” of the human factors observers.
We would also like to confess that we are still working to make the questionnaires simple and reproducible for easy use.
Overall, this is a preliminary idea that we are working to materialize.
Lastly, in this editorial we hypothesized that by putting more variables together we would be able to overcome some of the limitations of human factors research and performance assessment. Although we have only just begun implementing our model in clinical practice, we hope to return with real-world data analysis, in order to depict and act upon clusters of latent errors before they become active. We hope to show, by piloting this model in a subsequent study, that we can support its basic idea (more variables, more feedback, less bias) for others to emulate.