Acad Med. 2017 Apr 11;92(11):1607–1616. doi: 10.1097/ACM.0000000000001674

Impact of Patient Affect on Physician Estimate of Probability of Serious Illness and Test Ordering

Jeffrey A Kline 1, Dawn Neumann 2, Samih Raad 3, David L Schriger 4, Cassandra L Hall 5, Jake Capito 6, David Kammer 7
PMCID: PMC5662157  PMID: 28403005

Supplemental Digital Content is available in the text.

Purpose

The authors hypothesized that patient facial affect may influence clinicians’ pretest probability (PTP) estimate of cardiopulmonary emergency (CPE) and their desire to order a computerized tomographic pulmonary angiogram (CTPA).

Method

This prospective study was conducted at three Indiana University–affiliated hospitals in two parts: collecting videos of patients undergoing CTPA for suspected acute pulmonary embolism as they watched a humorous video (August 2014–April 2015) and presenting the medical histories and videos to clinicians to determine the impact of patient facial affect on physicians’ PTP estimate of CPE and desire to order a CTPA (June–November 2015). Patient outcomes were adjudicated as CPE+ or CPE− by three independent reviewers. Physicians completed a standardized test of facial affect recognition, read standardized medical histories, then viewed videos of the patients’ faces. Clinicians marked their PTP estimate of CPE and desire for a CTPA on a visual analog scale (VAS) before and after seeing the video.

Results

Fifty physicians completed all 73 videos. Seeing the patient’s face produced a > 10% absolute change in PTP estimate of CPE in 1,204/3,650 (33%) cases and desire for a CTPA in 1,095/3,650 (30%) cases. The mean area under the receiver operating characteristic curve for CPE estimate was 0.55 ± 0.15, and the change in CPE VAS was negatively correlated with physicians’ standardized test scores (r = −0.23).

Conclusions

Clinicians may use patients’ faces to make clinically important inferences about presence of serious illness and need for diagnostic testing. However, these inferences may fail to align with actual patient outcomes.


Our hypothesis originates with the belief that patient facial expressions strongly influence clinicians’ initial interpretation of illness acuity. In particular, we sought to test whether patient facial affect contributes to the clinician’s estimate of the probability of an immediate threat to life in the emergency care setting. For at least three reasons, patients with suspected acute pulmonary embolism (PE) undergoing computerized tomographic pulmonary angiogram (CTPA) scanning provide a generalizable and classical model to study the influence of patient affect on provider suspicion of acute illness.1–3 First, patients with PE have symptoms (e.g., chest pain, shortness of breath, fatigue) that are also caused by numerous other conditions. Thus, for a study concerning the ability of clinicians to rapidly distinguish sick from not sick patients, studying patients undergoing CTPA scanning has an additional value beyond the diagnosis and exclusion of PE because clinicians know that diagnoses besides PE (e.g., pneumonia) are actually more commonly discovered on CTPA scans.4–9 Second, PE offers intriguing oppositional influences for the study of clinical judgment. On the one hand, PE is still often missed, remains the third leading cause of cardiovascular death in the United States, and can kill suddenly.10 On the other hand, clinicians in emergency departments tend to overtest patients with possible PE, leading to negative consequences.11–15 Reasons for overtesting include medicolegal concerns, perceived normative behaviors of peers, patient expectations, and the belief that the culture has zero tolerance for error.11,16–18 These influences, together with medical data customarily available with all patients such as age, chief complaint, past medical history, and vital signs (which we refer to as the standardized medical history), coalesce at the bedside where clinicians see the patient and make test-ordering decisions.

Although multiple structured pretest probability systems exist to estimate probability of PE, prior work has found that clinicians use implicit judgment to make a sick or not sick determination, especially in the first minute of meeting a new patient.19–24 To our knowledge, no study has tested whether viewing a patient’s facial affect changes the clinician’s estimate of the probability of significant disease or desire to order a diagnostic test. Understanding how facial affect may impact the way clinicians formulate their probability estimates and make test-ordering decisions is important because sicker patients have been found to manifest different affect than those who were less sick.25 In particular, facial expressions of disgust and anger appear to represent more serious illness.25 Moreover, in an earlier study, we found that, among patients undergoing CTPA scanning, physician recall of patients smiling was surprisingly more common among patients with PE than among patients without PE.26 Accordingly, we designed the present study to measure the direction and magnitude of the impact caused by patient affect on physicians’ pretest probability estimate of a cardiopulmonary emergency (CPE) and desire to order a CTPA for that patient. Additionally, we aimed to compare this impact with results of the clinicians’ performance on a standardized test of facial affect recognition, as well as with their training level, specialty, and gender.

Method

This was a prospective study conducted at three Indiana University–affiliated hospitals (IU Health University Hospital, Indiana University Health Methodist Hospital, and Sidney and Lois Eskenazi Hospital). The study had two parts: (1) collecting videos of patients undergoing CTPA for suspected PE (from August 2014 to April 2015) and (2) presenting the standardized medical histories and videos to clinicians to determine the impact of patient facial affect on physicians’ estimate of the probability of CPE and desire to order a CTPA (from June to November 2015). This study was approved by the Indiana University School of Medicine Institutional Review Board, and all participating patients and physicians signed an informed consent form. The rationale, detailed methods, and protocol for this study are described in Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A442). Figure 1 presents a flow diagram summarizing the experiment.

Figure 1.


Flow diagram to show chronological order of the experimental methods and measurements, used to study the impact of patient facial affect on physicians’ estimate of the probability that a patient has a serious illness and desire to order a computerized tomographic pulmonary angiogram for that patient, at three Indiana University–affiliated hospitals, August 2014–November 2015.

Patient videos

After an order for a CTPA was entered but before the radiologist interpretation was available, a member of the study team (C.L.H. or J.C.) video recorded patient faces, using a laptop computer (see below), while patients viewed four still photos from the International Affective Picture Set (IAPS) and a humorous 26-second Best of America’s Funniest Home Videos video clip (showing a cat flipping after being taunted by a bird and an excited dog falling in a pool). The purpose of showing them the IAPS and the video clip was to trigger a change in facial affect that would distinguish sick from not sick patients, based on the assumption that not sick patients would have a stronger response (e.g., smile) to these than sick patients, who would be preoccupied with their illness.

Patients were video recorded while in semi-Fowler position, using the camera of an 11.5-inch-screen 2010 MacBook Air running Mac OS X Mavericks, version 10.9.5 (Apple Inc., Cupertino, California), which was preprogrammed to present the visual stimuli. The laptop was positioned approximately 18 inches in front of the patient while they watched the standardized visual stimuli. We obtained video images of 75 patient subjects whose clinical characteristics are detailed in Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A442). Outcomes (CPE+ or CPE–) of these 75 patient subjects were determined by a combination of a structured review of the medical record, supplemented by a telephone call to each patient, as adjudicated by the independent review of three authors (J.A.K., J.C., and S.R.) who were blinded to each other’s opinions and to the patients’ faces. These clinicians used a previously defined explicit definition of CPE+ (including any emergent thoracic diagnoses that are commonly detected on CTPA and require immediate treatment to prevent imminent deterioration, including PE, pneumonia, aortic dissection or aneurysm, pneumothorax, and new thoracic or mediastinal mass with great vessel or airway compromise or cardiac tamponade).1,4–9 The final outcome of CPE+ required the agreement of at least two of the three clinicians.

Patient and clinician participants

Patients were undergoing CTPA scanning for suspected PE as part of standard care; patients participated from August 2014 to April 2015. Residents from any year were eligible, as were fellows and faculty with any number of years of experience; we attempted to obtain an equal distribution of residents, fellows, and faculty with an equal representation of emergency medicine and internal medicine. Physicians participated from June to November 2015.

Assessment of the impact of patient facial affect on clinical suspicion of serious illness

The experiment was conducted as a survey in the REDCap electronic data collection system (Vanderbilt University, Nashville, Tennessee).27 The survey included the Diagnostic Assessment of Nonverbal Accuracy–Adult Faces (DANVA-AF), a standardized test of facial affect recognition,28 followed by standardized medical histories (see above), followed by the videos of patients’ faces as they watched the humorous video clip. We used the footage of patients watching the video clip since automated facial expression reading software, FaceReader, version 5.1 (Noldus, Leesburg, Virginia), indicated this stimulus elicited a greater change in patients’ facial affect than the IAPS. Physicians marked their estimated probability of CPE and desire for a CTPA on a visual analog scale (VAS) both before and after seeing the videos.

The DANVA-AF served as a measure of physicians’ ability to recognize emotions from facial affect. An example of what physicians viewed is found in Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A442).

After completing the DANVA-AF, physicians read the patient’s standardized medical history, which was prepared by consensus of two authors (S.R. and J.C., with oversight from J.A.K.). The content and importance of the medical history is justified by prior evidence showing that both emergency clinicians and novice clinicians rely on medical histories when generating diagnostic hypotheses.29,30 Then, physicians provided a numeric answer (see below) on a VAS in response to two questions. The first question asked, “What is the probability that this patient has a life-threatening disease process (e.g., myocardial infarction, PE, aortic dissection, infection with sepsis, pneumothorax, etc.)?” This is referred to hereafter as the CPE VAS. The second question asked, “What is your certainty that you will order a computerized tomographic (CT) scan of the chest with intravenous (IV) contrast?” This is referred to hereafter as the CT VAS.

Physicians then watched the video of the patient’s face as the patient watched the humorous video clip and answered the same two questions again before moving on to the next patient’s medical history and video.

Within a month of completing the survey, the senior author (J.A.K.) performed a semistructured interview with the clinicians who completed all of the videos. The goal of the interview was to discover technical difficulties, time requirements, and number of sittings, and to use open-ended questions to assess the emotional requirements to complete the survey (e.g., “How difficult was this to accomplish? What was the hardest part?”).

Data analysis and sample size

One primary analysis was the measurement of the clinician marks on two sets of VASs (whole numbers between 0% and 100%), where the first set was completed after viewing only the medical histories and the second set after also viewing the videos of the patients’ facial affect. The other primary analysis was descriptive and illustrated the impact of seeing the patients’ faces on each clinician’s VAS results through the use of waterfall and frequency plots. We used 95% confidence intervals (CIs) for differences to determine significance. The primary goal of these analyses was to describe the proportion of encounters where viewing the patient’s face changed the absolute pretest probability by more than 10%, representing the minimal clinically significant deflection. We chose an absolute change of 10% as a clinically significant marker because prior work has found that the results of CT scanning produce a minimum of 10% change in the belief of the primary diagnosis in emergency department patients with dyspnea.31 Recognizing the possibility of decision-making style differences between genders, we planned to stratify responses according to physician gender, as well as training level and specialty.32,33
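
To make the primary analysis concrete, the following minimal sketch (in Python, with hypothetical column names and values; the authors do not describe the software they used) shows how the net change in a VAS and the proportion of encounters exceeding the 10% threshold could be computed, along with a 95% CI for that proportion.

```python
# Minimal sketch of the primary descriptive analysis; the column names
# and example values are hypothetical, not the study dataset.
import pandas as pd
from statsmodels.stats.proportion import proportion_confint

# One row per physician-patient encounter.
encounters = pd.DataFrame({
    "cpe_vas_before": [45, 60, 30, 55],   # after reading the medical history
    "cpe_vas_after":  [36, 72, 30, 40],   # after also seeing the patient's face
})

delta = encounters["cpe_vas_after"] - encounters["cpe_vas_before"]
significant = delta.abs() > 10            # a priori minimal clinically significant change

proportion = significant.mean()
low, high = proportion_confint(significant.sum(), len(encounters),
                               alpha=0.05, method="wilson")
print(f"> 10-point change in {proportion:.0%} of encounters "
      f"(95% CI {low:.0%} to {high:.0%})")
```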

The sample size was based on prior work with similar patients undergoing CTPA scanning using clinician-entered VAS data, which found that clinicians indicated a 16% (± 15%) change in their degree of certainty that a patient had a life-threatening condition after learning of the formal results of the CTPA scan.34 The sample size was predicated on detecting a mean absolute change of 10% difference (from before seeing a patient’s face to after seeing a patient’s face) with a standard deviation (SD) = 15%, α = 0.05, and β = 0.20. This required an estimated minimum sample size of 20. Because we wanted to compare differences between genders, we estimated that we would need a minimum sample size of more than 40 physicians (approximately 20 males and 20 females) with complete data.
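
As a worked illustration, assuming a standard two-sided paired t-test power calculation with the stated parameters (a plausible reconstruction, not necessarily the authors' exact method), the required minimum sample size comes out near 20:

```python
# Sketch of a paired t-test sample size calculation for the stated design:
# detect a mean change of 10 VAS points with SD = 15, alpha = 0.05, beta = 0.20.
from math import ceil
from statsmodels.stats.power import TTestPower

effect_size = 10 / 15   # Cohen's d_z = expected change / SD of paired differences
n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                             power=0.80, alternative="two-sided")
print(ceil(n))          # ~20 clinicians with complete data
```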

Results

Patient participants

Of the 75 patients who were video recorded, 73 (97%) consented to having their videos used for this study. Of those 73 patients, 11 (15%) were adjudicated as having CPE+, resulting in 550 CPE+ cases (11 patients × 50 physicians), leaving 3,100 CPE− cases (62 patients × 50 physicians). The adjudicated diagnoses for the 11 CPE+ patients included PE (3 [27%]), pneumonia (4 [36%]), septic shock (2 [18%]), myocardial infarction (1 [9%]), and new acute heart failure (1 [9%]).

Clinician participants

We obtained informed consent and DANVA-AF scores from 179/179 (100%) clinicians. We sent over one dozen follow-up e-mails and personal communications to each of the 179 physician participants who consented, to encourage them to complete the entire survey, and we allowed extra time over holidays for resident participants. Even so, attrition from fatigue was rapid and high: 86 (48%) physicians completed at least 10 patient videos, 75 (42%) completed at least 20, 60 (34%) completed at least 40, and 50 (28%) completed all 73 patient videos (see below). Those who rated all 73 patients required on average 3.5 (± 2.2) hours (based on recorded survey start and stop times). Among the original 179 clinicians, 67 (37%) were from emergency medicine, 98 (55%) were from internal medicine, and the remainder (14 [8%]) represented a variety of disciplines. The mean DANVA-AF score for the 179 physicians was 18 (SD = 3), which was lower than the predicted mean of 19 (SD = 3) based on normative populations aged 30 to 40 years. In the follow-up interviews, physician participants consistently described the survey as “tough,” “much harder than I thought,” or “like working a shift,” with several describing it as “grueling.”

Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A442) provides a more detailed comparison of physician completers versus noncompleters. From here forward, the analysis focuses on data from the 50 completers.

Tabular and visual analysis to reveal underlying themes

Tables 1 and 2 present the mean CPE VAS and CT VAS values, respectively, from before seeing the patients’ faces (but after viewing their standardized medical histories) and after seeing the patients’ faces for the 50 physician completers. Among all 50 completers, the mean CPE VAS and CT VAS values before seeing a patient’s face were 45 (SD = 14) and 43 (SD = 16), respectively, and 36 (SD = 13) and 37 (SD = 14) after seeing a patient’s face. Further, the data presented in these tables suggest three main points. First, on average, clinicians rated the 73 patients as having a probability near the middle (50%) both before and after seeing the faces, but the large SDs showed the variability among physicians in either direction. Second, for patients ultimately adjudicated as CPE+, clinicians tended to rate the probability of CPE and the need for a CTPA higher when using only the patients’ medical histories than after seeing the patients’ faces. Third, regardless of CPE outcome, seeing the patient’s face on average lowered clinicians’ CPE VAS and CT VAS values. These three findings were consistent across specialty, training level, and gender. To further illustrate this impact in detail, Figures 2A and 2B depict the median and interquartile range for the 50 physician completers’ ratings of the pretest probability of CPE after reading the standardized medical history but before seeing the video of the patient’s face and after seeing the video, respectively. Supplemental Digital Appendix 2 (at http://links.lww.com/ACADMED/A442) shows the same results for the desire for a CTPA.

Table 1.

Pretest Probability for Suspicion of CPE Prior to and After Seeing the Patient’s Face


Table 2.

Pretest Desire for a CTPA Prior to and After Seeing the Patient’s Face


Figure 2.


Plot of the visual analog scale (0%–100%) ratings for physicians’ estimate of the probability of cardiopulmonary emergency (CPE) after reading the standardized medical history but before seeing the video of the patient’s face (Figure 2A) and after seeing the video of the patient’s face (Figure 2B). Plots show the median (circles) and interquartile range (lines) (shown on the x-axis) ratings from 50 physician completers for each of 73 patients (shown on the y-axis). From an August 2014–November 2015 study at three Indiana University–affiliated hospitals of the impact of patient facial affect on physicians’ estimate of the probability that a patient has a serious illness and desire to order a computerized tomographic pulmonary angiogram for that patient. Abbreviations: PNA indicates pneumonia; MRSA, methicillin-resistant staphylococcal bacteremia; NSTEMI, non-ST-segment elevation myocardial infarction; AHF, acute heart failure; PE, acute pulmonary embolism; HCAP, health care facility acquired pneumonia.

Accuracy of physicians at recognizing sick faces

After seeing videos of CPE+ patients, clinicians increased their VAS ratings in 132/550 (24%) cases and decreased them in 407/550 (74%) cases. For CPE+ patients, the mean net change in CPE VAS (VAS after seeing patient’s face − VAS before seeing patient’s face) decreased only slightly (Δ = −2.8; 95% CI −4.0 to −1.5), as did the CT VAS (Δ = −0.8; 95% CI −2.6 to 1.0). Similarly, for the 3,100 CPE− cases, the mean net change in CPE VAS did not decrease by much (Δ = −2.5; 95% CI −3.5 to −1.4), nor did the CT VAS (Δ = −1.1; 95% CI −2.1 to −0.1). When each physician’s change in CPE VAS was treated as a diagnostic test for the adjudicated outcome of CPE+ or CPE−, the mean area under the receiver operating characteristic curve was 0.55 ± 0.15 (range: 0.32 to 0.85). Taken together, these data show, on average, that inferences about the presence or absence of CPE gleaned by looking at patients’ faces in this study were not significantly better than random assignment at predicting CPE outcome, when compared against the criterion standard from three blinded clinician reviewers who had access to clinical outcomes.
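
The accuracy analysis can be pictured with the following Python sketch, which uses simulated values (an assumed data layout, not the study data) to show how each physician's per-patient change in CPE VAS would be scored against the adjudicated outcome using the area under the receiver operating characteristic curve; with uninformative changes, the mean area hovers near 0.5.

```python
# Sketch: treat each physician's per-patient change in CPE VAS as a
# "diagnostic test" for the adjudicated CPE+/CPE- outcome and compute
# the area under the ROC curve. Data here are simulated, not the study's.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_physicians, n_patients = 50, 73
cpe_positive = np.zeros(n_patients, dtype=bool)
cpe_positive[rng.choice(n_patients, size=11, replace=False)] = True   # 11 CPE+ patients
delta_cpe_vas = rng.normal(-2.5, 10, size=(n_physicians, n_patients)) # uninformative changes

aucs = [roc_auc_score(cpe_positive, delta_cpe_vas[i]) for i in range(n_physicians)]
print(f"mean AUC {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")          # hovers near 0.5
```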

Magnitude and direction of change caused by seeing patient faces

For each of 3,650 case encounters (50 clinicians viewing 73 patients), we computed the net change in the CPE VAS and CT VAS by subtracting the VAS before seeing the patient’s face from the VAS after seeing the patient’s face. Figures 3A and 3B indicate that the faces of two CPE+ patients (both with pneumonia) caused a 15% increase in median VAS scores (true positive deflection for both CPE VAS and CT VAS), whereas the affect of one patient with PE caused an incorrect 15% decrease in median CPE VAS and a 9% decrease in median CT VAS.

Figure 3.


Plot of the net change in visual analog scale (0%–100%) ratings for physicians’ (n = 50) estimate of the probability of cardiopulmonary emergency (CPE) (Figure 3A) and desire for computed tomographic pulmonary angiography (CTPA, Figure 3B) caused by seeing the patient’s (n = 73) face. For Figure 3A, the tallest bar in the middle indicates that for 35 patients, the face evoked zero change in probability estimate. At the far left and right of Figure 3A, seeing the faces evoked a −15% change in three patients and a +15% change in three patients. From an August 2014–November 2015 study at three Indiana University–affiliated hospitals of the impact of patient facial affect on physicians’ estimate of the probability that a patient has a serious illness and desire to order a CTPA for that patient. Abbreviations: PE indicates acute pulmonary embolism; HCAP, health-care-facility-acquired pneumonia; AHF, acute heart failure; PNA, pneumonia; NSTEMI, non-ST-segment elevation myocardial infarction; MRSA, methicillin-resistant staphylococcal bacteremia.

Seeing the patient’s face produced the a priori defined minimal clinically significant change in pretest probability of CPE (> 10% absolute change in VAS, either positive or negative) in 1,204/3,650 (33%) cases and in desire for a CTPA in 1,095/3,650 (30%) cases. The mean absolute change in pretest probability of CPE was 10 (SD = 44) and in desire for a CTPA was 9 (SD = 3). Regarding the direction of change, seeing the patient’s face increased (> 0% net change in VAS) the pretest probability of CPE in 1,277/3,650 (35%) cases and the desire for a CTPA in 1,241/3,650 (34%) cases. Seeing the patient’s face decreased (< 0% net change in VAS) the pretest probability of CPE in 1,679/3,650 (46%) cases and the desire for a CTPA in 1,497/3,650 (41%) cases. In the remaining cases (19% for the CPE VAS and 25% for the CT VAS), the face elicited 0% change.
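
As a small illustration (with hypothetical values, not the study data), each encounter's net VAS change can be classified by direction to produce increase/decrease/no-change proportions of the kind reported above:

```python
# Sketch: classify each encounter's net change in VAS (after - before) as
# an increase, a decrease, or no change. Values are illustrative only.
import numpy as np
import pandas as pd

delta = pd.Series([-15, 0, 4, 0, -2, 12, 0, -11])   # hypothetical net changes per encounter
direction = np.sign(delta).map({-1: "decreased", 0: "no change", 1: "increased"})
print(direction.value_counts(normalize=True).round(2))
```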

Subgroup analyses

We explored whether physicians’ DANVA-AF scores correlated with the impact of seeing the video on physicians’ belief, as shown by changes in CPE VAS and CT VAS, by performing two sets of first-order regressions. The first measured the correlation of the DANVA-AF score with the raw change in VASs; it showed a minimal, negative correlation between the DANVA-AF and the change in CPE VAS (r = −0.23) and CT VAS (r = −0.09). The second showed that the DANVA-AF score likewise correlated minimally and negatively with each clinician’s diagnostic accuracy after seeing the patient’s face, as assessed by the area under the receiver operating characteristic curve (r = −0.25 for CPE VAS and r = −0.10 for CT VAS). Thus, better scores on the DANVA-AF correlated neither with larger changes in pretest probability based on seeing the patient’s face nor with more accurate diagnosis after seeing the patient’s face.
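
For illustration, using hypothetical per-physician values rather than the study data, the correlation analysis reduces to a Pearson r between DANVA-AF scores and a per-physician summary of the VAS change:

```python
# Sketch of the correlation analysis: Pearson r between each physician's
# DANVA-AF score and a per-physician summary of the change in CPE VAS.
# The numbers below are illustrative, not the study data.
import numpy as np
from scipy.stats import pearsonr

danva_af = np.array([14, 16, 17, 18, 19, 20, 21, 23])               # affect recognition scores
delta_cpe_vas = np.array([6.5, 5.8, 4.9, 4.1, 3.0, 2.8, 2.1, 1.5])  # mean VAS change per physician

r, p = pearsonr(danva_af, delta_cpe_vas)
print(f"r = {r:.2f} (p = {p:.3f})")   # a negative r would mirror the reported -0.23
```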

Supplemental Digital Appendix 3 (at http://links.lww.com/ACADMED/A442) provides a visual representation of some of the information from Tables 1 and 2, plotting physician pretest probability of CPE and desire for a CTPA as a function of training level. The CPE VAS data did not show any major differences between specialties in terms of pretest probability of CPE before seeing the patients’ faces or in terms of change to CPE probability after seeing the patients’ faces. However, the data show that for CPE+ cases, residents had a substantially (11-point) higher pretest probability of CPE before seeing the patients’ faces, based only on the standardized medical histories, than did faculty (55 vs. 44, Δ = 11 [95% CI 4.4–14.2]), and their estimates changed less after seeing the patients’ faces, widening the gap (54 vs. 40, Δ = 14 [95% CI 6.0–17.3]). This is consistent with findings from Schubert and colleagues,30 which showed that residents tend to overrely on the case history to generate diagnostic hypotheses. Prior work has suggested that women have higher experiential scores on psychometric testing of decision-making style, suggesting more reliance on a subjective decision process such as affect interpretation.32 However, in Tables 1 and 2 and in plots not shown, our data did not show any clear difference between male and female physicians in the impact of the face on pretest probability of CPE or desire for a CTPA. The data presented in Table 2 suggest that emergency medicine physicians generally had a higher pretest desire for a CTPA than internal medicine physicians, that residents had a much higher pretest desire for a CTPA than faculty, and that seeing the patients’ faces had no significant overall impact on the mean value for the desire for a CTPA.

Discussion

We believe this to be the first quantitative measurement of the impact of patient facial affect on clinical decision making. Our data show that under these experimental conditions, looking at the patients’ faces deflected clinician belief in the presence of CPE by a clinically important absolute change (> 10%) in one-third of estimates, a magnitude similar to that evoked by learning the results of CT scanning. Similarly, after seeing the patient’s face, clinicians marked a > 10% change in their desire for CTPA scanning in 30% of patients. These findings show that visual exposure to patients’ faces significantly modified physician belief about the presence of CPE and the desire for a CTPA. However, when compared against the criterion standard from three blinded clinician reviewers who had access to clinical outcomes, clinicians were not much better than random assignment at interpreting the significance of patient faces as a predictor of outcome. For example, when clinicians viewed patient faces after reading the standard medical history, they lowered their estimate of disease probability equally in about one-third of cases for both sick and not sick patients. Clinician accuracy at using patient faces as a diagnostic tool was highly variable, ranging from an area under the receiver operating characteristic curve of 0.32 (worse than random) to 0.85 (better than random). Additionally, overall, we found clinicians were slightly below population norms for ability to recognize facial affect on the standardized DANVA-AF. Taken together, our findings raise the possibility that patient affect may contribute to physician cognitive error by often deflecting clinical suspicion wrongly. This is particularly concerning because our previous work suggests that patient affect variability, assessed using the structured facial action coding system of Ekman and colleagues,35 was blunted in sicker patients.25

Implications of this work are that sick patients can project a paradoxical facial affect that can mislead clinicians. Relevant to this point, we previously studied clinician perception of smiles among patients undergoing CTPA scanning; surprisingly, patients with PE were more likely to smile than patients without PE, and clinicians were more likely to indicate that patients with PE who smiled had an alternative diagnosis to PE, which was associated with a falsely low Wells clinical probability score for PE.26 The two CPE+ patients in the current study who appeared to produce false-negative decreases in physicians’ CPE VAS and CT VAS ratings both smiled during the video.

In addition to revealing that patient facial affect can influence physician belief patterns, at least in an experimental setting, we also show wide variability in physician accuracy at interpreting faces. We hypothesized that this ability might be correlated with performance on the DANVA-AF, but if anything, we found a negative linear correlation between DANVA-AF scores and the area under the receiver operating characteristic curve after seeing patients’ faces (r = −0.25 for CPE VAS). Explanations for this finding include that recognizing a sick face from a dynamic video, or in real practice, is probably different from recognizing emotional expressions from static faces (as with the DANVA-AF), and that the DANVA-AF does not evaluate recognition of disgust, an emotion commonly reported in sick patients.36 Additionally, human factors may explain variable clinician performance, including clinicians’ level of attention, interest, and mood when taking the survey. For example, variable physician attention to the patient’s face could influence medical decision making as well as the socioemotional interplay between the patient and physician, including perceptions of trust, empathy, and compassion.37

A primary limitation of this work was that the clinician interpretations of patient faces did not occur in the clinical environment. This study did not examine other aspects of patient affect, such as eye contact, or other aspects of assessing illness acuity, such as the appearance of respiratory distress. We chose a humorous video in part because we believe many clinicians do try to use humor to connect with patients. Moreover, the patient videos showed the patients watching a clip intended to elicit a positive emotional response, with the expectation that patients who were not sick would have a stronger positive emotional response to the video than sicker patients. However, some sick patients did have a positive emotional response, and consequently the stimulus may have briefly masked the patients’ overall affective mood, which might have been more informative if observed naturally. It is important to note that expert intuition about whether a patient is sick or not sick comprises many elements, including physician analysis of other nonverbal cues in addition to facial affect. We did provide each clinician with some context, in the form of a standardized medical history, which represented some clinical facts about each patient. Furthermore, the VAS values evoked by these medical histories show that our patient sample represented an unhealthy, medically complicated population; that is, the overall mean CPE VAS and CT VAS values before seeing a patient’s face were 45 (SD = 14) and 43 (SD = 16), respectively, out of a maximum of 100%. Thus, our 73 patients represent a relatively undifferentiated group, at intermediate risk for CPE, making them diagnostically difficult. Had their medical histories indicated CPE VAS and CT VAS values before seeing a patient’s face that were at the extremes (near 0% or 100%), then the patient videos may have more reliably deflected the probability downward or upward.

Another factor to consider was that survey start and stop times and physician participants’ interviews indicated that reading 73 medical histories and watching 73 patient face videos was far more difficult, time consuming, and emotionally taxing than we had anticipated. This may explain the high noncompletion rate, which probably introduced selection bias toward physicians more comfortable with interpreting patient faces. Additionally, the DANVA-AF, a standardized test of facial affect recognition, comprised static photos that did not include disgust as a facial expression, which may have limited the ecological validity and relevance of this test with respect to the actual task of recognizing the emotions of sick patients. Lastly, in terms of determining accuracy, the criterion standard for CPE+ depended on the opinion of three clinician reviewers who had access to comprehensive outcomes of all patients; it is possible that other adjudicators would have had different opinions.

We found that clinicians might use patients’ faces to make clinically important inferences about the presence of serious illness and the need for diagnostic testing. However, these inferences may fail to align with actual patient outcomes. Our findings suggest that clinical educators should acknowledge the role of nonverbal stimuli in physician decision making and determine their role in cognitive error.

Acknowledgments: The work presented in this report was done at Indiana University School of Medicine.

Supplementary Material

acm-92-1607-s001.pdf (433.6KB, pdf)

Footnotes

Funding/Support: This work was supported by a Lilly Endowment Physician Scientist Initiative award to J.A. Kline.

Other disclosures: None reported.

Ethical approval: This work was approved by the Indiana University School of Medicine Institutional Review Board on August 1, 2014 (protocol #1208009246), and April 18, 2017 (protocol # 1503163622).

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A442.

References

1. Kline JA, Shapiro NI, Jones AE, et al. Outcomes and radiation exposure of emergency department patients with chest pain and shortness of breath and ultralow pretest probability: A multicenter study. Ann Emerg Med. 2014;63:281–288.
2. Schuur JD, Carney DP, Lyn ET, et al. A top-five list for emergency medicine: A pilot project to improve the value of emergency care. JAMA Intern Med. 2014;174:509–515.
3. Robin ED. Overdiagnosis and overtreatment of pulmonary embolism: The emperor may have no clothes. Ann Intern Med. 1977;87:775–781.
4. Coche EE, Müller NL, Kim KI, Wiggs BR, Mayo JR. Acute pulmonary embolism: Ancillary findings at spiral CT. Radiology. 1998;207:753–758.
5. Richman PB, Courtney DM, Friese J, et al. Prevalence and significance of nonthromboembolic findings on chest computed tomography angiography performed to rule out pulmonary embolism: A multicenter study of 1,025 emergency department patients. Acad Emerg Med. 2004;11:642–647.
6. Kline JA, Hogg MM, Courtney DM, Miller CD, Jones AE, Smithline HA. D-dimer threshold increase with pretest probability unlikely for pulmonary embolism to decrease unnecessary computerized tomographic pulmonary angiography. J Thromb Haemost. 2012;10:572–581.
7. Hall WB, Truitt SG, Scheunemann LP, et al. The prevalence of clinically relevant incidental findings on chest computed tomographic angiograms ordered to diagnose pulmonary embolism. Arch Intern Med. 2009;169:1961–1965.
8. Self WH, Courtney DM, McNaughton CD, Wunderink RG, Kline JA. High discordance of chest x-ray and computed tomography for detection of pulmonary opacities in ED patients: Implications for diagnosing pneumonia. Am J Emerg Med. 2013;31:401–405.
9. van Strijen MJ, Bloem JL, de Monyé W, et al.; Antelope-Study Group. Helical computed tomography and alternative diagnosis in patients with excluded pulmonary embolism. J Thromb Haemost. 2005;3:2449–2456.
10. Kline JA, Kabrhel C. Emergency evaluation for pulmonary embolism, part 1: Clinical factors that increase risk. J Emerg Med. 2015;48:771–780.
11. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics—2015 update: A report from the American Heart Association. Circulation. 2015;131:e29–e322.
12. Feng LB, Pines JM, Yusuf HR, Grosse SD. U.S. trends in computed tomography use and diagnoses in emergency department visits by patients with symptoms suggestive of pulmonary embolism, 2001–2009. Acad Emerg Med. 2013;20:1033–1040.
13. Mitchell AM, Kline JA, Jones AE, Tumlin JA. Major adverse events one year after acute kidney injury after contrast-enhanced computed tomography. Ann Emerg Med. 2015;66:267–274.e4.
14. Adams DM, Stevens SM, Woller SC, et al. Adherence to PIOPED II investigators’ recommendations for computed tomography pulmonary angiography. Am J Med. 2013;126:36–42.
15. Schissler AJ, Rozenshtein A, Kulon ME, et al. CT pulmonary angiography: Increasingly diagnosing less severe pulmonary emboli. PLoS One. 2013;8:e65669.
16. Studdert DM, Mello MM, Sage WM, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA. 2005;293:2609–2617.
17. Lucas FL, Sirovich BE, Gallagher PM, Siewers AE, Wennberg DE. Variation in cardiologists’ propensity to test and treat: Is it associated with regional variation in utilization? Circ Cardiovasc Qual Outcomes. 2010;3:253–260.
18. Rothberg MB, Class J, Bishop TF, Friderici J, Kleppel R, Lindenauer PK. The cost of defensive medicine on 3 hospital medicine services. JAMA Intern Med. 2014;174:1867–1868.
19. Lucassen W, Geersing GJ, Erkens PM, et al. Clinical decision rules for excluding pulmonary embolism: A meta-analysis. Ann Intern Med. 2011;155:448–460.
20. Singh B, Mommer SK, Erwin PJ, Mascarenhas SS, Parsaik AK. Pulmonary embolism rule-out criteria (PERC) in pulmonary embolism—Revisited: A systematic review and meta-analysis. Emerg Med J. 2013;30:701–706.
21. Norman G, Sherbino J, Dore K, et al. The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Acad Med. 2014;89:277–284.
22. Croskerry P, Petrie DA, Reilly JB, Tait G. Deciding about fast and slow decisions. Acad Med. 2014;89:197–200.
23. Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84:1022–1028.
24. Calder LA, Arnason T, Vaillancourt C, Perry JJ, Stiell IG, Forster AJ. How do emergency physicians make discharge decisions? Emerg Med J. 2015;32:9–14.
25. Kline JA, Neumann D, Haug MA, Kammer DJ, Krabill VA. Decreased facial expression variability in patients with serious cardiopulmonary disease in the emergency care setting. Emerg Med J. 2015;32:3–8.
26. Kline JA, Neumann D, Hall CL, Capito J. Role of physician perception of patient smile on pretest probability assessment for acute pulmonary embolism. Emerg Med J. 2017;34:82–88.
27. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–381.
28. Nowicki S, Duke MP. Individual differences in the nonverbal communication of affect: The diagnostic analysis of nonverbal accuracy scale. J Nonverbal Behav. 1994;18:9–35.
29. Pelaccia T, Tardif J, Triby E, et al. How and when do expert emergency physicians generate and evaluate diagnostic hypotheses? A qualitative study using head-mounted video cued-recall interviews. Ann Emerg Med. 2014;64:575–585.
30. Schubert CC, Denmark TK, Crandall B, Grome A, Pappas J. Characterizing novice–expert differences in macrocognition: An exploratory study of cognitive work in the emergency department. Ann Emerg Med. 2013;61:96–109.
31. Pandharipande PV, Reisner AT, Binder WD, et al. CT in the emergency department: A real-time study of changes in physician decision making. Radiology. 2016;278:812–821.
32. Calder LA, Forster AJ, Stiell IG, et al. Experiential and rational decision making: A survey to determine how emergency physicians make clinical decisions. Emerg Med J. 2012;29:811–816.
33. Kabrhel C, Camargo CA Jr, Goldhaber SZ. Clinical gestalt and the diagnosis of pulmonary embolism: Does experience matter? Chest. 2005;127:1627–1630.
34. Kline JA, Stubblefield WB. Clinician gestalt estimate of pretest probability for acute coronary syndrome and pulmonary embolism in patients with chest pain and dyspnea. Ann Emerg Med. 2014;63:275–280.
35. Ekman P, Friesen WV, Hager JC. Facial Action Coding System: An Ebook for PDF Readers. Douglas, AZ: A Human Face; 2002.
36. Widen SC, Pochedly JT, Pieloch K, Russell JA. Introducing the sick face. Motiv Emot. 2013;37:550–557.
37. Henry SG, Fuhrel-Forbis A, Rogers MA, Eggly S. Association between nonverbal communication during clinical interactions and outcomes: A systematic review and meta-analysis. Patient Educ Couns. 2012;86:297–315.

