The Doctor Will See You, And Stop Judging You, Now

How do you stop implicit bias from getting in the way of better health? This doctor wants to make learning how to manage bias as important as learning how to suture.

This podcast is part of “Health Equity Heroes,” an editorially independent special report that was produced with financial support from Takeda Pharmaceuticals.

Have you ever gone to the doctor’s office and felt like they were judging you, maybe even before you opened your mouth? Unfortunately, that’s probably a pretty common feeling, and not because doctors are trying to be jerks. Like all humans, doctors have unconscious biases that can lead them to make unfair judgment calls. But those biases can pose a serious risk to people’s health.

Rachel Feltman: For Scientific American’s Science Quickly, I’m Rachel Feltman, and I’m joined today by Dr. Cristina Gonzalez. She’s a professor of medicine and population health and associate director at the Institute for Excellence in Health Equity at NYU Grossman School of Medicine.


Cristina—can I call you Cristina?

Cristina Gonzalez: Yes. Thank you for asking.

Feltman: Thank you so much for joining us today.

Gonzalez: Thank you for having me.

Feltman: Let’s start with a basic question: What is implicit bias?

Gonzalez: So implicit bias refers to the unconscious and unintentional mental associations that we make about others, often along lines of personal identity factors like race or religion or gender. But they’re unconscious and unintentional.

Feltman: How much does that tend to come up in a clinical setting, and why does it matter?

Gonzalez: It’s more likely to come up when we’re pressed for time; when we’re fatigued; probably when we’re hungry, although that doesn’t exist in the literature, to my knowledge, anyway; when we don’t really know the person in front of us very well and we may have incomplete data—honestly kind of every day at work, right, at different times of the day, different time pressures, etcetera.

And the reason why it matters is because it can actually influence our communication behaviors with patients. And so I wanna be clear that implicit bias isn’t, like, a moral indictment. It’s a coincidence of our lived experiences, how our unconscious mental associations go, but because it can influence our behaviors, we wanna—we work on that in, in our lab.

Feltman: What do we know about how implicit bias in a clinical setting can impact patients?

Gonzalez: How do you know when implicit bias has impacted the encounter, right? I know, and others know, because the vibe has changed, the nonverbals have changed, the patient may get a little more curt or shorter in their answers, you know?

And we do this in real life. And I keep getting people who say, “How do we, how do we teach that?”

Feltman: Mm.

Gonzalez: And, and I keep consulting other people. I’m like, “How do you teach it? Like, I don’t know.” [Laughs] And so I think we could talk about that as a challenge in case anybody writes in and gives us the answer [laughs]. I would love it so much.

Feltman: Yeah, maybe, maybe there’s, like, a, a body language coach out there ...

Gonzalez: Yes!

Feltman: An acting or movement coach who thinks about micro facial expressions. Like, somebody’s, somebody’s gotta be able to help with that.

Gonzalez: Yes—yes, yes, yes and yes.

If we stay focused on communication behaviors and communication skills, there’s a concept called verbal dominance, meaning that if we have a 15-minute encounter and we’re gonna center the conversation on racial bias, then if you have higher implicit racial bias, more pro-white as a coincidence of your lived experience, you’re likely to talk more in those 15 minutes when you’re seeing a Black patient compared to a white patient.

When you’re talking more, that means they’re talking less.

Feltman: Right.

Gonzalez: That means we’re likely to be asking their opinion less; we’re likely to be doing less shared decision-making, meaning getting their input on what the treatment plan is and whether it’s acceptable to them; asking less often if they have questions. Patients perceive less patient-centeredness. They perceive, in essence, a colder affect or vibe, if you will, in the encounter. And, and we also end up using more words that relay anxiety.

It’s the way we’re socialized and our unconscious ...

Feltman: Sure.

Gonzalez: Mental associations. So it’s just humans.

Feltman: Yeah. So let’s talk about what you’re doing at your lab. What interventions have you been working on, and what’s been working?

Gonzalez: So we’re interventionalists, right? But instead of needles or devices or pills, we use education. So we recognize when implicit bias may have impacted the patient encounter, right? And then we teach people skills to be able to manage that negative influence, negative impact—partner with the patient and then restore rapport, you know, discuss ways of moving forward, etcetera—to be able to have the positive outcomes we wanted in the first place. And so a lot of what we talk about is basic “humaning,” to be quite honest. And so—but people get nervous when it’s something about race or religion or gender or sexual orientation, and people worry.

Feltman: Mm.

Gonzalez: So if I can take a step back and explain: we weren’t the first to study patient perceptions of bias and discrimination in their encounters, but to our knowledge in the literature, our lab was actually the first to study it and then stop and say, “Okay, great. Not great that it’s happening, but great that we’re talking about it, that you’re talking about it.” And then, “What should I teach myself and the medical students?” ’Cause at the time medical students were my audience. And they, they said, “Just apologize.” And I was like, “I’m sorry, what?” [Laughs] They were like, “Just apologize.” And I was like, “Oh—I can teach that! I can do that,” right?

And so it, it, of course, evolved from there, but we did a, a focus group study with Black and Hispanic patients in English and Spanish across New York City, across the socioeconomic spectrum, and over and over again they were so generous with us. They said, “We know you’re human; we don’t need the encounter to be perfect, with no bias, etcetera, but once it happens we can’t have it ignored,” ’cause that second insult was a real attack on their, like, their, their core, their dignity.

And sometimes they may also perceive bias, and we may, too, when we’re patients, when actually it’s a routine question ...

Feltman: Sure.

Gonzalez: But based on our lived experiences, right—being followed around in stores, being accused of wrongdoing, being questioned about our identity, etcetera—we may take it as bias. And that can be hard as a physician or a medical student, being like, “I’m supposed to ask you this.”

And so we teach skills to step back and depersonalize it so that you can, again, partner, apologize if necessary or explain why and be able to move forward together. But it’s the ignoring part that was really, really hard for patients.

Feltman: Sure. Well, and I think a, a lot of folks who are familiar with implicit bias, you know, thanks to a great surge in research over the last few years, they might think that most of the battle is just becoming aware that we all have implicit bias because that’s really revelatory and difficult for a lot of people ...

Gonzalez: Sure.

Feltman: But it sounds like that is very much just the first step in actually mitigating it. Would you say that’s right?

Gonzalez: Absolutely. And so there’s been a lot of talk in the literature, and in the lay press as well, about awareness being enough, right? Because if we become aware, then our good intentions will prevail. If it were that easy, I would happily be studying something else, right?

And so I think that, for us, it’s becoming aware, and then—but it’s not fair to have people be aware and then say, “Go be better,” you know, because, of course, they were trying to be their best at the very beginning. And so that’s where the skill comes in.

Feltman: But what happens when someone is resistant to the idea that this is something they need training for?

Gonzalez: For implicit bias, I think that you have to have safe places to fail. In the educational literature that’s called a critical incident, and in a critical incident you may practice something and then have that internal aha! moment, that reaction you don’t like when your actions don’t match your values. Internal aha! moments happen in private: if you can have that moment privately, right, maybe with an interactive computer case, for example, then you’re more likely to change your actions to match your values. If you have it publicly, you’re actually at risk of changing your values to match your actions ’cause you feel the need to save face.

Feltman: How do you go about measuring implicit bias in, in a clinical setting? Is there a way for you to see how much work there is to do?

Gonzalez: There are tests, like the Implicit Association Test; it’s a free and publicly available test. There are other tests, and they’re what are called latency response tests, meaning they see how quickly you react to two opposite concepts—joy, evil; male, female; just thinking about possibilities—down to the millisecond. And that is supposed to measure your implicit bias.

They’re interesting, and I don’t use ’em diagnostically. They tend to mirror what we would expect in terms of the way we’re socialized. So in the clinical setting, as you were saying, how would you measure it? We made high-fidelity, meaning very realistic, simulations that have the types of stressors you would have in the clinic, so interruptions, a very pleasant but somewhat meandering patient—because they’re not a professional historian, okay ...

Feltman: Yeah.

Gonzalez: They’re telling the story the way it happened to them—and various things in, in the clinic that you would see, in a simulated encounter.

Physicians didn’t know why they were going into it. Standardized patients, I hope, are not watching because they were blinded to [laughs] the purpose of our study and were phenomenal. And so nobody knew what was going on. And so we were actually able to measure the physicians’ racial implicit bias on that Race Implicit Association Test at the end, after all the behaviors had already happened, and correlate it with communication behaviors.

And so you’ll see it, again, in patient education, establishing rapport, eliciting all of a patient’s concerns, active listening—like, “How’re you doing right now?”—implicit bias actually makes a difference in those types of behaviors. And also in things like interpersonal distance.

And so that’s something that we can actually monitor. I always tell people, like, you know, “Take—put a, put a coffee mug, and make sure you don’t go behind it.” The patient’s not gonna know why there’s a coffee mug. They were always drinking coffee, right [laughs]?

Feltman: Yeah.

Gonzalez: But, but that’s a nice little cue for yourself of, like, “Oh, let me lean in a little bit and make sure that I’m looking at the person, connecting, making that eye contact, etcetera.” And so it’s in—honestly general communication skills, but it’s different with different patients.

And so we video-record them and then are able to analyze and find the inflection points of where to teach to be our best selves.

Feltman: What does that teaching and learning process look like for medical professionals?

Gonzalez: Some people like to call communication skills “soft skills,” but they’re not—I mean, I love warm and fuzzy, and so—but they’re not, right, ’cause that implies that other skills are harder, whereas it’s, it’s very difficult to communicate effectively. And so we call them critical humanism skills.

Last week, actually, with some first-year medical students—they were so good—we practiced role-plays ...

Feltman: Mm-hmm.

Gonzalez: Where a student had the role of the physician and another student had the role of the patient, and when you had the role of the patient, of course, you’re not gonna be a 19-year-old Black man or a 22-year-old, you know, young woman who identifies as a lesbian if you’re not, but we practice perspective-believing, so they don’t see each other’s scenarios or instructions. And I ask them to act very well, [as] if you’re trying to win a Tony Award.

And so they act out the role-plays. And in the—in one of the scenarios, the, the physician asks a routine question, but based on the patient’s lived experience, they perceive it as bias. It’s, you know, a “Do you smoke cigarettes?” kind of question to the 19-year-old young man who hurt his arm playing tennis, right? And they practice what we call verbal procedures, which are verbatim statements of actually apologizing, or acknowledging if you’re not comfortable apologizing, and restoring that rapport.

Feltman: Mm.

Gonzalez: They say it to the other student word for word, the other student gives feedback, and then they practice it again. And then they articulate it verbatim again, and then they decide on a final version. But that do-over has been called a “gift.”

The other one, if I may, is a case where it’s a young woman—this actually happened; it’s de-identified, but it actually happened to a young woman—she is on oral contraceptives for endometriosis, which is a painful condition in your uterus. Long story short: low-dose, in essence, birth control pills will help the symptoms.

Feltman: Mm-hmm.

Gonzalez: And we’re helping her, and she went to a new doctor and asked for this, and they said, “Well, this is better at preventing pregnancy.” And she’s like, “No, I don’t really need it for preventing pregnancy.” And [they’re] like, “Well, no, this is really better.” And she’s like, “No, I only have sex with women.” And the physician’s like, “Well, it’s New York; you never know.”

Feltman: [Inhales deeply]

Gonzalez: And, and I’m sure the physician wasn’t a terrible person. I’m sure she was flustered or who knows, right ...

Feltman: Right.

Gonzalez: But, but that—it’s not to vilify the physician at all ...

Feltman: No, no ...

Gonzalez: But, but it gives—and I understand your reaction ’cause that’s how the patient felt.

Feltman: Yeah.

Gonzalez: Right? She’s like, “Well, no, I actually, I actually do know.” [Laughs] And so, so I—so we don’t make, make the students make the actual biased statement, right, the assumption, but when they role-play it, you can always tell when they get to that part ’cause someone’s always like, “[Gasps] I am so sorry I made the assumption.” And, and they are able to practice in a safe space.

But I was talking to a surgeon, and she was making the analogy of building muscle memory. So if you’re like, “What would you say?” You’d be like, “Oh, I’d, I’d apologize.” “No, no—what would you actually say?” And people are always like, “Hmm.”

And so you build the muscle memory so that you’ve got those in your clinical skills toolbox, and you’ve already practiced restoring rapport or apologizing when you actually make an assumption, right—either way. And then you’re ready for when—you’re more ready than you would have been when it happens in real life.

Feltman: In what ways does our medical system or, you know, the current establishment fuel implicit bias? What needs to change there?

Gonzalez: Huh.

Feltman: I know that’s kind of a big question [laughs].

Gonzalez: I, I was gonna say, “How much time do we have?” No, but, but—well, so, again, it’s not intentional, right? No one’s in a back room being like, “Heh, heh, heh, we hurt 10 patients today,” right? So I get that. But it—there’s a couple of things. There’s education.

Feltman: Mm-hmm.

Gonzalez: So we end up pathologizing race or pathologizing sexual orientation, meaning, meaning that, like, we’ll say, you know, “Race is—Black race is a risk factor for high blood pressure.” Actually, no, it’s not. Racism is, but race is not. Or we’ll do things like whenever there’s a sexually transmitted infection type of case, it’ll be, you know, a, a gay man. And, and we make these stereotyping cases, which I don’t think anyone’s doing intentionally, but you have to intentionally not do it and make sure that, you know, we put different humans in, in all kinds of different cases when people are learning.

Feltman: Yeah.

Gonzalez: So that we don’t reinforce stereotypes. And—because that’ll actually limit our diagnostic abilities later.

The time pressures—appointments are, like, 15 minutes long, sometimes 10 minutes long. The way that people are forced to churn, for lack of a better—terrible word, sorry—through patients because of systemic pressures, right—and it’s not even like your hospital that’s saying that; it’s really just the way the whole system is—makes you have to have these snap judgments.

You know, we were just at a National Academy of Medicine diagnostic equity meeting, and we were talking about, “How can we make it—a change so that we have more time?” And that way—I think that’d be more cost-effective, right, because you’d be able to actually have time with your patient.

And the last thing I’ll say is that the burnout is, is real, and so it sets us up to make snap judgments because efficiency and what’s called throughput, right, getting patients through, are the pressures under which we’re working.

Feltman: Yeah. You know, beyond the obvious individual risks to a patient, how is this impacting health care?

Gonzalez: Think about how implicit bias influences communication skills. And then think about how the patient experience, right, patient satisfaction scores, for example, have an influence on reimbursement rates. And really what we want is to help patients have a better experience, right? But if we can get that system-wide driver to say, “Ooh, and it’ll help your reimbursement,” maybe we could get some buy-in.

On misdiagnosis, and thinking about making assumptions about patients, not believing patients: there’s a concept of testimonial injustice, meaning that you’re less likely to be believed for what you’re actually saying if you’re, you know, a woman, if you’re of a minoritized race or ethnic background. So if you don’t believe the patient or you kind of dismiss them—there’s lots of literature on this for women, right—and you end up with a delayed diagnosis, all those costs to the person and to the system could be avoided.

Addressing implicit bias isn’t—I’m not trying to make it a—paint it as a cure-all, right ...

Feltman: Sure.

Gonzalez: Health equity and health disparities are what’s called a wicked problem, right? It’s very complex. So I think it is an important tool to help us take excellent care of all of our patients and achieve health equity.

Feltman: And in an ideal world, you know, five, 10 years from now ...

Gonzalez: I’m gonna put on my ideal word—world hat. I’m ready.

Feltman: How do you want medical schools and hospitals to be thinking about implicit bias?

Gonzalez: So I want this to be just like anything else. When students and, and residents get the wrong diagnosis, we help them get the right diagnosis. Even when we get the wrong diagnosis, we work on getting the right diagnosis. I want this to be just like any other clinical skill ... that we could have what we have now, which is analogous to what I call the “I got a guy” consult (I should make it more gender-inclusive, but that’s what we call it). And you’re sitting there, and everybody’s really busy, and you’re all on the computer after you’ve seen your, your patients and writing your notes, and you’re like, “Hey, I got a guy.” And the least busy people roll their chairs over, and you run the case by them. And I love when they roll their chairs to me, and I love rolling my chair to them.

I really wish that one day we could have, you know, bias buddies and these “I got a guy” consults—and, again, forgive the, the term—but the “I got a guy” consults could also be about: “I was talking with someone today, and I think maybe I said a biased thing or they maybe perceived a biased thing.” And I won’t worry that you’re—you think I’m racist, just like I never worry you think I’m a bad physician when I do my “I got a guy” consult.

Feltman: Yeah, absolutely.

Gonzalez: That would be great [laughs].

Feltman: Cristina, thank you so much for coming in to chat today ...

Gonzalez: Thank you as well.

Feltman: And for the extremely important work that you do.

Gonzalez: Thank you so much for having me. It’s been an honor. Thank you.

Feltman: That’s all for today’s episode. But if you want to hear more of my conversation with Cristina, you’re in luck: a longer version of our chat is available on Scientific American’s YouTube channel. You’ll find the link to that video in our show notes.

Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Madison Goldberg and Jeff DelViscio. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

For Scientific American, this is Rachel Feltman. See you next time!

Rachel Feltman is former executive editor of Popular Science and forever host of the podcast The Weirdest Thing I Learned This Week. She previously founded the blog Speaking of Science for the Washington Post.


Fonda Mwangi is a multimedia editor at Scientific American. She previously worked as an audio producer at Axios, The Recount and WTOP News. She holds a master’s degree in journalism and public affairs from American University in Washington, D.C.


Jeff DelViscio is currently chief multimedia editor/executive producer at Scientific American. He is former director of multimedia at STAT, where he oversaw all visual, audio and interactive journalism. Before that, he spent more than eight years at the New York Times, where he worked on five different desks across the paper. He holds dual master's degrees from Columbia University in journalism and in earth and environmental sciences. He has worked aboard oceanographic research vessels and tracked money and politics in science from Washington, D.C. He was a Knight Science Journalism Fellow at the Massachusetts Institute of Technology in 2018. His work has won numerous awards, including two News and Documentary Emmy Awards.
