What if AI could help solve many of medicine’s biggest challenges, from doctor burnout to diagnostic errors to healthcare access? Meet Dr. Charlotte Blease, a pioneering researcher at Harvard Medical School and author of Dr Bot: Why Doctors Can Fail and How AI Could Save Lives. She shares fascinating insights about why our current medical system struggles to serve both patients and providers, and presents a compelling vision for how artificial intelligence could transform healthcare for the better.
In this illuminating conversation, you’ll learn:
• Why doctors can only keep up with 2% of new medical research (and what that means for your care)
• The surprising reason patients often withhold critical health information from their doctors
• How AI is already outperforming human doctors at diagnosing certain conditions
• What the future of medicine might look like when AI and human providers work together
• The critical ethical questions we need to address as AI becomes more integrated into healthcare
Whether you’re a healthcare provider, patient, or someone interested in how technology is reshaping medicine, this conversation offers a fascinating look at one of the most important developments in modern healthcare. Dr. Blease brings deep expertise and a nuanced perspective to help us understand both the incredible potential and the important challenges as we navigate this transformation in medical care.
This episode is part of our special Future of Medicine series, exploring the innovations and breakthroughs changing healthcare as we know it. Tune in every Monday through December for more conversations with pioneering researchers and thought leaders at the forefront of medical innovation.
You can find Charlotte at: Dr Bot Substack | Website | Episode Transcript
If you LOVED this episode, don’t miss a single conversation in our Future of Medicine series, airing every Monday through December. Follow Good Life Project wherever you listen to podcasts to catch them all.
Check out our offerings & partners:
- Join My New Writing Project: Awake at the Wheel
- Visit Our Sponsor Page For Great Resources & Discount Codes
_____________________________________________________________________________________________________
Episode Transcript:
Jonathan Fields: [00:00:00] Hey there. Every Monday in November and December, we’ll be featuring our Future of Medicine series, where we’ll be spotlighting groundbreaking researchers, cutting edge treatments, and diagnostic innovations for everything from heart disease, cancer, brain health, metabolic dysfunction, aging and pain, and also sharing breakthroughs in areas like regenerative medicine, medical technology, AI and beyond. It’s a brave new world in medicine, with so many new innovations here now and so much coming in the next 5 to 10 years. And we’re going to introduce you to the people, players and world changing discoveries that are changing the face of medicine today and beyond in this powerful two-month Future of Medicine series. So be sure to tune in every Monday through the end of the year and follow Good Life Project to be sure you don’t miss an episode.
[00:00:48] And today we’re bringing you a conversation about AI and medicine that might forever change how you think about your relationship with doctors, medicine, and the future of staying healthy. I mean, what if the key to healthcare wasn’t about creating new drugs or surgical techniques, but about transforming the entire system through artificial intelligence? Picture a world where your doctor never misses a crucial detail in your medical history, where diagnosis becomes dramatically more accurate, and where care becomes more personal, not less, because of AI. These aren’t sci-fi dreams. They’re happening right now. My guest today, Dr. Charlotte Blease, stands at the intersection of healthcare technology and human behavior as an associate professor at Uppsala University and a researcher at Harvard Medical School’s digital psychiatry department. She’s spent decades studying how we can make healthcare better for everyone. Her new book, Dr Bot: Why Doctors Can Fail and How AI Could Save Lives, offers a powerful window into how artificial intelligence might transform modern medicine. One of the things that really surprised me in our conversation was learning that doctors can only keep up with 2% of new medical research. I mean, think about that. It means 98% of new medical knowledge isn’t making it into your doctor’s office. But that’s just the beginning. We explore why patients often hide critical information from their doctors, and how AI might help create a healthcare system that really speaks to all of these things, and serves both patients and providers so much better. So excited to share this conversation with you. I’m Jonathan Fields and this is Good Life Project.
Jonathan Fields: [00:02:30] We’re having this conversation at a time when you’re in Ireland and I’m in the US, but all over, I think a lot of individuals, a lot of countries, are really re-examining the way that their medical system is functioning. What’s working, what’s not working? Who is it serving? Who is it not serving? You have a very strong point of view about the state of medicine these days, and how in many ways it’s actually failing patients. Take me into this.
Charlotte Blease: [00:02:57] I do want to begin by saying medicine is a great human success story, to some extent. One of the things I say in the book is it’s a victim of its own success, because we’re living longer, we’re living with more chronic ailments, and that all puts pressure on healthcare systems. So, as you said, even in wealthy countries throughout the world, we’re seeing that our systems are creaking, there are delays in getting medical attention, and our clinicians are burnt out. We also don’t have enough clinicians. So against that backdrop, doctors are overworked and burnt out, and we know that that’s a petri dish for creating errors as well. So there are real challenges in giving patients the kind of accessible care that they need. One thing I do in the book, because I don’t want the book to advocate for less funding in healthcare, I think that’s been a problem in many health systems, is say at the outset: imagine a health system that is the most lavishly funded. Patients could get access whenever they wanted it, but they’d still have to access it in traditional ways. And I would still say there are going to be many problems. There’s still going to be a ceiling effect on care as we know it. And that’s because doctors are not gods. We want them to be gods, but they’re not. And what I try to do is look at the very human limitations with healthcare, those psychological limitations. But I also say, look, when you physically have to go to a bricks-and-mortar hospital, that’s already going to create challenges and marginalize many patients, some of whom are in the greatest need.
Jonathan Fields: [00:04:52] Let’s tease out a handful of the things that you just shared, and go a little bit deeper into each one of them. One of them is this notion of doctor burnout and also mental health issues, depression, anxiety associated with that. Take me a little bit deeper into this. How is this happening? Why is it happening and how is it showing up in patient care?
Charlotte Blease: [00:05:11] So recent studies found that about 50% of doctors in the UK, and just under that in the US, say that they’re burned out. The figures were much higher during Covid, and about 20% of US doctors are depressed. In the UK, about 4 in 10 GPs, family doctors, say that they can’t cope with their workload. So the burdens are absolutely enormous. Paperwork takes over. Documentation and administrative tasks take up more than 50% of a doctor’s daily time, so that also adds to burnout. And a lot of doctors talk about date night with Epic. Epic is the largest electronic health record vendor in the US, so it’s basically a case of doing the documentation after the patients have left, or on the weekends, which, incidentally, isn’t a great way to keep accuracy in the medical record. So that’s also a ramification of burnout. Another aspect of it is doctors have to multitask. And multitasking is sort of a misnomer, psychologically speaking. People don’t tend to multitask for the most part. They’re switching between tasks. And medicine is a field where that’s ultimately what is expected of you. You’ve got to be a sort of diagnostic wizard. You’ve got to extract the symptoms and do tests with patients, but you also have to give that compassionate bedside manner, and you’ve got to do documentation. These are huge burdens on any individual. So the doctor is sort of like this one-man band. They’re expected to be omnicompetent. And I argue that’s really just too much to ask without expecting errors to arise. And to go back to the issue of depression. Look, in the States, around one medical school graduating class, about 300 to 400 doctors, kill themselves every year. So we’re talking about really serious, savage pressures on doctors, and those obviously do have implications. We’ve got the studies for the implications on patient care. So doctors who are depressed or burnt out do report suboptimal practices. To give another example, doctors who are depressed are six times more likely to make medication errors. So there are real implications here.
Jonathan Fields: [00:07:42] To be clear, we’re not pointing the finger at doctors here. Not at all. We’re not saying, oh, this is your fault. What we’re saying, what you’re saying, it sounds like, and tell me if I’m getting this right, is we’ve got systemic issues here: the way that the training is set up, the way that the practice is set up, the way that often the funding mechanisms work, and the administrative requirements. It all just compounds, and it piles on and piles on and piles on. And this isn’t a doctor being incompetent or mal-intended. It’s just the stunning nature of the pressure, an onslaught of what one human being is expected to do that is, it sounds like, wholly unrealistic.
Charlotte Blease: [00:08:18] That’s 100% my take. And actually, one of the things I’m most pleased about hearing from doctors who have read the book is they say, you get it. And I was really anxious that I needed to sound sympathetic, and I am sympathetic to doctors. That’s sort of the point of the book, in some ways. You can’t just talk about the gee whiz of AI, and you can’t have a revolution unless you know what it’s for. So the purpose of this book was to say, look, what are the problems to which AI might be a solution? I was very taken, and had been for many years, with thinking about the need for a psychological portrait of the doctor-patient encounter, but also of what doctors have to do. I mean, the way I sort of flip it is by saying, I think it’s amazing that doctors are able to do what they do as well as they do. But we do need to have a really open conversation about the fact doctors can fail us sometimes. We’ve got to have that without finger-pointing. It has to be just an honest description of what we’re expecting human beings to do.
Jonathan Fields: [00:09:23] This shows up in two ways. One is, as you described, mistakes in diagnosis and treatment. If you’re so overburdened or so overwhelmed, or you’re dealing with your own mental health issues, of course it’s going to affect the way you think, the way you feel, the way you see, intake and synthesize. And that’s got to, at some point, affect the way that you’re actually diagnosing and treating. But then the second side, which you described, is that healthcare providers are human beings also. And we want the experience for them to be not just humane but, okay, nourishing, not just survivable, but good. And we’re not doing justice to them in a lot of ways, too.
Charlotte Blease: [00:10:01] Absolutely. And even beyond. I mean, one of the things I do talk about as well, even beyond the crushing and savage pressure that doctors are under within current health systems, is the knowledge demands on doctors, and I made this calculation. A couple of years ago, I looked at PubMed, which is this repository for the latest biomedical research. I discovered that, and by the way, the figure is probably higher now, every 39 seconds there’s a new biomedical article published. And I worked out that if doctors even just read 2% of these latest findings, they’d be spending 22.5 hours per day. The task of trying to update your own medical knowledge is colossal, and this is where I think doctors really have the greatest burdens of any, if you like, white-collar professional. The expertise that’s required and the constant updating of information is just absolutely mammoth.
Jonathan Fields: [00:11:04] Yeah. I mean, 22 hours, and that’s even to read a fraction of what’s coming out that’s new. And then you pile on top of this the notion that in this particular industry, the stakes can be life and death, or profound limitations in a person’s life, potentially for long windows of time. And not just for them, but for whoever’s affected by that. So the stakes, I feel like, are also part of the pressure here.
Charlotte Blease: [00:11:29] Absolutely. And I think this is why doctors are very understandably anxious about being accused of blame when error arises. Because unless we start from the premise that doctors are somehow gods, failure is completely inevitable. Having said that, studies do show that doctors tend to practice evidence-based medicine only around half the time. We’ve got a number of reviews that show that, and by the way, in some countries adherence to optimal evidence-based practices is even lower. When you’ve established your medical education, gone through medical school, your medical education tends to be set. So if there are omissions or biases within that education, it’s really hard as a human to recalibrate, to keep up to date. You can in some ways, but it’s very, very challenging to do it when you’re actually on the job. And this is also why sometimes doctors don’t practice the latest evidence-based practice, and it’s also a reason why older doctors tend to be even less adherent to evidence-based practices, because they’re set in ways of prior learning.
Jonathan Fields: [00:12:48] So if they’re not practicing evidence based practices, what are they practicing?
Charlotte Blease: [00:12:52] They’re practicing how they’ve been taught. And in some respects it’s kind of an apprenticeship. So you take what you learned in medical school, then you slowly put it into practice when you first see patients. And it’s a practical learning as well. But it tends to be set, and in some respects hardened, by what you’ve learned. It’s not to say it doesn’t change. Of course it does. And doctors keep up to date in various ways. Things will change, but it’s very hard to keep up with really the latest, most modern, evidence-based learning. Because as a human, you just can’t do all of that while still practicing. Like I say, it’s very hard to recalibrate your learning.
Jonathan Fields: [00:13:33] Yeah. And like you said, if it would literally take you 22 hours a day just to keep up with 2% of what’s coming out, it’s almost impossible. And yet I would imagine if you asked many, many doctors, they would actually tell you, I’m practicing evidence-based medicine, because it’s based on the evidence that they learned on. But it’s not current anymore.
Charlotte Blease: [00:13:52] This is true. Another issue that makes things much more difficult for doctors is such a thing as medical reversals. So the latest evidence can actually change certain kinds of practice, ways of treating patients, certain standards of care. We might consider them to be unimpeachable. They’ve got evidence to support them. But newer evidence very often reverses or changes that standard. And a couple of different studies, from researchers who have worked on this, show how often that happens: if you look at sort of top-tier medical journals over ten years and the subsequent studies, those subsequent studies suggest medical reversals or significant changes around 40% of the time. So there you can see that also creates a further headache for doctors.
Jonathan Fields: [00:14:44] Right. It’s like a whiplash effect. Okay, so there’s this new research out that says this is the new standard of care, this is the new approach based on the evidence. And then a decade later, for 40% of those, it’s kind of like, oh, we were just kidding.
Charlotte Blease: [00:14:57] It completely changes. Yeah. So this is always an issue. And then of course, the question is, and we can get to that later, how do you integrate this informational support for the doctor to use the latest findings? And then there’s the issue of how you integrate those kinds of clinical decision support tools within the workflow of the doctor. And that’s really hard to get right as well.
Jonathan Fields: [00:15:24] Yeah, and we’ll be right back after a word from our sponsors. I want to stay on just exploring a little bit of what’s going on currently, but I also want to flip sides a little bit to the patient side. One of the things that you explore and that I’ve heard is this notion of how patients show up. And very often the medical paradigm now only allows a handful of minutes between a patient and a healthcare provider. So a lot of times you feel really rushed. You feel like, what do I say? What do I not say? Where do I focus? What do I share? What do I not share? Tell me a little bit about the patient side of it here.
Charlotte Blease: [00:16:02] The patient side of it is so critical as well. And actually, you know, speaking to that issue of what you say in an appointment when you’ve got very little time: in the National Health Service in the UK, many years ago, I saw in a waiting room a poster that said one patient equals one problem equals ten minutes. Now, as a patient you’re thinking, what was the real leading problem that brought me here, so that I only talk about one problem? It puts the cart before the horse, because you need expertise to know whether different clusters of problems are linked, or what to bring up. So there are real pressures on patients, as you say, to decide when to visit the doctor. I don’t want to overload listeners with statistics, but I came across a very revealing American time use survey, in other words, how people spend their time in America, published in 2020. And it found that adult patients take two hours out of their day to go for a 20-minute appointment with their doctor. When you think about it, that’s very disruptive, and we all tend to know this. But from the doctor’s side, you either turn up or you don’t turn up.
Charlotte Blease: [00:17:17] But if it’s a no-show, you know, that’s disappointing, and in some ways you’re sort of like, that’s a bad patient. But from the patient side, it can be a logistical nightmare. I should flag up as well, if you are on a low income, this study found you spend even longer out of your day, something like 28% longer than that two hours. So, you know, if you’re a gig economy worker, or you’re not employed, you might be relying on public transport, or there are other kinds of constraints, and you may just decide, you know what? To hell with it. I’m not going to go to the appointment. Maybe it’s not important. You delay seeking help until it’s too late. So there’s the logistics of getting to the sort of bricks-and-mortar appointment, deciding what’s important, and also talking about symptoms. One thing I discuss in the book is talking about symptoms that may be socially sensitive. It could be mental health problems, or it could just be embarrassing symptoms. And some of these, incidentally, could be cancer red-flag symptoms that patients just don’t want to flag up. And I do say in the book, and it sort of sounds hyperbolic, but patients are literally dying of embarrassment, and I bring evidence to bear on that.
Charlotte Blease: [00:18:30] And this is a side, I think, of the medical visit that’s easy to miss from the doctor’s perspective. They may pride themselves on being the most friendly, compassionate physician in the world, and they may well be all of those things. But actually, there’s a status difference in the medical appointment. Patients know, by dint of the fact they’re consulting with an expert, and that tends to incur a kind of face-saving behavior, and people then tend to be a bit more subordinate. They don’t want to reveal too much. They don’t want to appear stupid. They don’t want to ask too many questions, and they don’t want to encroach on their doctor’s time, because patients know doctors are incredibly busy. So you’ve got all of these other sides of the equation that tend to silence patients, and tend to silence the most marginalized patients most of all. The effects are even greater for, again, people with less education, people with low incomes, patients who don’t share the same race as their doctor, where there may already be some communication breakdowns, or who don’t share the same first language, elderly patients. This is a problem as well that we need to discuss more, and it tends to be overshadowed, I think, in discussions about healthcare.
Jonathan Fields: [00:19:49] Yeah, I mean, it’s so fascinating. You would like to think, if you’re a patient and you walk into a doctor’s office, well, I’m now under the care of a professional. I can say anything I need to say. I won’t be judged. It would be completely fine. But there are scripts running in all of our heads. We have social conditioning, we have familial conditioning that say what is or isn’t appropriate, or how I still want to be seen in a particular way, even by my treating professionals. So there are certain things we may or may not say. Or maybe, you know, you live in a smaller area where the healthcare provider is actually just part of your community, somebody who you’re going to see on a regular basis, and there’s something sensitive and you feel really uncomfortable saying it to your neighbor three doors down. It’s a very human thing. It’s not malicious. It’s very understandable.
Charlotte Blease: [00:20:35] It’s completely understandable. And, you know, the other side of it is people get a boost from being with their doctor, too. There is a certain psychological boost, because you’re in close proximity to someone who is, in that context, higher status than you. And psychologists who work on status psychology talk about those sorts of power differentials. Being close to somebody who’s prestigious does give you sort of a warm, fuzzy feeling. Patients give their doctors presents, you know. And there’s actually guidance about that as well. There are not many professions or occupations where there’s actual guidance to say, look, you cannot exceed this amount when giving this person a present or a gift. But yeah, there are issues here that, with the best will in the world, are still going to arise because of just how human psychology has evolved and how we behave in certain contexts. And we can sort of tell ourselves, look, I should behave differently. I shouldn’t be subordinate, I should say it as it is. But it’s actually really challenging to do that in a pressurised encounter, for both parties, right?
Jonathan Fields: [00:21:45] I mean, human nature is human nature, and in a 7 to 10 minute, or 15 minute at the outside, interaction, especially if it’s somebody you don’t really know very well, it’s also really hard to build trust in such a short amount of time. I remember talking to a friend once who’s a therapist. There’s this common phenomenon that therapists often call the doorknob moment, which is: you go through a full 55-minute session with a therapy client, and then they leave. And as soon as their hand touches the doorknob, when they’re about to walk out the door, they kind of turn to the therapist and say, oh, one more thing. And that’s the one thing that they actually came for, and it took them until the final seconds to finally feel comfortable enough to surface the real thing that they were there for. And when you only have a handful of minutes, there’s not even time for that.
Charlotte Blease: [00:22:33] This is a real issue. And in fact, this happened with my own father, my late father, when he had bowel cancer. He was embarrassed. I think he went to the doctor with an ingrown toenail or something like that, and then, you know, spent the visit discussing this very minor issue. And then at the end, as he left, he turned and said, look, I’m having a change in my bowel habits. And that led to him getting, through subsequent tests and all the rest of it, a diagnosis of bowel cancer. But that’s pretty typical. That kind of, if you like, syntax of the visit is really normal. People are just embarrassed to talk about these things in front of other human beings, and, as I say, in particular in front of people that they may really respect and want to think well of them. So yes, these are psychological obstacles.
Jonathan Fields: [00:23:22] Let’s shift gears a little bit. This is sort of the state of medical care. And as you said, there are a lot of well-intended people and players here. But the nature of the broader system is just making it really hard for providers to show up and give the way they want to give, and patients to show up and get what they need. When we zoom the lens out now, we sort of look at current times and we bring in the conversation around AI and how it might integrate into the experience of practice and medicine. And we think about the problems that we just noted. Talk to me about what we’re seeing now in terms of the possibilities on the diagnostic side, on the treatment side, just on the broader systemic side of how AI may step in and start to help us reimagine some of these issues.
Charlotte Blease: [00:24:13] AI is very good at seeing things that humans can’t see. It does pattern recognition at scale, and largely what doctors are doing, once they’ve undergone their medical education, is translating their learning into seeing. It’s a form of pattern recognition, and it’s very instinctual as well. AI can do this in ways that encompass vast amounts of pattern recognition, seeing kinds of patterns and instantly updating patterns in ways humans can’t do easily. To give an example: we said keeping up to date is a real headache, so consider rare diseases. There are about 7,000 rare diseases worldwide, and about 250 are identified every year. So you can imagine that as a doctor, it’s one thing knowing the typical conditions that you might see, but then there are rare illnesses. In my family there’s a rare illness as well, and it isn’t all that unusual for families to have some kind of rare illness or genetic illness. But in my family’s case, my eldest brother waited 20 years for a diagnosis. And that’s not all that uncommon either, people having these sorts of long delays until they see a doctor who might recognize the cluster of symptoms.
Charlotte Blease: [00:25:34] So when you’ve got AI with diagnostics like that, it can come to a rare illness diagnosis very, very rapidly. And we’ve seen that already with this newer generation of tools. They’re called generative AI tools. Many of your listeners will know about these, like talking to the internet on steroids, and they learn from vast troves of publicly accessible information. To take just one example, there was a study conducted by Austrian researchers. It fed rare illness symptoms into ChatGPT and within eight responses got to 90% of the diagnoses. So consider how much shorter that diagnostic odyssey could be. It’s a case of minutes rather than decades. So there’s a huge opportunity. Now, I want to caveat that by saying, look, how patients enter information is going to be critical here for how effective these tools are. And we need many more real-world studies of how patients interact with these tools in order to see how good they are. But it would at the same time be completely churlish to deny that that’s very impressive. And we’re seeing many of those kinds of studies emerging.
Jonathan Fields: [00:26:55] Just to make sure I’m wrapping my head around this. We can basically take an AI model and train it on all of the available information about all potential rare diseases, everything that we know, all the diagnostics, basically anything that exists in any database anywhere, research, clinical information. And then the AI can tap that. So when somebody types in a series of symptoms in the right way, and hopefully prompted in an intelligent way, it’s able to draw on this vast database, which no one individual could possibly hold, to much more rapidly identify especially rare diseases or illnesses, in a matter of potentially minutes, that might have taken years or decades, or maybe never been diagnosed by one individual.
Charlotte Blease: [00:27:49] That’s certainly the potential. And I mean, if you take something like sickle cell disease: if you live in West Africa, say a country like Nigeria, it’s a very common genetic disease, like 1 in 50 births or something. If you move to the States, it’s 1 in 350. But then if you go to Europe, it’s less common, like 1 in maybe 4,000 or 3,500, something like that. So it becomes classified as a rare disease in Europe, but it wouldn’t be in the States, and it wouldn’t be in Nigeria. So you’ve also got these contingencies. If you’re a human doctor, it depends where you’re trained, and there’s a certain luck that comes with that as a patient when it comes to a rare illness. And this is where, drawing on AI, there are many opportunities. But I also want to say how we train the AI is going to be really critical as well. So if it’s still fed data that has omissions or biases, or is only regionally representative of one area of the world, you’re going to have some of these same issues replicated. And we know that they’re going to persist in terms of biases and mistakes. But it’s always a case, Jonathan, of assessing, is it better than what we’ve got? That’s a key question. As humans we tend to hold AI to a much higher standard than we do ourselves, including doctors as well. So it’s an issue of saying, look, actually, what is better for patient outcomes, or what’s better for offering information? And it’s going to be a trade-off.
Jonathan Fields: [00:29:22] Yeah. I mean, that’s so interesting, right? Because we look at AI and we’re like, oh, but it’s not perfect. You know, it’s only 95% accurate. But if we compare it to the typical human doing the exact same task, maybe a human is 65% accurate. So it’s like we have to make a more legitimate comparison here.
Charlotte Blease: [00:29:42] And that does tend to be what happens in the studies, that we’re much more forgiving of humans. And of course, trust comes into all of that too. But certainly we do tend to hold AI to a much higher bar, which is interesting. Maybe that’s something we need to grapple with, that kind of bias, as well.
Jonathan Fields: [00:29:59] Yeah, I think you’re seeing this in a lot of more general AI applications, like self-driving cars. People are looking at accident rates and things like this. But if you compare that to the data on human-caused accidents, from the limited data I’ve seen, the latest generation of AI is actually probably a lot safer. But we hold it to a different standard. We want it to be perfect, whereas for ourselves, we’re just like, oh, we’re human, good enough is good enough.
Charlotte Blease: [00:30:26] Yeah. And you know, in some ways it’s sort of quaint that we do that, and we could look at it and say it’s kind of funny, or ask why we do it. But in another way, the bias actually has real consequences, because it operates at a systemic level. If we decide that we prefer, for example, humans doing a particular task, but the error rate is much higher and it leads to harms or even mortality, that’s a very serious ethical dilemma. We’ve got to inspect our own bias on that.
Jonathan Fields: [00:30:58] And we’ll be right back after a word from our sponsors. You mentioned also the way that it’s trained. We started talking about rare diseases and how it can be incredibly helpful there. But I would imagine even for generalized conditions, an AI model can be trained on all of the data for all available conditions, and then integrate every single new publication, every new study, in real time. As you said, for one human being to digest even 2% of what comes out that’s relevant to them would take 22 hours a day, an impossible task. But an AI can do all of that, for everything, in real time. It’s able to draw on a database, not just for rare diseases or conditions but for literally anything, that is informed at a profoundly different level than a human being.
Charlotte Blease: [00:31:54] Yeah, I think that’s right. But again, I would come back to the issue that publications, if we take the vast waves of publications, are not all going to be of the same standard. There are always going to be omissions, always going to be training biases. People tend to recruit certain demographics into clinical trials, so there can be a need to redress major biases and omissions within the diet that’s fed to the machine. In that sense, what’s good about this question is it reminds us that we talk about machine learning and AI, but humans actually play a huge role in deciding how we train and model these tools. That’s going to be critical as well, and that’s why humans will be in the loop in some sense: they’re going to make critical decisions, and we have to be thinking about that.
Jonathan Fields: [00:32:48] And as you said earlier, if in a ten-year window 40% of research that’s published ends up being reversed a decade later, but we’re relying on this data being put into AI to train it in real time, that really complicates things. Maybe it’s giving us information that a couple of years down the road is going to be shown to have been wrong.
Charlotte Blease: [00:33:09] Yeah, but it’s again, it’s no less a problem for the human.
Jonathan Fields: [00:33:13] Yeah. Right.
Charlotte Blease: [00:33:14] Right. And there’s a really nice article that was published whose title was “The Answer Is 17 Years, What Is the Question?” And the question is: how long does it take to move from bench research to clinical research to medical practice? 17 years. Wow. It’s almost a generation. So this is where AI can start to speed things up, and we may be able to adapt quicker.
Jonathan Fields: [00:33:41] So interesting. It feels like we’re in such early days right now, though we’ve also had a lot of development in the years before when it comes to AI. One of the problems we’ve talked about is the patient experience: a patient sharing what they need to share, feeling like they have the time to share it, and actually giving all the relevant information, symptoms, and experiences so that a doctor can take it all in and make the best possible diagnosis and treatment recommendation. And I would imagine patients also want to be seen and heard, to feel they’re treated with dignity and given the time to do this. How does AI impact this side of things?
Charlotte Blease: [00:34:26] I have so much to say about this, it’s hard to know where to begin. I’ll start by saying patients pour their hearts out to machines, and we have known this since the 1960s, from as soon as computers were introduced within a medical context. In 1966, a medical doctor called Warner Slack, and I write about this in the book, put patients in front of a sort of monster-sized computer. The idea was to take a kind of medical history, and what he noticed was that the patient in front of the machine was slagging it off, saying, look, you’ve asked me this already. He was laughing and making fun, but he was also disclosing more. That very same year, a computer scientist called Joseph Weizenbaum, based at MIT, devised a sort of therapy computer program called Eliza, which you may have heard of; more people talk about Eliza than about the medical history-taking program. Weizenbaum’s secretary was interacting with this program, which mimicked what a psychotherapist might say, and she got so engaged with this early chatbot, if we can call it that, that she actually asked Weizenbaum to please leave the room, because she was having such an intimate conversation with the computer. So we’ve known, and subsequent studies have shown, that patients feel more comfortable and tend to say more to technology. They divulge more, for among the reasons we discussed at the top of the program: they don’t have those cues of status that tend to inhibit or interfere with what patients might want to say.
Jonathan Fields: [00:36:19] It’s almost like people feel much more comfortable just being open and honest. And I would imagine it’s also because the machine’s not saying anything, there’s no clock ticking, no sense of, we’re five minutes into our ten-minute visit, can we move this along? You’re not concerned about wasting the machine’s time, and you’re also not concerned about being judged, because there’s not a human being there.
Charlotte Blease: [00:36:38] All of those things. And also, doctors tend to interrupt patients, and it’s very hard for them not to do that when they’re under pressure. But it’s also because, again, they’re the more dominant party in the visit; when somebody is the higher-up, they’re more likely to dominate, to take the floor, to interrupt. Technology doesn’t do that. There are many positives, but nothing’s ever straightforward in life. People tend to say more and may be more at ease, but on the other hand, because they are saying more, AI can be a very potent extractor of our most sensitive medical information. And this is where, if you’ve got young people turning to chatbots, they could be giving away very sensitive personal information to big tech. So there’s a whole other cluster of challenges that clearly emerge with AI, precisely because people are so much at ease with it.
Jonathan Fields: [00:37:36] I want to circle back in just a bit to the ethical issues that arise with this, especially around data. The idea of somebody having this almost confessional effect with AI is really fascinating. And here’s another, maybe more nuanced element, and I wonder if there’s any data on this. So many of the symptoms we bring to our healthcare providers can be traced back to stress, to mental health issues, to relationships, to lifestyle issues. And oftentimes we feel like we have nobody to unburden those with. We don’t want to feel like we’re complaining; maybe we don’t feel comfortable with, or have easy access to, mental health professionals. But I wonder whether just the fact that we have this non-judgmental thing in front of us, where we can fully unburden and be honest and open and share whatever we need to share, has an effect of relieving stress and anxiety, which then has a trickle-down effect on physiological symptomology.
Charlotte Blease: [00:38:47] That’s a great point, and it could be. One of the things I’ve researched before is the placebo effect, which is quite potent: if you expect to feel better, it actually can dial down experiences of pain, and it can mitigate depression and anxiety to some extent. It’s not a cure-all, but it certainly can have a significant effect on certain symptoms. What’s interesting about AI, and it’s one of the challenges with it, is that current models like ChatGPT tend to be quite obsequious. They tend to be people-pleasing in their biases. Anyone who’s played around with these tools can see just how utterly polite and unfazed they are compared to humans. Sometimes, if I’m quite abrupt with it, I feel slightly guilty afterwards.
Jonathan Fields: [00:39:43] It’s like you type in, oh, I’m so sorry. Yeah.
Charlotte Blease: [00:39:46] Yeah, so you can’t help but treat it a bit like a human, with human attributes. But certainly in response it’s very steady. It doesn’t look like it’s being rattled by anything, and of course it can’t be, it’s just AI, but again, we’ve tried to personalize it. So you have a range of compassionate responses, on top of the fact that it’s always there and does tend to give people-pleasing responses. I suspect what you’ve said is right, that it may well have a positive physiological effect on some patients. We don’t tend to talk about that much right now, because there are obviously other concerns about harms with these tools, but I have written about this as well: the fact that it actually might be a vehicle for elevating placebo effects in some cases. I don’t know of any studies on that so far, though.
Jonathan Fields: [00:40:41] Yeah. You just mentioned the concern with harms. Talk to me about that side a bit too.
Charlotte Blease: [00:40:45] Yeah. Recently there’s been a huge spate of articles and attention given to this issue of younger people turning to these chatbots for counselling and advice, which in itself is worrying, but even, in some cases, asking for advice about how to commit suicide. You can get past the guardrails with these tools, and they may actually offer instructions for self-harm. It could be a case of not asking directly, but saying, I’m writing a story for school and I want this fictional character to be looking for such-and-such. There are ways people can put the right context into their prompts to get around these guardrails and then get that advice, and we know of cases where this has happened. So there’s a real issue of harm here, especially with younger people. Younger people in America are glued to their devices; I saw Pew Research from a couple of years ago suggesting about 50% of young people in America are almost permanently on a device, which is frightening. So with that dependency on some of these chatbots for advice, there are certainly openings for concern, not just about the quality of what is given, but about these tools giving you advice whenever you seek it, when what you actually need is professional help. Until we can find better guardrails for these kinds of tools, we need much more conversation about this kind of uptake. Younger people tend to be faster adopters of technology, and we’ve got to find ways to advise parents and young people about when it’s appropriate to use these tools.
Jonathan Fields: [00:42:43] And I think that brings us around nicely to this conversation you referenced earlier, which is that the ultimate solution is very likely not one or the other, it’s the integration of humans with AI. There’s been interesting research on this too. The study that got the buzz recently looked at physicians working alone, AI working alone, and then AI plus physicians. It raised a lot of eyebrows, because people thought AI plus physician would be the ultimate, you’d get the benefit of both. But actually, that combination performed worse than AI alone. It’s more nuanced than this, though. So take me into this.
Charlotte Blease: [00:43:29] It’s much more nuanced. And actually, this is where a kind of conversation about AI has cropped up within medical schools, and I’ve been hanging around medical schools for about 15 years now. Basically the idea is: look, folks, don’t worry, technology is coming, but we’re all going to have the best of all worlds here, because the technology will be great, it will take some of the burdens off, and doctors and AI will work well together. That has always been a mistake, because there’s been a sort of siloed thinking. This is where medicine sometimes sticks to its own track; as with academic fields generally, people stick with what they know. But if you take a step back and look at research in other fields, like psychology, we’ve known for many years that that’s just not how humans respond to technology, particularly if you’re an expert. Going back to the 1950s, it’s been known that when you take a domain expert and ask them to reflect on what an algorithm is saying, they tend to hold their noses at what the AI says. It’s what’s called algorithmic aversion. On the other hand, laypeople who don’t have any expertise aren’t really squaring up to the algorithmic output, and they tend to defer to it more. So you’ve got this sort of algorithmic aversion, which helps to explain what’s going on.
Charlotte Blease: [00:45:06] There’s a sort of overconfidence on the part of the expert, too, because they’re like, well, I’ve been trained in this, I went to medical school for ten years, so to hell with this technology. It’s an implicit instinct: you’re an expert, you know something, therefore you’re not necessarily going to listen. That tends to hurt accuracy in cases where the AI could be beneficial. What we’ve seen in these recent studies is a replication of that: if you simply leave the AI alone, it’s now gotten so good that in some cases you’re better off leaving it alone. And that might actually be what the future of medicine looks like; it’s basically what I predict and anticipate in the book. The technology is the worst it’s ever going to be. Then we have to ask: what roles do humans have here? What is the identity of a medical doctor now? What is the point of the medical doctor when it comes to diagnostics? What’s the right relationship? If we’ve got a human in the loop, how do they work? What kind of expertise do they need in order to be humble enough to defer to the AI when that’s necessary, but also to ensure the right kinds of accountability? So it’s really much more complicated, and I definitely welcome that question.
Jonathan Fields: [00:46:22] Yeah. And it feels like we don’t really have an answer to that right now.
Charlotte Blease: [00:46:26] I think the answer right now is that we’ve got a profession that’s in flux, and many of these white-collar professions are going to be in flux for a sustained period, because, as I argue, the trajectory of these technologies is improving, and it’s an affront to our sensibilities to see them start to encroach on expert, knowledge-based domains. We’ve also got to remember that when AI or robotics came for factory assembly-line workers, for many blue-collar professions, it was sort of a case of, well, those were efficiencies, and now we can have greater production. What’s tended to happen with white-collar professions, including medicine, is that there’s a sense of a right to practice, a prestige and a status that comes with it, and that’s going to be very difficult to give up. So I think for some years to come there’s going to be a kind of reckoning. Ultimately, it’ll be a case of: are there going to be alternative ways to get healthcare if there’s not some major shift in how current systems are working? I think we may see big tech emerge with different kinds of models, and then we’re going to have patients voting with their feet, because people will see what works for them.
Jonathan Fields: [00:47:56] And I think that also brings up the issue of access. Right now, if you’re in a position to have ready access to a wide variety of highly skilled providers, and you have insurance with good coverage, that’s one category of people. But if you’re in a healthcare desert, or in a country or place where it’s really hard to get access to high-quality healthcare with the latest information, and there isn’t very good coverage, then maybe this also has the effect of leveling the playing field to a certain extent, at least long term.
Charlotte Blease: [00:48:34] It could well be the case. It’s the readiness. I mean, of course there’s going to be an issue there with digital literacy. So not everybody who has access to these tools, not everybody has, even if they know how to use the tools or have a digital device, they may suffer from data poverty. So there’s issues abroad, lack of broadband access. So there’s going to be issues with digital divides as well. Having said that, more and more people are getting online globally, something like nearly 6 in 10 of the world’s population has an internet enabled mobile device. So it is going to be a case of people will increasingly have more ready access to versions of expertise that they just didn’t have before. And the key question that I sort of rehearse repeatedly in the book is the idea that when we’re having discussions about how good AI is, as you said, it’s not against Concierge care, where people have the very best of the very best, very oftentimes across the world, even in rich countries, not having access to any healthcare at all and just wanting to sort of triage. Even if you can access a doctor’s like, well, is it worth going? What could this possibly be getting on on to? Some of these tools could assist with that. But also it may increasingly assist with sort of a second opinion, not just the first opinion but a second opinion service with doctors as well. Again, that’s an issue where patients are very reticent to be seen to question their doctor and where these tools can sort of offer supplement and offering guidance and advice.
Jonathan Fields: [00:50:10] I think a lot of us probably still feel, you know, we may go to our favorite chatbot right now if we’re feeling something going on, and type it in so we can get a quick answer. But we still want to know we have a person we can go to available to us. Nobody wants doctors or healthcare providers to go away; I think that’s probably pretty commonly agreed. But there is this yes-and thing, where it’s like, what is the role of the doctor, and what will the relationship be between the doctor, the AI, and the patient moving forward?
Charlotte Blease: [00:50:41] Yes, and it’s really interesting, because it’s how we’re used to receiving care. Surveys show most people are now increasingly happy for AI to be used by doctors, but with the doctor as the overseer of it. I think that will change in the years to come, as the technology gets better, because it’s simply what we’re used to, and future generations may feel a bit queasy about it. They may say, my God, how did people go to a human doctor? How did they manage to do so well? And I’m not criticizing, but future generations will experience the world very differently than we do, and may look back on consulting a human doctor as an artifact of medical history. That doesn’t mean there won’t necessarily be humans involved, but the idea of the doctor as this kind of one-man-band, godlike figure, I think that will change, and we’re going to need new medical idols.
Jonathan Fields: [00:51:42] Yeah, it’s such a great point. This is how we feel now, but a decade from now, a generation from now, they may look back at today and say, oh, how uninformed or how silly that time was.
Charlotte Blease: [00:51:54] Possibly, and ask you, if you look at the history of medicine, you see that a lot of innovations were resisted. So antiseptics, hand washing, anesthesia is a big one. Anesthesia is a really interesting one because doctors actually resisted it because they had honed their skills to work very quickly. So even though they could literally hear the cries of pain of patients, it was a case of. But we have expertise that we’ve learned and we’ve learned to do this very fast. So it was seen as undercutting that expertise. But yes, there’s been resistance to a lot of innovation penicillin vaccinations, clinical trials where there was resistance and where there was sort of delayed progress. And it may well be that AI is could be part of that, where patients again, maybe the ones and sort of more external figures to the culture of medicine may take up these tools in ways that doctors may resist for a variety of reasons. Some of them very good reasons I will add to because, again, change is hard when you’re under pressure. But that’s sort of the point. Humans just find it hard to adapt.
Jonathan Fields: [00:53:03] We do. I’m raising my hand here also. You know, one of the things I’ve heard as a rebuttal is, well, but AI hallucinates all the time. And I think, yeah, but if you look at the amount it hallucinated two years ago versus today, it’s dramatically different, and it’s just a matter of time before we train the models to be better and better. And again, we’re comparing that against perfection rather than against the typical well-trained individual. We have to make a real comparison here, not against absolute perfection.
Charlotte Blease: [00:53:36] I completely agree. I think the real worry is that these kinds of arguments tend to be made, as you say, in a kind of vacuum. It’s important to critique the AI, absolutely; we’ve got to pay attention and be vigilant. But we’ve got to be equally critical across the board. What I’m saying is that we’re almost romanticizing how good humans are, and we’re not contextualizing that when we discuss the AI. So we’ve got to be really careful about asking, first of all, what’s the AI for? And to have that discussion, you’ve got to say, well, it might be for improving these particular areas where we’re just not as good.
Jonathan Fields: [00:54:13] Yeah. One last question here before we wrap up, and this is about the data. There’s the training set, but increasingly, patients’ own data gets fed into these models and becomes part of the training set. One of the concerns is: every time we show up at a healthcare provider, everything we say and do, everything the provider thinks, and all of our testing is fed in to train a bigger model. That’s beneficial, because then everybody gets the benefit of everybody else’s input. But should we be freaked out about that?
Charlotte Blease: [00:54:48] I think we should have more conversation about it. These are societal-level conversations about the trade-offs: what are we giving away? As you say, for many it may be a case of, we’re happy to give away private or sensitive medical information if it makes care better, so it benefits us and it benefits other people. But then the wider issue may be one of confidentiality. Your privacy may have gone in some sense, but it’s a case of keeping your information confidential from other parties who could be exploiting it through a big tech pipeline. That’s where people get nervous, particularly about the future ramifications of giving it away: am I going to be exploited when it comes to some future version of healthcare coverage, employment, policing, all kinds of areas of our lives where big tech already is exploiting us?
Jonathan Fields: [00:55:51] Yeah, and I think those are very real concerns, and we’re early in the conversation. I’m so interested and curious to see how it all evolves, and it’s happening so quickly. But as much as the concerns are there, at least for me, the level of excitement and possibility is so much higher. Same for you?
Charlotte Blease: [00:56:11] Yeah, definitely. I tend to be an optimist, but with a spike of ice in my heart about it all, because I would say it’s easy to be a cynic; it’s harder, but more constructive, to be an optimist. That doesn’t mean you’re not paying attention to all the challenges. We’ve got to pay attention to them, and we’ve also got to work hard to overcome them. I think there are many benefits for humankind through the use of AI, but we also have to confront very big issues as well: environmental costs, privacy, what kind of society we want to live in. And we’ve got to be constructive about working through all of those problems if we want to avail of these tools.
Jonathan Fields: [00:56:52] Completely agree. Feels like a good place for us to come full circle. So I always wrap every conversation here on Good Life Project with the same question, and that is: if I offer up the phrase, to live a good life, what comes up?
Charlotte Blease: [00:57:04] Balance. I would say a nice balance between doing meaningful work, and I think the world of work and the nature of work is going to change a lot with AI, but a balance between work, friendship, living well, and not missing the point of life.
Jonathan Fields: [00:57:21] Mm. Thank you. Hey, before you leave, a quick reminder: this conversation is part of our Future of Medicine series. Every Monday through December, we’re exploring breakthrough treatments, diagnostics, and technologies transforming healthcare, from cancer and heart disease to aging, pain management, and beyond. If you found today’s conversation valuable, you won’t want to miss a single episode in this series. Next week’s conversation is with Dr. Adeel Khan, where we’ll explore how groundbreaking treatments like peptides and a special kind of stem cell called muse cells are revolutionizing medicine, offering hope for conditions ranging from chronic joint pain to neurodegenerative disease. We’ll dive deep into why these therapies work, who they’re right for, what to look out for, and what the future holds as these treatments become more accessible. Be sure to follow Good Life Project wherever you listen to podcasts to catch every conversation. Thanks for listening. See you next time. This episode of Good Life Project was produced by executive producers Lindsey Fox and me, Jonathan Fields. Editing help by Alejandro Ramirez and Troy Young. Kristoffer Carter crafted our theme music. And of course, if you haven’t already done so, please go ahead and follow Good Life Project in your favorite listening app or on YouTube too. If you found this conversation interesting or valuable and inspiring, and chances are you did because you’re still listening here, do me a personal favor, a seven-second favor, and share it with just one person. If you want to share it with more, that’s awesome too, but just one person even. Then invite them to talk with you about what you’ve both discovered, to reconnect and explore ideas that really matter, because that’s how we all come alive together. Until next time, I’m Jonathan Fields, signing off for Good Life Project.