Jerrod Bailey 00:02
Welcome, everybody, to Reimagining Healthcare: A New Dialogue with Risk and Patient Safety Leaders, presented by Medplace. We're excited to bring you conversations with top risk and patient safety thought leaders from organizations around the country, so please subscribe to get the latest news and content. And if you found this valuable, this episode in particular, feel free to share it; we want to get the conversation going with your colleagues nationally. And if you're interested in participating as a guest, by all means email us at speakers@Medplace.com. Now, I'm Jerrod Bailey. I'm the CEO of Medplace, and I'm going to play host today; you've heard me before, and I'm excited to introduce you to my guest. I'm joined by Matt Keris. Hey, Matt, how are you? Matt is, let's see if I get this right, a civil litigation attorney who is active in national and state bar associations and risk management organizations. Matt specializes in litigation issues with, and here's the fun part, EMR and AI in healthcare.
Jerrod Bailey 01:23
Fascinating that you found your way there. It's cutting-edge stuff, and really promising stuff. AI has made a lot of promises over the years; in some areas it has been able to deliver, and in some it hasn't. But it's opening up this whole new area of consideration for risk managers. It's obviously pushing the envelope on things like the quality and consistency of healthcare delivery. But I think what we, and our audience, want to do is stop, hit the timeout, and ask ourselves: what are the implications for us from a risk perspective? Should we be using this stuff in clinical practice?
Matt Keris 02:09
Yeah, well, we're there; we're already using it. But we're at a point now in the adoption of it where we can actually start thinking, okay, let's project down the road how the use of AI is going to impact our cases and our care. I've been talking about these things for the past year or so, and I think you saw me speak someplace else, which is why I'm here. It's the money that is being invested in AI right now. If you're not aware of it, you should be; it's quite daunting when you see all the money. STAT Health Tech is a clearinghouse I subscribe to, and every day I get an email: mergers, acquisitions, capital invested in AI. It's amazing how much is being invested. It's coming; there's no way around it. And our risk providers need to be knowledgeable about it now, to start planning ahead in terms of best practices, how to respond to some requests, and how to handle some of the issues we're going to see in litigation.
Jerrod Bailey 03:15
Yeah, it's staggering. I've been in the venture world for the past couple of decades, and AI is the new hot thing. Well, it's not really new; it's been around for a while, and it's always attracted a lot of investment. It's just really accelerating now, because it's starting to deliver on some of those early promises, right? So we're seeing it be effective. But speaking as a technology person, the technology people behind the AI companies aren't necessarily thinking about liability. They're not thinking about what happens if their AI is wrong 1% of the time, and they're not necessarily in the crosshairs of that risk if they make a mistake. But our hospitals, our risk managers, our captives, and our carriers are in those crosshairs. I think we need to be a little more sober than the tech guys about exactly what AI means for us from a risk perspective. So can you help unpack that? I don't think we need to go into detail on AI itself, although if you feel there's something foundational that would help, feel free. But I'd like to talk about where it's showing up today. Where should our audience be looking, and have their radar up, in terms of where AI might be showing up?
Matt Keris 04:36
It is all over the place right now. There are various forms of it, and there are different types of AI. Actually, I have something I can put up on the screen to help lead the discussion on that.
Jerrod Bailey 04:47
Perfect. A lot of our users watch the video, but some of them will do audio, so if you need to paint any pictures with words, feel free to do that, too.
Matt Keris 04:54
Hold on a second. We're going to hit the share screen and go here.
Jerrod Bailey 05:07
Are we there yet? I don't see it yet. All right, hold on. Isn't it funny, the new skill sets we've had to learn since COVID? Yeah. All right.
Matt Keris 05:18
How about now? Success. All right. Like I said, there are areas where we see it already. There's AI in smart devices, the smartwatches; we all see that: predicting cardiac events, predicting falls. It's there already, and there's a big push for it.
And there are some issues associated with that. As health systems adopt these things and incorporate them into treating their patients, they have to be mindful of other things as well. Not only should we use these devices, but do our patients know how to use them? What is their capability, particularly the older patients who may be less tech savvy? How's their internet connection? Do they have good Wi-Fi? Do they have high-speed internet? Those are the types of things healthcare systems are going to need to assess as they adopt these things moving forward.
You're seeing a lot of AI in administrative tasks like automated scheduling and dictation. There's state-of-the-art technology for physicians to dictate, and no matter how good these AI systems get, there's always going to be room for error. If the microphone isn't next to the physician's mouth, maybe it's up here, maybe it's over here, it may not pick things up. It will pick up background noise, it may not pick up accents, it may misinterpret medical terminology, things of that nature. So it's not 100%, but it's getting better every year.
And as you were saying, we've had AI for a number of years. Where the best success has been so far is with what's called CAD, computer-aided diagnosis. That's radiology or pathology: you're seeing, in some instances, the AI systems doing a better job than the human reviewers at looking at the images, whether it's traditional imaging or pathology. That's where the best success has been. Where I see this going, and I'll use the real scientific term, the craziest development, is clinical decision support. That's the Amazon experience you receive when you're shopping: the computer, or the AI, makes recommendations to you on what to do. We buy a backpack on Amazon, and it suggests hiking shoes for us. Well, that type of process is now being integrated into our electronic medical record systems.
And this is where I think it's very revolutionary, in the sense that the chart is no longer just someplace to document. Now the chart is actually, if you think about it, an actor and a participant in a relationship that was previously limited to the physician and patient. Depending on how far advanced we get, and the type of clinical decision support system you have, you may have the physician, the patient, and now the chart, with its clinical decision support, all playing a role in the care. And that is revolutionary.
And this is where I think we really need to think hard from a litigation standpoint: how are these interactions going to look on the record? Five years from now, when we get to a lawsuit involving AI, how am I going to know where the AI clinical decision support portion of the record is when it's printed? How are the interactions between the physician, the patient, and the AI recommendations going to be documented? Now is the time when healthcare providers, in this particular area, really have an opportunity to make a difference and think about how those things should be done. Because these cases are still driven by the record, and a good record will help defend them. If it's a confusing record, where we're not quite certain whether AI is involved, how it's involved, and how the physician interacted with it, that can have an impact on the defense of these cases.
Jerrod Bailey 09:36
Fascinating. Yeah, I like the idea of thinking of it as another actor involved. Because that actor, by the way, is not just a machine; there are a bunch of humans who contributed a bunch of data to that actor, and more importantly contributed to the algorithm and the decision-making criteria that the AI uses. And so I can imagine a scenario where you've got an AI that has not benefited from some new piece of information that may be pertinent to the current standard of care. It's not factoring that into its algorithm, and it ends up making a recommendation. Does it make that recommendation 100 times before somebody catches it, fixes it, re-educates it? And what are the implications? It's one thing for one physician to make the same mistake over the course of years; we've seen it happen, it's unfortunate, but it happens all the time. AI has the ability, potentially, to make those mistakes at scale. And that's a whole new echelon of risk that I think we need to be prepared for, right?
Matt Keris 10:54
Absolutely. You can have a whole class of patients impacted by AI, where it's making recommendations that physicians are adopting, and it's reasonable under the circumstances to adopt what the AI is showing; there could be good percentages behind the decisions being made. But there are problems with AI; there can be errors. One of the things people need to know is that there has to be routine retrospective analysis of what's going on with these clinical decision support systems. There are numerous reports already where, although the AI was very good at the outset, retrospectively they look at things and find bias that can play a role in wrong decision-making. There's even a report, again through STAT Health Tech, that MIT found that as AI systems and their algorithms get older, they get less accurate, and they could be giving inaccurate recommendations for physicians to follow. So if it's not caught, AI may actually cause more problems than good if we're not on top of it, evaluating what it's suggesting to us. And the biggest reason we have to do retrospective analysis is that AI doesn't tell you why; it doesn't tell you why it's recommending something or why it came to that conclusion. That part of it is rather scary, and it can make our cases much more complex to explain to juries and judges and everyone else evaluating them.
Jerrod Bailey 12:34
Well, that's a known technology problem, isn't it? AI ends up having some method for deciding on something, but more often than not, we don't know what that method was. Can you build AI in a way that it can actually generate its thinking process? Are we seeing that in clinical practice? I have not seen it in clinical practice; it might exist. But I imagine that if there's a lawsuit five years from now, we've got to produce something, right? What do we have to produce from the AI? What should we be thinking about now as table stakes?
Matt Keris 13:15
Or what should we be producing? It's going to be very similar to, but more complex than, what healthcare professionals are dealing with now in terms of audit trail discovery. We all know it's standard practice at this point: law firms seek the audit trail to find the aha moment where there's a records alteration after the fact. And part of the problem for a lot of healthcare systems is that they're buying a product created for them. Either it's an evolving system and no one really has their arms around it, they don't know how to get some of the information, or the people who knew it internally have moved on.
And then you may, for example, get a new product, and the old EMR vendor doesn't want anything to do with the healthcare provider. They don't want to get involved in the litigation; if there's a real issue, they don't want to have to turn over their proprietary information, all those types of things. You could see the same with AI. If it's truly an AI error, you can see discovery similar to, and as expensive as, audit trail discovery: I want production of the audit trail, I want to know all the versions you've used, I want to know the upgrades and amendments. And you may not be able to provide that as a healthcare provider, because it's not yours; it's someone else's property. You may want to try to bring the vendor in, but good luck with that. Why would they voluntarily come back to talk about it, if it's truly a proprietary issue or their error? Why would they want to actively get involved in litigation that basically exposes one of the problems with their product? So I can definitely see discovery as thorough into the AI, when it was adopted, who the vendor is, and the retrospective analysis, just as we're seeing right now with audit trails.
Jerrod Bailey 15:12
You have me thinking that it's probably hard to predict everything AI is going to touch from a risk perspective, but the short-term solution may be in the contracts we're signing with these vendors right now. From a commercial contract perspective, that will help me mitigate my risk down the road, right?
Matt Keris 15:39
You need to look at that contract. We talked about this offline before, and those contracts come up routinely. You need to look at that contract and see if there's any hold-harmless or indemnification language in there. If there is, and it's to the benefit of the AI vendor, you really need to make a point of eliminating that language, or creating some type of agreement that if there's truly an issue with the AI that leads to errors down the road, it's the AI vendor, not you, who is going to have to defend it.
And if you have the ability to look at your contract, renew it, and negotiate that part, I highly suggest doing that moving forward. It may be too late for systems you already have; I know there are some systems out there where people are adopting AI on top of existing EMR platforms, and I don't know how long those contracts last. But you really need to do a deep dive on that and see where it goes. I'm not a contract lawyer; I'd probably fall asleep if you asked me to read the contract. But yeah.
Jerrod Bailey 16:47
I do a lot of contract work, and I think for our customers, and anyone who's looking at AI, one of the tactics you can use is to commit to no more than one-year contracts, or at least make sure the renewal comes up annually. That way you can really assess as you go, and you have some room to renegotiate terms as those renewals come up. Because it is the early days; there are a lot of unknowns, and nobody on either side can really predict it. So it's pretty good ground to stand on to say, look, we'll do a maximum one-year agreement, and then we'll relook at it. And I can imagine building into some of these contracts things like: what type of information does the AI company need to produce if there's a subpoena or a malpractice lawsuit? What will they agree, ahead of time, to produce? Maybe certain details on their algorithm, but not the whole thing. Maybe details on that particular incident. Are they even logging what happened, in order to generate the data that you need? Knowing how they're logging, and having them commit to that within the contract, is probably just good practice for anybody right now.
Matt Keris 18:07
Absolutely. Get in there and try to square that away; put it in clear contract language if you can. Because if that language isn't there, right now I feel it's going to be the hospital or the physician who is primarily responsible for bad interactions with the AI. And then it brings in a whole host of new potential legal defenses for the hospital or the doctor. For the doctor: if he sees a patient, there's a clinical decision support recommendation, he adopts it, and after the fact we learn in retrospect it was an error, one of the defenses the doctor is going to have is to say it was reasonable for me to take this course based on what the AI shared with me. That's a new potential defense. Same for the health system. But then, as physicians and health systems rely on that and use it as part of the defense, that's when the plaintiffs' bar is going to look at what the algorithm is indicating, and do that deep-dive discovery to start going after that information.
Jerrod Bailey 19:15
Gosh, it's interesting. And then you've got these doctors who are either agreeing with the AI, and the implications of that, or disagreeing with the AI, and the implications of that. And knowing where you fall from a risk perspective: am I exposing myself on either side of that decision? I don't know.
Matt Keris 19:38
Yeah, there's a lot there. It's a great question. You're going to have people who aren't going to accept the AI recommendation. So let's think from a plaintiff's perspective: how can they use that in their case? Number one, they're going to want to know, why isn't this physician using it? Is it generational? Is it from a prior bad experience, so they just don't trust it? They're going to explore that, and if it's because of a prior bad episode, they're going to want to explore it in further detail to get into the mindset of the physician.
But conversely, if the doctor declined to take the recommendation of the AI, and in retrospect what the AI was saying was correct, then there are some real issues. If they're declining it just because they don't want it, or don't want to bother, then there could be an overall corporate negligence argument: "Hey, health system, you have physicians who are knowingly disregarding your AI as a matter of course. You're creating a dangerous environment for your patients. Here's a tool that your physician failed to recognize, use, or consider, for whatever reason. You need to do a better job of monitoring your physicians' interactions with the AI and make sure they're at least considering the recommendations."
So that's how lawyers can spin that.
And conversely, if doctors just rubber-stamp everything, then the question is going to be: who's practicing here? One of the things physicians need to remember is that AI can augment, but cannot replace, good healthcare judgment. If physicians are just taking the recommendations and applying them wholesale, there could be criticisms that they're essentially practicing medicine defensively, ordering way too many tests, and overtreating the patient. So it's a fine line.
One last thing, too. If the physician declines the AI recommendation, let's say a legitimate recommendation in the big picture, I would love, and I don't know if healthcare systems are thinking about this now, a part of the chart, part of the template, where the physician explains his rationale for declining the AI. That information on why it was declined, if documented simultaneously with the care, is much stronger four or five years down the road than a deposition where the doctor doesn't remember, or is trying to think of a reason why he declined the AI. So that, again, is something healthcare providers should be thinking about now: how to document physician and clinical decision support interactions, particularly if there's a rejection of the clinical decision support.
Jerrod Bailey 22:42
That's interesting. That's a great little decision flag for everybody dealing in AI to be aware of: when there's a rejection, let's just make sure we document it. And frankly, the AI companies that are implementing these systems and teaching their AIs should really be proactive with some of this advice as well. It really makes me wonder: is there any kind of governing body, or is anybody organizing around this topic yet in healthcare? Have you seen one?
Matt Keris 23:15
No, not that I've seen.
Jerrod Bailey 23:18
Can you go build that? That would be a great thing, I think. Somewhere you can exchange notes and best practices for implementing AI with risk in mind.
Matt Keris 23:27
That would be great. I'm an active member of ASHRM, the American Society for Health Care Risk Management; they were the ones who first got me to write a journal article about this. A clearinghouse, or some type of place where healthcare providers can share this information, makes a lot of sense. Given the money involved, AI is coming at a fast pace, and although there's some FDA oversight, it's not very strong, because they want to encourage this. So these AI-related issues are going to come fast and furious, and healthcare providers definitely need to talk and share amongst themselves where their successes have been and where their failures have been, so we can avoid the same problems coming up over and over again.
Jerrod Bailey 24:23
Maybe. It's a lot of money, and it's kind of like a steamroller. We like to think there's a lot of responsibility behind that money; I promise you there's not. I've been in the venture world long enough.
Matt Keris 24:34
And just like AI, electronic medical records weren't really designed with litigation in mind. Think about all the money that's been invested in electronic medical records, and how widespread they are. Everyone complains now about how difficult it is to defend a medical malpractice case when you print the EMR, which we do in most of our cases. It's atrocious, and it's expensive.
Jerrod Bailey 24:59
I've seen it. It's like you'd have to know how to read code just to understand what it is, really.
Matt Keris 25:04
You have to flip through five different areas to get a vital sign, or an exam at a particular time. The EMR vendors never thought of litigation when they prepared these things, and the AI vendors are no different. Now is the time, maybe at the tail end, where we can at least get in a little with AI. It would really be beneficial to have the litigation aspect in mind when you're talking with your vendor: how these things can play out, what their involvement will be, and, if there's a request for the information, how to go about providing it. There's a whole host of issues. And of course, our risk managers are the jacks of all trades; they usually get the assignment when the system doesn't have anyone else with a title for it. So this just adds another responsibility for our hospital and health system risk managers. They're doing so much already, and they're overstretched, so unfortunately this is just one more issue they're going to be dealing with.
Jerrod Bailey 26:09
Well, I hope maybe we can nudge ASHRM, if they haven't done it already, to create at least a working group of those interested in AI. It may not be the risk manager in every case, but if we can get the person who's really championing that product and that implementation at their hospital to say: as part of being responsible with this new, amazing technology that's going to revolutionize our care delivery, and it really will, I'm also going to look at the risk side, make sure it's accounted for, and make sure I'm at the table with my peers as we talk about how these things have been going and where we see the risk factors, I think that would be a really great thing to see. I've seen it in other industries; I'm just not aware of it coming up in this industry yet.
Matt Keris 27:01
Right. Yeah, bring them to the table, bring everyone together. At this point, it's going to require more collaboration: risk, IT, medical records, everybody.
Jerrod Bailey 27:11
You know, we're dealing with AI right now at Medplace. We do a lot of taking that regurgitated, massive EMR output and organizing it, streamlining it, so physicians and nurses around the country can do chart reviews and case reviews and things like that. It's a very manual process; it's humans who currently take that in, and you have to have a clinical background, create tabbed PDFs, all these things. But we've been experimenting with AI for the last year and a half. At first it was rough, and now it's getting better and better. Now I can take 10,000 pages of patient records, put them into the machine, and it can spit out a pretty accurate, I would say 93% accurate, organized record. That's amazing, right? But what did the machine miss in all of that? The needle-in-the-haystack stuff. So I still have to have a human comb through it, but the human might be able to do ten times as many projects at the same time. That's the immediate promise of AI to me: it's not a magic bullet, it's not going to solve all of our problems, but it's come along far enough that it's starting to deliver on some of those promises. And I think we just need to watch it and be aware of where the risk is.
Matt Keris 28:35
Absolutely. Electronic medical records were promised to improve care; that didn't happen, we know that in retrospect. But truly, I see AI as the evolution of the electronic medical record. Once we get this in, it's going to take a while; there are going to be some successes and some failures. But if this gets rolling the way they've projected, it's going to improve care: claims go down, shorter hospitalizations, lower healthcare costs, which benefits everybody. There are still going to be mistakes, though. There will be fewer claims, but the claims will be more sophisticated and probably even more expensive to defend; the offset is that, in theory, there should be fewer cases. So it will happen, it will get better, but it's going to take some time. We're transitioning to this, and we'll get through it.
Jerrod Bailey 29:31
That's great. Well, Matt, I was going to ask for parting advice, but that sounded like a pretty good summary of what we need to be thinking about. Anything else you want to add before we wrap?
Matt Keris 29:41
No, other than we're in uncharted territory again, like we were with the electronic medical record. But we've learned some things from the introduction of the electronic medical record, and that is that we need to get people at the table. We need to get these vendors in there, talking about these things in advance. We're still early enough in this that we can make a difference. So let's continue to work together, share our successes and failures, and do the best we can. Because it's coming, it's inevitable, and it is going to improve everything, but not without its trials and tribulations.
Jerrod Bailey 30:19
We do have to break a few eggs to make an omelet here in the tech world, and it's everybody's job to be aware of it. Matt, how do people find you and get hold of you? I'll definitely link to it in the show notes, but how do we find you?
Matt Keris 30:34
Very easily. You can Google my name, Matt Keris. My email is Matthew, P as in Peter, then my last name, Keris, K-E-R-I-S. I'm a shareholder at Marshall Dennehey Warner Coleman & Goggin, which we've shortened to Marshall Dennehey; the domain name is mdwcg.com. Or you can look me up and reach me on LinkedIn as well, under Matt Keris.
Jerrod Bailey 30:55
I think you're Matthew Keris on LinkedIn. There you go. I'll link to all those things in the show notes. But Matt, this is a super interesting topic. We're bleeding edge on this AI stuff in some cases, but I really appreciate you starting the conversation with everyone, so we can be a little more sober as the machines take over the world.
Matt Keris 31:21
Hopefully not, but thank you. I want to be a part of the ride too, so keep me in mind. It was great talking with you, Jerrod. I'm sure we'll see each other soon.
Jerrod Bailey 31:28
Perfect. Well, for everyone else, thank you for listening to the Reimagining Healthcare: A New Dialogue with Risk and Patient Safety Leaders podcast. Subscribe and share if you found it valuable, and if you'd like to participate as a guest, just email us at speakers@Medplace.com. And yeah, follow Matt, connect with him on LinkedIn, and take a look at some of the other stuff he's doing. Matt, you're on a major speaking tour; it seems like every time I see you, you're beating the drum. So thanks for doing the good work.
Matt Keris 31:57
Gotta get the word out. That's why we're here. Gotta get the word out.
Jerrod Bailey 32:00
That's right. All right, Matt, good to talk to you.
Matt Keris 32:02
We'll talk soon. Okay, thanks.
Whether you're ready to request a review or want to see the Medplace platform, we're available to help.