
Part 3 of 4: CDO Interview – Lydonia & LifePoint Health with Chris Hutchins, LifePoint Health

Todd Foley: Hello, and welcome to the CDO Magazine interview series. I’m Todd Foley, CDO & CISO with Lydonia. Today, I have the pleasure of talking with Chris Hutchins, SVP, Chief Data and Analytics Officer at LifePoint Health. Chris, good to see you again.

Chris Hutchins: It’s a pleasure to see you too, Todd. It’s been an interesting few weeks, a lot of activity around CDO Magazine, and a lot of great discussions. So, really excited to be here and to chat with you today.

Todd Foley: There’s not just the consideration around data privacy or security, but also bias, right? And I don’t necessarily mean bias in a prejudicial sense. No technology is intended to provide bad care, but the perspective of that AI, how it was developed or how it was applied, raises considerations that are different from what I would call the traditional objective truth of data. This has certainly been the case with any kind of analytics, and with decision making based on those analytics, in the past. Are you doing anything unique to address that, or are you applying the approaches that have held you in good stead in dealing with analytic bias before?

Chris Hutchins: I think we’re pretty early on in most of the use cases that we’ve been pursuing, and that’s on purpose, because one of the things that we’re not going to fix with AI or any technology is that we don’t know what we don’t know, first of all. And is that thing we don’t know important and relevant? We don’t know that either. That’s a problem that’s not going to go away no matter what we do. If you think about when stealth technology became known to the United States population, it was 30 years old by the time that happened. We only find out about these things after the fact. So, there are two things that I think about. The first is the nefarious thing, and that’s where most people’s thinking goes when we’re talking about bias: someone purposefully being ignorant about certain things, or preferring to believe that things are a certain way and not wanting to dig into it. But the pieces that are perhaps more frightening are the things relevant to healthcare that we don’t know. Sadly, we’re seeing a constant evolution and discovery of new problems and illnesses, for a variety of reasons, many of them no doubt due to our improvements in innovation and detection. Going back to what we don’t know: when we were trying to figure out what this COVID thing was and what to do about it, we knew what the signs and symptoms were based on a handful of clinicians who were encountering it and starting to communicate that far and wide. But when we started, it looked like a whole bunch of other things. If we had assumed that we had the models nailed back then, we would have been wrong. We would have missed things. And of course, it would have been even more deadly than it turned out to be. I think that’s a risk we’ve got to keep in our minds when we’re thinking through the possible areas where we could run into a problem. It’s the missing-information scenario that, to me, is the worst, because it’s easier to spot when someone suppresses information, or is creative enough to create really horrible information knowing full well it’s going to be damaging, right?

Todd Foley: Yeah, I think that’s a great lesson, right? COVID is a good example: if you were just applying traditional diagnoses and historical analysis, you’re never going to find the things you truly haven’t seen before, and you’re going to be slow to respond to them. Being able to maintain that judgment, and to continue to work with whatever the technology or diagnostic tool sets are instead of being dictated to by them, is a critical part of this that I don’t think anyone in healthcare would argue with.

Chris Hutchins: You did mention imaging, though, and I think that’s an important one. There’s a ton of upside there, and I think we’ll see adoption accelerate over the next year or two in particular. But again, there’s a real need for caution, because if you’re looking at chest X-rays, I actually saw a nice presentation from one of the big pharma companies that had some really promising results using imaging to identify COVID. But again, there was so much that looked like something else that they couldn’t get nearly the accuracy rate they were wanting. So, it’s just one of those things. If we can augment, and I think you used that word, it’s the right one, we really have to make sure the human stays in the interaction with the technology and those capabilities, because there’s just so much that’s not cut and dried. The other piece that we have to be keenly aware of when we’re using this kind of capability is that there is no humanity about it. There’s no sensitivity. There is no understanding of human emotion or human pain. That’s not there. So, these things are really meaningful. We can use them to augment what a caregiver or provider has at their disposal at the point of care, and if we stay focused in that direction, I think we’re going to make some really great strides. But if we lose sight of that, I fear our arrogance could really bite us. If you recall, back in the first Gulf War, and I’m dating myself a little bit, there were those out-of-control, raging oil well fires, and everyone was convinced it was going to be a worldwide ecological disaster. There was all this focus on it in the news stories, and then one day it rained. Like, oh, didn’t account for that. Because you’re so focused on something and you’re trusting that you’ve cracked the code, and then, wait a minute, there’s something else.

Todd Foley: I think there’s always going to be that need to keep the humanity in any kind of clinical care, and there’s a lot that can be accomplished simply by focusing on that augmentation or assist approach. But there’s danger there too, right? We’ve seen cases where it becomes almost a check-the-box exercise instead of truly involving human review. And I think it’s just human nature: if you have some technology or process, AI or otherwise, that’s making a recommendation and then asking someone to click yes or no, go or don’t go, and they have to do that 3,000 times a day, they’re not going to pay that much attention as long as it keeps producing good results. There’s got to be some way, and I think this is a big part of application and design, to maintain that humanity in the process.

Chris Hutchins: Yeah, like I said, we’re moving at a slow pace on purpose here, slow and deliberate, but those are the things we’re going to have to build into the technology: detection capabilities that start to pick up on the fact that, oh my gosh, this person is in and out in five minutes while this individual here is taking an hour. Why the difference? I mean, it’s not that cut and dried, it’s not that simple, but we have to accept that we have a responsibility to our patients, our communities, and our providers to put mechanisms in place that keep us on top of that, make sure there’s visibility, and measure those types of things. We can’t let human nature become the default just because we like things to be quick and easy. I’d personally prefer to have somebody put a steak in front of me and go through the whole process, if I have the choice sometimes. But this isn’t fast food.

Todd Foley: Nothing wrong with fast food, though. But I like the thoughtful way you’re approaching it. I think you’re really epitomizing the best of what we counsel our clients to strive for, which is: move with caution, but move. Look for those opportunities to gain advantage, to drive better care, to provide greater impact to the community. Don’t be paralyzed by the uncertainty. These technologies are not new; certainly, how some of them are being applied, and the scale at which they’re being applied, is very new, but the technologies themselves are not. Some of the same cautions and considerations that we’ve applied to technology over the last several decades, especially in healthcare, are equally applicable. You just need to be thoughtful about that with these things, and I certainly love the way you’re doing it and the way you’re thinking about it, because I think that’s the surest path to gaining the benefit without misstepping.

Chris Hutchins: Yeah, and we’ve got a pretty incredible team that I get to work with here, people who are looking at all the things that shouldn’t be a burden on the clinical teams. One of the things we’ve got to figure out is making it really easy for a patient to have enough understanding to make a decision and consent: here’s the kind of thing we’re talking about. And that’s not something I’m just going to assume the docs can handle. There is a challenge there, because the trust relationship a patient has is with their provider, so we’ve got to make it really easy. There have to be some really simple tools we can put in the hands of our clinicians and support staff to help with those things, because a patient probably isn’t going to be excited to get a call from some person who has nothing to do with their clinical care and wants to talk to them about AI. That kind of conversation is going to make it a lot less likely that somebody would want to be involved. But that’s just an example of something that people like Chief Data Officers, your IT teams, your project managers, your legal and risk and compliance teams need to worry about. And when I say worry about, I mean take care of: let’s do our due diligence and figure out how we say yes and make progress on something that’s going to be impactful for the provider and patient relationship. And I’ll call this out too: our legal team, our HR team, our risk and compliance teams, info security, all of these folks are really committed to evaluating things in a very robust way, but they move at a really great pace from my perspective, compared to what I’ve seen in other organizations. And, you know, that’s a good sign of things to come for us, because everyone is really engaged, they see the value, and we’re all running in the same direction.

Todd Foley: Chris, thank you for joining me today. For those listening, please visit cdomagazine.tech for additional interviews. Have a great day and a great weekend.

Chris Hutchins: Thank you. It’s been a pleasure, Todd.

Todd Foley: Thanks, Chris.
