
Ethical AI in healthcare

Who stands to benefit? And who risks being burdened, or even harmed, by the use of these technologies?

Published: Jul 28, 2020 01:21:29 PM IST
Updated: Jul 28, 2020 01:30:36 PM IST

Image: Shutterstock
Artificial intelligence is often described as a disruptive technology, and in the realm of healthcare, the disruption will manifest itself as much in how we think as in what we do.

A couple of years ago, in her opening remarks to the AI for Good Health Summit, Margaret Chan, the former Director-General of the World Health Organization, encapsulated the moment we find ourselves in right now. To paraphrase her: this is a new frontier, and it is still early days for AI in healthcare. But, as so often happens, the speed of the advances is likely to outpace our ability to reflect on them in terms of sound policy and ethical considerations.

I was particularly struck by her concluding comment: ‘We don't know yet which questions we ought to be asking’.

I’ve thought about this a lot, and what she meant was the need to ask questions like: Who stands to benefit? Who risks being burdened, or even harmed, by the use of these technologies? And how do we avoid bias in the data?

We all know that our data sets are incomplete and that the quality of data in general isn't great. To state the obvious, if we're working with data that is not of great quality, we're going to see AI outputs that are not of great quality. How can we put this on the agenda at organizations everywhere, to ensure that we're not doing harm, even while there is tremendous opportunity for benefit here?

Questions have also surfaced around the nature of the patient-provider relationship. For instance, Is AI going to reshape this relationship in the direction that we want? And who has the right to determine what that reshaping will look like? How will it affect health work? Who will it displace? And what about unforeseen consequences? These are all urgent questions.

When I’m teaching I like to use the spread of electricity as an analogy for what is happening with AI. I will put a photo of a lightbulb on the overhead screen and ask my students, ‘What has the lightbulb offered us?’ Often what comes up are things like, ‘It means that we can work 24 hours a day’; ‘We can recreate in the evening’; ‘We have safer homes because we’re not worried about oil fires’; or ‘They make our streets safer’.

All of this is true; but I wonder, how many of you find yourself searching Google for ‘how to deal with insomnia’? Yes, we can work longer because of electric light, but we're also working longer in a way that is making us less productive. Too much light actually causes harm. And it harms the environment as well, in terms of light pollution and its impact on certain animal species. Clearly, there have been many unintended consequences to this technology in terms of the burdens it places on us. This irony of technology is something that is not often seen or discussed.

Today we see electric light as a form of innovation and as a tool that allows us to get tasks done. But I would argue that it is also a complex social phenomenon. The lightbulb is useful because it is used: we have come together and mobilized around it, and by virtue of its use, it has reshaped our lives in profound ways.

I believe we need to see AI through a similar lens—as a form of innovation and a new set of tools, but also as a complex social phenomenon—and perhaps even something that could reshape us and how we relate to each other. When we do this, it opens up different sorts of questions that we need to be asking, questions related to four areas, and I'd like to touch on each of them.

These themes surfaced through our own research at the Joint Centre for Bioethics. The first, AI and the future of caring, invites us to ask: What kind of future do we want? What kind of future of health and healthcare do we want? Can we even articulate that? There are a number of possible futures we might imagine. One might be that we actually don't want to see AI-enabled technology at all, because there's too much that matters to us that we would lose. Maybe we want a system that is fully AI-enabled. Maybe there's something in between.

In many ways these technologies are going to be shaping us even before we've answered this question. We'll wake up one morning and realize that we have been shaped. But maybe there is an opportunity for each of us, in our own settings, in conversations with our colleagues and at the dinner table, and with society more broadly, to ask: What are we really working toward? What would we be willing to give up in order to realize the benefits? And can we build some consensus around that?

How can we, on the one hand, take advantage of the benefits of AI-enabled technologies and, on the other, ensure that we're continuing to care? What would that world look like? Most of us came into medicine in the first place because we care about people; how can we ensure that we don't inadvertently lose that?

The optimistic view is that, by moving some tasks off of clinicians' desks and moving clinicians away from the screen, we can create, and sustain, space for caring. The hope that is often articulated is that AI will free up time for what really matters most. That's the aspiration. But the question we need to ask ourselves is: What would be the enabling conditions for that to be realized?

Right now, we still have a health system that is driven by a focus on efficiency: ‘We've got to do things faster’; ‘We need to see more patients’. This is causing burnout, and at the same time, clinicians are losing that connection to touch, that caring opportunity, as well.

This is the downside risk that we need to have our eyes on: the possibility that the time we free up will go not to more caring, but to trying to see more and more patients. We need to call out that possibility and start to think about what will enable us to prevent it from happening. If caring is something that we want to see thriving within our health system, what will it take for us to get there?

A second area concerns public trust. Each one of us is currently, or will be at some point in the future, a consumer of our health system. And so the question becomes: What would be the essential conditions for public trust in an AI-enabled health system?

There's not a whole lot of data or insight yet around what Canadians think about AI, AI-enabled technologies, virtual technologies, or digital technologies, but there have been a couple of surveys that I want to reflect on here. Trust is something that is built over time, and it's built through relationships. It's built through decisions and actions that are taken consistently, in a way that sets up expectations in others that we will behave, act, and decide in particular ways, and that we can be relied upon to do so. That is the nature of building trust over time.

A couple of public opinion surveys were done over the last year, asking Canadians, ‘What type of organizations do you trust?’  Typically, Canadians trust not-for-profit organizations the most and, in particular, healthcare organizations and universities. Both hover around 70 per cent trust. As an aside, where do you suppose life insurance companies are?  Around 25 per cent. 

This sends those of us working in healthcare a few important signals. If we are going to see innovation in AI, which organizations, which sorts of actors, have a bank of trust they can draw on in order to participate in the types of social innovation that might be required for AI to generate some of the benefits we hope for? Correspondingly, we can see that life and health insurance companies have a really steep hill to climb in order to do this.

What is it about the way which universities and healthcare organizations operate that engenders trust from the public?  And what is the nature of that trust? 

I'm not sure we have a really robust answer to that question, because we've not had to ask it. Looking ahead, in terms of the use of digital technologies and AI, a question might be: How can we sustain that level of trust, even while introducing disruption that contains unknowns and for which there may be unforeseen consequences?

If we get it wrong, we at universities and healthcare organizations have a steep hill to fall from; but I think we've got enough trust in the bank that, if we work with patients and the public, there is a way forward.

Last fall, the Canadian Medical Association and Ipsos surveyed Canadians and found that they would trust an AI-derived diagnosis most if it was delivered by a physician; less so if it was delivered by an AI system; and even less so if it was delivered by a private tech company. This gives us an idea of the extent to which Canadians trust that interaction with a physician.

The survey also showed that Canadians are quite excited about the use of technology in healthcare; but it included questions about privacy, ethical issues and so on, and respondents' trust had waned by the end of the survey, in part because those questions were seeding reflections on things the respondents hadn't previously considered. Clearly, there is an opportunity here to think about how we, as citizens, can better participate in shaping the use of these new technologies.

A third area that demands attention is ethical governance: what would it look like in these settings? One of the advantages of being in healthcare is that there is already ethics infrastructure all over the place. We see this in structures such as Ethics Committees that report up to the Board, and in Research Ethics Boards that recognize the hand-in-glove relationship between ethics and good research. As these new technologies surface in academic healthcare settings, questions are starting to arise: When a clinical innovation using AI-enabled technology is introduced, should it go through research ethics governance, or through clinical governance?

This issue was brought to my attention by a colleague, a medical geneticist, who was curious to know whether facial recognition software was going to be a better and more efficient diagnostic resource for her than genetic testing. If the facial recognition software was as accurate as genetic testing, it might be possible to take a photograph of a child and then move very quickly towards a diagnosis.

She wanted to explore this, but she was advised by her colleagues, ‘Don't put it through the Research Ethics Board, it's not really research; put it through Quality Improvement’. She started to put it through the clinical governance mechanism and was told, ‘No, it should go to the Research Ethics Board’. In the end the matter had to be taken to Legal Counsel, because there were real worries about how to govern the introduction of these innovations within the health space, and it was only 18 months later that she was able to undertake the work. Our existing structures, it seems, are not quite ready to do the job.

We're starting to hear this from regulatory bodies as well; the Food and Drug Administration, for example, is being asked, ‘How do we assess these technologies? Do we have the right frameworks to do so?’

Some of you might be familiar with a case that arose in the UK's National Health Service, where the Royal Free Hospital was interested in better predicting which patients coming through the hospital might be at risk of a particular type of kidney disorder. It partnered with Google DeepMind to develop predictive tools the hospital could use. This involved handing patient data over to Google DeepMind; but patients didn't know that, and there was debate about whether or not this was legally compliant. A number of regulatory dimensions started to surface. The episode underscored that, in order to maintain and sustain trust, these things need to be undertaken transparently, and there is work to be done here.

The fourth area is equity, and Canada is a great example of the challenges involved. We struggle with equity-related considerations all the time, by virtue of sheer geography: we have a massive country. Most Canadians live along our southern border, but that's not the only place they are, and this is where the promise and potential of virtual care and AI-enabled monitoring are going to be tremendously valuable for those who live in remote areas. These technologies have the potential to enable us to bridge the equity gaps that we currently see in our health systems.

But we must also ask: Who should benefit, and how do we ensure that those who are already vulnerable and marginalized are not inadvertently made more so? We have gaps in digital capacity across our hospital sector, and some patient groups are better mobilized than others to inform and influence the direction of technology within their settings. We also have important questions to ask about where investments in AI should be made. Which priorities? Whose priorities? And who will inform those priorities?

So, where to from here?  Has the train left the station or is it just entering the station? 

On some days I feel like it has just left the station, and on others it feels like it's just entering. Today, I think it's still in the station. This is the opportunity we have, with these larger questions, to begin a broader societal discussion about what kind of future we want, what kind of health system we want and what type of healthcare we want. These technologies are still new and are only slowly being diffused into the field. Now is the time to be asking these questions and moving forward together on the answers.

[This article has been reprinted, with permission, from Rotman Management, the magazine of the University of Toronto's Rotman School of Management]
