AI in Healthcare – Ethical Considerations
Given the touting of recent analytic and machine learning results in healthcare, why haven’t doctors been replaced by computers yet? The truth is that many obstacles stand in the way of implementing analytics in healthcare. Ethical issues introduced by this technology are also fiercely debated and must be considered. Finally, while obstacles imply a possibility of being overcome, there are also limitations to this technology: aspects that likely will not be solved anytime soon. Let’s discuss all three in this section.
What are some of the current obstacles that must be overcome in order to achieve widespread use of analytics for improving healthcare?
Healthcare has traditionally been slow to adopt emerging technologies, and this is a challenge that must be overcome. Healthcare has been described as a conservative field, one that is slow to embrace change. For example, there was some initial resistance to the idea of using electronic blood pressure cuffs in hospitals. Electronic medical records also faced skepticism and resistance, because of concerns that they take away from the patient-physician interaction and increase the time required to write notes. Analytics and machine learning are certainly no exception; they are simply another new, unfamiliar technology, and while industries such as automotive and manufacturing embraced it with little issue, healthcare will likely be a different story.
Perhaps an important underlying reason why doctors resist analytics and machine learning is the fear that computers are trying to “take over” or “replace” physicians. Certainly, we are a long way off from having that conversation, in terms of money, technology, and time. The machine learning studies we have discussed in this book are trained on very specific tasks, and of course, they rely on human intuition and judgment when being trained and interpreted. More likely, successful health analytics and machine learning will be achieved through a team approach, in which human strengths (such as generalizability and breadth of knowledge) are combined with computing strengths (speed and computational precision) to yield the best possible result. Still, the possibility of physicians being replaced by computers, however distant, is a real concern, and we must find ways for physicians and artificial intelligence to work together rather than against one another.
Another reason for skepticism toward analytics is the “hope versus hype” debate. Buzz phrases such as “big data” and “deep learning” sometimes carry negative connotations because of the hype surrounding them. Some believe that expectations for these fields are inflated. Specifically, skeptics of analytics and machine learning argue that most big data applications, while they “sound cool,” rarely save lives or money. Certainly, these concerns are valid; contributing positively to society is something that all machine learning studies should strive for, rather than simply demonstrating that something can be done.
While the field of machine learning is constantly changing, time series and natural language processing are particularly important in healthcare, and these remain weaknesses of machine learning algorithms compared with their performance on structured clinical data. It may be some time before an algorithm can be written that reads text, makes generalizations, and asks relevant questions the way humans do.
Ethics has always been a part of considering new technologies, including computer science, and must not be ignored here. What are some of the ethical issues introduced by healthcare analytics?
First and foremost, in my opinion, is the inability to place a value or a number on human feelings and freedom from pain. Many machine learning models are trained using a cost function. What should that cost function be? Should it be based on decreasing costs, increasing quality and outcomes, or decreasing pain and heartbreak? Who determines the ratio with which these seemingly opposing goals should be pursued?
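To make this concrete, here is a minimal sketch of a composite cost function that blends the opposing goals just described. All patient values, feature names, and weights below are invented for illustration; the point is that choosing the weights is an ethical decision, not a technical one.

```python
import numpy as np

# Hypothetical per-patient estimates for a candidate treatment policy
# (all values are made up for illustration).
predicted_cost_usd = np.array([12000.0, 8500.0, 20000.0])  # financial cost
predicted_quality = np.array([0.82, 0.91, 0.74])           # outcome score in [0, 1]
predicted_pain = np.array([0.30, 0.10, 0.55])              # pain score in [0, 1]

def composite_cost(w_cost, w_quality, w_pain):
    """Blend opposing goals into one number to minimize. The weights
    encode a value system; the math cannot tell us what they should be."""
    cost_term = predicted_cost_usd / predicted_cost_usd.max()  # scale to [0, 1]
    return (w_cost * cost_term
            + w_quality * (1.0 - predicted_quality)  # penalize poor outcomes
            + w_pain * predicted_pain).mean()

# Two different value systems score the same predictions very differently.
print(composite_cost(w_cost=0.8, w_quality=0.1, w_pain=0.1))
print(composite_cost(w_cost=0.1, w_quality=0.1, w_pain=0.8))
```

A cost-focused weighting and a pain-focused weighting produce different totals for identical predictions, which is exactly the “who sets the ratio” question in numeric form.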
Another ethical issue introduced by artificial intelligence is the question of responsibility. If a machine learning model makes an errant prediction, who is to be held responsible? Should it be the physician who oversaw the patient? Or should it be the team of data scientists that made the model?
A third issue lies in the realm of patient privacy. Is it right to use patient data to train models? Should it require consent or not? Which data points should be allowed to be used?
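One common partial answer to the privacy question is de-identification before training. The sketch below drops direct identifiers and replaces the record ID with a one-way hash; the field names and salt are assumptions, and real de-identification must follow the governing regulation (e.g., HIPAA in the United States), which this toy example does not claim to satisfy.

```python
import hashlib

# Fields treated as direct identifiers (an illustrative, not exhaustive, list).
DIRECT_IDENTIFIERS = {"name", "address", "phone"}
SALT = "replace-with-a-secret-salt"  # assumption: kept out of the dataset

def deidentify(record):
    """Remove direct identifiers and pseudonymize the record ID so
    rows can still be linked without revealing who the patient is."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["patient_id"] = hashlib.sha256(
        (SALT + str(record["patient_id"])).encode()).hexdigest()[:16]
    return out

raw = {"patient_id": 1234, "name": "Jane Doe", "phone": "555-0100",
       "age": 67, "diagnosis": "hypertension"}
print(deidentify(raw))  # identifiers removed, ID pseudonymized
```

Even a sketch like this leaves the harder questions open: age and diagnosis remain, and whether those should be usable without consent is precisely the debate in the text.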
Finally, there is the problem of bias. There is a concern that models predicting patient outcomes may depend on attributes such as race, gender, and age, which can lead to discrimination against patients.
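One simple way to begin checking for such bias is to measure a model’s performance separately for each demographic group. The sketch below uses invented predictions and outcomes, with "group" standing in for any protected attribute; a large gap between groups is one signal that the model may treat patients unequally.

```python
from collections import defaultdict

# Hypothetical predictions and actual outcomes (made up for illustration).
records = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 1, "actual": 0},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 0, "actual": 1},
    {"group": "B", "predicted": 0, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 1},
]

def accuracy_by_group(rows):
    """Per-group accuracy; a crude but easy first audit for disparate
    model performance across demographic groups."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in rows:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["predicted"] == r["actual"])
    return {g: hits[g] / totals[g] for g in totals}

print(accuracy_by_group(records))  # group A: 2/3 correct, group B: 1/3 correct
```

Accuracy is only one of several fairness metrics; false-negative rates, for example, may matter more when a missed diagnosis is the harm in question.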
Limitations are those aspects of healthcare analytics that may never be overcome. What are some limitations of healthcare analytics?
Robots and computers are not human, and they currently cannot replace a human’s ability to offer comfort and empathy in the face of pain, illness, or death.
Technologies such as neural networks, while they may offer accurate predictions, suffer from the black box problem: they cannot explain their reasoning or logic to the patient. A patient therefore may not trust a neural network the way they trust a good physician.
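The contrast can be illustrated with a simple logistic model, whose coefficients translate directly into statements a clinician can relay, unlike the millions of opaque weights in a deep network. The coefficients and feature names below are invented for illustration, not drawn from any real study.

```python
import math

# Invented coefficients for a toy cardiovascular-risk model.
coefficients = {"age_over_65": 0.9, "smoker": 1.2, "bp_systolic_high": 0.7}
intercept = -3.0

def predict_risk(patient):
    """Standard logistic regression: sigmoid of a linear score."""
    logit = intercept + sum(coefficients[k] for k, v in patient.items() if v)
    return 1.0 / (1.0 + math.exp(-logit))

def explain(patient):
    """Each active feature's odds ratio is a human-readable reason,
    e.g. 'smoking multiplies the odds by about 3.3' -- the kind of
    justification a black-box network cannot offer."""
    return {k: round(math.exp(coefficients[k]), 2)
            for k, v in patient.items() if v}

patient = {"age_over_65": True, "smoker": True, "bp_systolic_high": False}
print(predict_risk(patient))  # predicted probability of the outcome
print(explain(patient))       # {'age_over_65': 2.46, 'smoker': 3.32}
```

Interpretability often trades against raw accuracy, which is why the black box problem is listed here as a limitation rather than a mere obstacle.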