The implementation of Artificial Intelligence (AI) in healthcare is not a new concept; since the 1970s, AI has been used to assist with biomedical problems, specifically in helping identify treatments for blood infections, according to Xsoli. Even so, AI has become an increasingly important subject in healthcare in recent years because of its growing use: 66% of physicians reported using AI in some way during 2024, up from 38% in 2023, according to the American Medical Association (AMA). Despite this, AI is still used in the healthcare industry on a small scale compared to its nearly endless possibilities, such as creating treatment plans for patients based on an algorithm and even automating tasks during invasive surgeries. These applications can result in improved patient care, reduced costs, and streamlined processes, according to the CDC.
For these reasons, AI should be implemented in healthcare when there is sufficient evidence to conclude that it can save considerably more lives while improving efficiency and reducing costs. Even so, the use of AI in healthcare raises important ethical questions regarding accountability, decision making, and patient safety. Suppose an AI system makes a mistake during surgery or in creating a treatment plan: who is held liable? Is it the healthcare provider relying on the AI, the manufacturer, or the AI itself? Determining who or what should be held accountable matters because it shapes the legal framework surrounding medical malpractice. Some may think that because there is no sure way to determine right and wrong in complicated situations like these, AI should simply not be used in healthcare. That view, however, discounts the many lives that could be saved. If AI does improve patient care, reduce costs, and raise surgical success rates, then abandoning it to avoid difficult questions about the rare cases where something goes wrong would cost far more lives than it would protect. Additionally, this objection assumes that we cannot reliably agree on who bears responsibility in such situations; in the likely scenario that we can, a strong utilitarian mindset is no longer even needed to justify the use of AI in healthcare.
There are further concerns regarding direct patient care and public safety, though, the trickiest of which is likely algorithmic and data bias resulting in discrimination. This happens when the data set an AI is trained on, or the algorithm it is built around, fails to capture the whole picture or to represent all groups of people fairly. For example, bias can cause a system to predict disease outbreaks in certain areas improperly, affecting some groups of people more than others. Practically, though, there are ways to combat this. AI can be implemented slowly on sensitive tasks to verify that its data and algorithms can reliably be trained to treat all groups of people fairly. Additionally, regulations and benchmarks can be set for the data and algorithms an AI system must satisfy before it is trusted to make predictions about patients and public health. If this is the case, AI should most certainly see further use in healthcare; it is unlikely that any moral dilemma arising from an improbable scenario outweighs the many lives that would otherwise not have been saved. This argument rests on the premise that AI will actually save lives. What if it saves only an incremental number? Even then, it is important to consider that AI will keep improving, and the sooner it is used, the faster it improves. Hence, AI will likely save far more lives than it would if it were never used experimentally to begin with. This calculus changes only if AI actually causes more deaths than the status quo, in which case it is not worth betting people's lives on its improvement.
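To make the bias mechanism concrete, the following is a minimal Python sketch, not a real clinical model: the biomarker values, group proportions, and thresholds are invented assumptions purely for illustration. It shows how a model fit to data dominated by one group can miss far more sick patients in an underrepresented group whose disease presents differently.

```python
# A minimal sketch (hypothetical data, not a real clinical model) showing how
# an underrepresented group in training data can suffer higher error rates.
import random

random.seed(0)

def make_patient(group):
    """Simulate one patient: a single 'biomarker' reading plus a true label.
    Group B's disease presents at lower biomarker values (an assumption made
    purely for illustration)."""
    sick = random.random() < 0.5
    center = (60 if sick else 40) if group == "A" else (50 if sick else 30)
    return group, random.gauss(center, 8), sick

# Training set: group B is only 5% of the data, mirroring an unbalanced source.
train = [make_patient("A") for _ in range(950)] + \
        [make_patient("B") for _ in range(50)]

# "Train" the simplest possible model: pick the single biomarker threshold
# that maximizes overall training accuracy. It will fit the majority group.
def accuracy(threshold, data):
    return sum((x >= threshold) == sick for _, x, sick in data) / len(data)

best_threshold = max(range(20, 80), key=lambda t: accuracy(t, train))

# Balanced test set: now each group's error rate is visible separately.
test = [make_patient(g) for g in ("A", "B") for _ in range(2000)]
for group in ("A", "B"):
    sick_cases = [(x, s) for g, x, s in test if g == group and s]
    missed = sum(x < best_threshold for x, _ in sick_cases)
    print(f"group {group}: {missed / len(sick_cases):.0%} of sick patients missed")
```

Running this sketch, the threshold that looks best overall misses roughly half of the sick patients in the minority group while catching most of the majority group, and that per-group gap is exactly the kind of measurement that the benchmarks and regulations described above could require developers to report and close.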
There are additional uses for AI that do not involve direct patient care, such as in public health, specifically in predicting disease outbreaks. This involves an AI analyzing public reports, social media, seasonal trends, population trends, and environmental factors, according to the National Center for Biotechnology Information. Of course, there are possible issues and ethical considerations here as well. What happens if an AI makes a false prediction? It could create mass hysteria, drive overproduction of unneeded resources, and dramatically worsen crime. Just as with an AI error during surgery, who is to blame if the system malfunctions in this way and causes these harms? The answer is not entirely clear, and the wrong groups of people could end up scrutinized or even persecuted. Additionally, issues regarding privacy and data security continue to pose a threat. As mentioned, AI analyzes public health reports and data concerning patients' health histories, which becomes a serious problem if the AI system is hacked or breached, compromising individuals' privacy and information. Even though this is a valid concern, the goal of using AI in disease prediction is public health, and if it is shown to predict disease outbreaks more accurately than humans, resulting in many saved lives, this is simply a compromise that must be made. For applications of AI that have been shown to improve survival rates over the existing state of healthcare without raising issues of racial or ideological bias, such as diagnostic imaging and reduced radiotherapy waiting times, according to the National Library of Medicine, a utilitarian approach should be taken.
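As a rough illustration of how such a system might combine the signal types named above, here is a hypothetical Python sketch; the feature names, weights, and alert threshold are assumptions for demonstration, not a validated epidemiological model. A real system would learn the weights from historical outbreak data rather than hard-coding them.

```python
# A minimal sketch of combining outbreak signals (public reports, social
# media, seasonal and environmental trends) into a single risk score.
# Feature names and weights are illustrative assumptions, not a real model.
import math
from dataclasses import dataclass

@dataclass
class RegionWeek:
    clinic_reports: int      # confirmed case reports from public health data
    social_mentions: int     # symptom-related social media posts
    seasonal_index: float    # 0..1, how close we are to peak flu season
    humidity_anomaly: float  # environmental factor, deviation from normal

def outbreak_risk(r: RegionWeek) -> float:
    """Logistic combination of signals into a 0..1 risk score.
    Real systems would learn these weights from historical outbreaks."""
    z = (-4.0
         + 0.030 * r.clinic_reports
         + 0.002 * r.social_mentions
         + 2.0   * r.seasonal_index
         + 0.8   * r.humidity_anomaly)
    return 1 / (1 + math.exp(-z))

week = RegionWeek(clinic_reports=85, social_mentions=1200,
                  seasonal_index=0.9, humidity_anomaly=0.5)
risk = outbreak_risk(week)
print(f"outbreak risk: {risk:.0%}")
if risk > 0.7:  # the alert threshold is a policy choice, not a given
    print("alert: verify with additional surveillance before acting")
```

The hard-coded alert threshold makes the false-prediction concern tangible: set it too low and false alarms fuel the hysteria and wasted resources described above, set it too high and real outbreaks slip through, so choosing it is as much a policy decision as a technical one.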