Ethical Challenges to Positioning Artificial Intelligence in Healthcare

Artificial Intelligence (AI) was founded decades ago on the premise that machines could one day emulate human cognitive functions [1]. That early research paved the way for AI systems with broad applicability, including applications in healthcare, where medical professionals are beginning to see them as powerful partners. The involvement of AI in medicine has evolved over the last five decades, and interest in medical AI applications has recently gained momentum thanks to dramatic improvements in predictive algorithms and the availability of powerful computing resources and digital data [2,3,4,5]. Healthcare AI tools can enable more informed and personalized care by synthesizing information about the patient and supporting physicians with more effective treatment options. While the benefits of AI in healthcare are considerable, these technologies also create a unique set of ethical challenges that are vast and complex. Issues such as privacy and security, bias, trust, accountability, and responsibility have dominated discussion of AI tools in healthcare to date [6,7,8,9]. AI needs large datasets to learn and make decisions, but ethnic, racial, gender, and socioeconomic groups are not sufficiently or accurately represented in modern healthcare datasets. These data disparities lead algorithms to draw erroneous conclusions that can prove harmful and discriminatory, amplifying bias and compounding existing healthcare inequalities. Intellectual property concerns and data privacy issues further reduce transparency in how AI tools are developed and tested, and many fear AI as a black box because the decisions it generates are hard to explain.
There are also growing security concerns: hackers could disrupt the normal functioning of AI tools and compromise personal patient data. Finally, providers and patients may not sufficiently understand or trust the new AI technologies, showing considerable resistance when such intelligent systems are implemented in healthcare environments [10]. Policymakers and businesses must address these ethical concerns before truly embracing automated solutions as part of the future of healthcare [11,12]. Data governance structures for AI technologies in healthcare should evolve to ensure that ethical principles are applied and risks to patients and providers are reduced [13].

Ethical Challenges

Healthcare systems are under tremendous strain from resource shortages and ever-increasing demand for quality care. AI solutions are needed within healthcare organizations to support this growing demand and augment decision-making capabilities, but ethical challenges pose a barrier to successfully implementing them. These roadblocks must be addressed for AI innovations in healthcare to scale, and a sound framework must be developed to protect patients from harm resulting from unethical behavior. It is essential to understand the underlying issues, take an interdisciplinary approach, and rely not only on the natural sciences but also on the humanities for critical viewpoints on how to position these automated technologies in healthcare. The sections below examine the ethical challenges relating to informed consent, safety and transparency, data privacy, and algorithmic fairness and bias.

Informed Consent

In the most basic sense, the process of informed consent in medical intervention is straightforward and varies in scope depending on the level of the patient's care. The care provider informs the patient of the proposed test or surgical procedure, its benefits and risks, and any alternative options. Based on that information, the patient either gives or declines consent to proceed with the treatment. When cognitive technologies are added to healthcare decisions, the consent process becomes more complicated, raising new ethical questions [14]. Clinicians are increasingly turning to AI tools without regulatory oversight to support patient care decisions. Often physicians do not seek consent from patients or their families to use an AI tool because they lack sufficient information about the tool's inner workings to disclose to the patient weighing a treatment plan. The underlying issue is that the physician does not understand how the tool arrived at its decisions, whether it was trained on unbiased medical data, or whether it is capable of making decisions for the particular patient group. Nor is the physician knowledgeable about the specific tool's prior success or failure rates, information patients would need to make informed decisions. Without such disclosure, patients are unaware that their treatment plan involves a decision made by an AI tool. A biased model decision can deprive patients of necessary care or subject them to unnecessary interventions that can lead to complications or even death. Physicians counter that they do not rely solely on a particular AI tool; it is a small part of an overall decision-making process involving many other factors. They also argue that AI tools are like any other medical tools, part of routine clinical care that does not require explicit patient consent [15].


AI's influence in healthcare is expanding more rapidly than expected. New laws, guidelines, best practices, and standards are evolving to resolve some of the ethical issues related to the informed consent process. Healthcare providers will have to do their part and remain vigilant about their legal and ethical responsibilities by disclosing AI usage in patient treatment decisions. Physicians using AI tools for medical intervention should be more aware of the technology, including its limitations. They should be able to allay patient fears and inaccurate perceptions about AI technologies through appropriate engagement when discussing treatment plans, including the risks and benefits of using such tools [16]. Social and cultural factors can significantly influence the informed consent process, and providers should respect patients' autonomous decisions and make them aware of that autonomy [17].

Safety and Transparency

Healthcare is a safety-critical domain, and when AI solutions are used as diagnostic tools for medical intervention, their safety and reliability become critical. Decision-based AI solutions rely heavily on large training datasets to provide accurate diagnostic options for clinicians, so much of the success of integrating AI tools safely into healthcare depends on the availability of accurately and sufficiently representative data. Numerous legal and cultural challenges surround sharing and utilizing health data for AI applications. Privacy protection laws may limit information sharing, particularly when the data includes sensitive personal information. In addition, embedded bias and inaccurately recorded data in health systems may limit the accuracy of AI analysis. Limited centralized health data sources also prevent AI tools from providing comprehensive diagnostic analysis: patient records may sit in siloed health systems the AI tool cannot access for its decision-making [18]. Another factor affecting patient safety is the algorithms embedded within the AI tools themselves. Their limited transparency and interpretability, commonly referred to as the black box problem, make it difficult for providers to assess the safety and effectiveness of AI tools. It is also hard to explain decisions related to patient diagnosis, which can change over time as the tools continuously learn and adapt. Because developers share limited information about their AI tools, healthcare providers struggle to understand the testing methodology used and whether the tools were validated across multiple health environments and patient groups. Over-reliance on an AI tool, known as automation bias, can occur when providers accept erroneous decisions from the tool without scrutiny, jeopardizing patient safety. Unintended algorithmic biases can further add to healthcare inequalities and patient safety issues [19,20].


The availability of high-quality, representative, bias-free healthcare data will allow developers to build AI applications that provide transparent and equitable decisions for clinicians. To that end, policymakers and industry leaders should discourage inaccessible data silos across healthcare settings and consider creating secure, cloud-based, centralized healthcare data warehouses to help developers build and train AI tools. Experts should also enact appropriate policies, procedures, and standards to prevent risks associated with data misuse and privacy. For AI tools to be deployed successfully in healthcare settings, best practices should be established for tool implementation, bias-free data collection, and interoperability with other medical systems [21]. Patients and clinicians should be educated so they can judge whether using AI tools in healthcare settings is safe and effective. Establishing their trust in AI tools requires transparency and clinical credibility: developers should combat the black box problem, ensure the AI tools are explainable, and verify that the tools' predictions make sense to the clinicians who ultimately use them. A robust FDA approval process, including the posting of updates, could also promote explainable AI and public trust. Detecting architectural bias within AI tools can be difficult; understanding the data a tool uses, together with frequent monitoring, can prevent such issues [20]. Finally, when AI tools are deployed in clinical settings, a framework is needed to handle accountability for the possible safety consequences of using them [22].

Data Privacy

AI tools in healthcare use historical patient data, consisting of medical and contextual data, to derive decisions that help clinicians improve diagnosis and disease understanding [23]. However, the adoption of AI in healthcare is not straightforward: its future rests on how well the privacy and security of patient data are assured and public trust in such tools is established. When building AI tools, developers must consider federal and state laws and regulations concerning the patient data they collect [24]. Medical data is highly personal, and handling data privacy and security issues is essential when designing and developing AI solutions for healthcare. When health data is leveraged for AI tools, it should be de-identified to safeguard patient information. De-identifying health data involves removing patient identifying markers such as names, birth dates, and gender. Under HIPAA rules, de-identified data is unprotected and available for AI developers to combine with other information as training data, but even de-identified data can be easily re-identified when combined with other datasets [25]. AI tools performing predictive analysis require large medical datasets to learn from and provide decisions. Hackers consider such medical datasets treasure troves, and the datasets are constantly under cyberattack. Patient medical data is accessed by individuals in various departments within a healthcare facility and by outside healthcare providers, often through web-based systems that are vulnerable to attack. Outdated applications and legacy software used within healthcare organizations add further vulnerabilities, and negligent or untrained personnel can unknowingly cause data breaches. In many cases, clinical data is consolidated from multiple institutions so the algorithms can gain understanding and insight; data shared across public and private institutions can lead to breaches and raise patient consent and privacy issues [26]. Integrating AI tools with legacy applications within healthcare organizations poses additional data security and privacy challenges. AI is driving research and development in healthcare that depends on vast volumes of patient data, and such patient-centric data is expected to grow exponentially, increasing stress on data centers and heightening security concerns.
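To make the de-identification step concrete, here is a minimal sketch with hypothetical field names (real HIPAA Safe Harbor de-identification covers eighteen categories of identifiers, not the handful shown here): it strips direct identifiers from a record and coarsens a quasi-identifier.

```python
# Hedged sketch, hypothetical field names: remove direct identifiers and
# generalize a quasi-identifier before records are shared for AI training.
DIRECT_IDENTIFIERS = {"name", "birth_date", "address", "phone"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record without direct identifiers,
    with the ZIP code truncated to its first three digits."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "zip" in cleaned:
        cleaned["zip"] = cleaned["zip"][:3] + "**"  # coarsen quasi-identifier
    return cleaned

patient = {"name": "Jane Doe", "birth_date": "1980-02-14",
           "zip": "90210", "diagnosis": "E11.9"}
print(de_identify(patient))  # {'zip': '902**', 'diagnosis': 'E11.9'}
```

Even records scrubbed this way can often be re-identified by linking the remaining quasi-identifiers (ZIP prefix, diagnosis, dates) against other datasets, which is why de-identification alone does not guarantee privacy.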


Managing data privacy and security issues requires a combination of novel approaches and techniques. To assist with the development and validation of AI tools in healthcare, synthetic datasets offer an alternative that captures the patient characteristics of real datasets while avoiding patient privacy issues and risks of personal identification [27]. De-identification of healthcare data plays a crucial role in ensuring patient privacy, but the task requires collaboration at every level of the healthcare ecosystem where health data is created, handled, stored, and transmitted. Institutions releasing data from their control should ensure that health data containing personally identifiable information is correctly de-identified [28]. Healthcare is not immune to cyberattacks, and such attacks are on the rise; healthcare organizations should prioritize the security of their systems and networks. Newer security technologies such as Cyber-AI can continuously analyze data flows and build an understanding of normal and abnormal activity to identify sophisticated cyberattacks. Ideal security for healthcare systems should support business as usual, automatically investigate incidents as soon as they are detected, and take appropriate targeted actions [29,30]. Healthcare organizations should train and educate staff on data privacy and security, upgrade legacy systems, and constantly update to the latest software to thwart security issues [31].
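The synthetic-data idea can be illustrated with a toy sketch (the numbers below are made up, not a real cohort): fit a simple marginal distribution to one attribute of the real data and sample fresh values from it. Production tools model joint distributions across many attributes, but the principle is the same — statistical similarity without copying any individual patient.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical ages from a small "real" cohort (made-up numbers).
real_ages = [34, 51, 67, 45, 72, 58, 40, 63]
mean = sum(real_ages) / len(real_ages)
std = (sum((a - mean) ** 2 for a in real_ages) / len(real_ages)) ** 0.5

def synthetic_age() -> int:
    """Sample a new age from the fitted normal marginal."""
    return max(0, round(random.gauss(mean, std)))

# 1,000 synthetic values: statistically similar to the cohort,
# but no real patient's record appears in the output.
cohort = [synthetic_age() for _ in range(1000)]
```

A synthetic cohort like this can be shared with AI developers for training and validation while the real records stay inside the institution.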

Algorithmic Fairness and Bias

AI-enabled solutions have tremendous potential to transform healthcare, but there are troubling indications of bias in AI decisions that can have catastrophic consequences for patient care. In one study, an algorithm used by healthcare providers favored white patients over Black patients for extra medical care because it relied on past healthcare spending; Black patients historically had less access to healthcare and therefore less opportunity to pay for and receive care. Despite many recent advances in algorithm development, concerns about algorithmic bias persist, leading to unintended or suboptimal outcomes [32]. Bias occurs when the healthcare datasets used by AI do not reflect the actual population distribution, causing the tools' decisions to diverge from accurate estimates. Algorithmic bias is not exclusive to race; it is also exacerbated by factors such as gender inequality. For example, cardiovascular disease presents differently in men and women, so prediction models trained predominantly on data from men may not accurately diagnose women. AI needs large datasets to learn from data patterns, but ethnic, racial, gender, and socioeconomic groups are not sufficiently or accurately represented in modern health datasets. These disparities lead to unfair representation by algorithms, exacerbating healthcare inequalities: when an AI tool is used to diagnose patients who are minimally represented in the training data, it can fail entire patient groups. Sources of bias exist at most stages of the algorithmic development process. Data-driven bias occurs when AI training datasets are not representative of the human population. Algorithmic bias occurs when AI models trained on biased data reinforce the patterns of the dominant category in the training data. Human bias occurs when algorithm development teams lack diversity. Developing inclusive technologies such as AI for healthcare relies on comprehensive datasets that do not exclude the diversity of the human population.
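One practical check implied by this discussion is a representation audit: compare each group's share of the training data with its share of the target population before the model is trained. A minimal sketch, with made-up group labels and population shares:

```python
from collections import Counter

def representation_gap(train_groups, population_shares):
    """Per-group difference between training-set share and population share.
    Positive = over-represented, negative = under-represented."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    return {g: round(counts.get(g, 0) / total - share, 3)
            for g, share in population_shares.items()}

# Hypothetical example: group C makes up 15% of the population
# but only 5% of the training data.
gaps = representation_gap(
    ["A"] * 80 + ["B"] * 15 + ["C"] * 5,
    {"A": 0.60, "B": 0.25, "C": 0.15})
print(gaps)  # {'A': 0.2, 'B': -0.1, 'C': -0.1}
```

Flagging such gaps early lets developers collect more data for under-represented groups, or at least document the populations for which the tool's decisions are unreliable.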


Designing for fair implementation of AI in healthcare should include trust, openness, and diversity in medical datasets. In recent years, open data sharing between public and private organizations has become more common in order to quantify and respond to health emergencies. Responsible data sharing frameworks, such as federated learning systems with openness at their core, could enable adequate training of AI algorithms, while robust anonymization processes at data collection sites protect patient privacy. AI tools require high-quality data from various sources, and data standardization across patient groups helps capture their characteristics, facilitating interoperability and fair AI. When AI tools have limited access to unbiased datasets, artificially generated synthetic data can augment underrepresented population groups. As with other healthcare interventions such as clinical trials, AI algorithms should be rigorously field-tested for performance and bias before being used to diagnose or treat population groups. AI developers often employ black-box approaches in which the model's decisions cannot be readily understood; they should instead be transparent and highlight the strengths and weaknesses of the model's decision-making process, including the training data used during the development lifecycle. Organizations developing AI solutions should also ensure diversity within their development teams, encouraging participation from underrepresented groups who can identify biases in health datasets against their communities [33,34]. Understanding a patient's social determinants of health is another critical factor, and integrating such measures alongside clinical data could further mitigate algorithmic bias in healthcare [35].
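The federated learning approach mentioned above can be sketched in miniature: each site trains a model on its own patients and shares only the model parameters, which a coordinator combines, weighted by site size. The numbers below are toy values, not a real system, and real deployments add secure aggregation and differential privacy on top of this core step.

```python
def fed_avg(site_weights, site_sizes):
    """Federated averaging: combine per-site model parameters,
    weighting each site by the number of patients it trained on."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
            for i in range(dim)]

# Two hypothetical hospitals: only these weight vectors leave the sites;
# the underlying patient records never do.
global_weights = fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(global_weights)  # [2.5, 3.5]
```

Because raw records never leave the institutions, frameworks like this let algorithms train on data from many patient groups without the centralized data pooling that raises the privacy concerns discussed earlier.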


AI in healthcare is anticipated to grow and promises exciting opportunities for the future, propelling innovations that support healthcare providers and patients in every facet of care. Yet ethical concerns pose a barrier to successfully implementing AI-driven solutions in healthcare, and these roadblocks must be addressed for AI innovations to scale. Until policymakers enact regulations and safeguards are deployed for patient safety, healthcare providers who plan to deploy AI solutions must work with such technologies responsibly. Governments should enact supportive policies, and organizations developing healthcare solutions must address ethics at every stage of the AI development lifecycle to improve care delivery, quality, safety, and the patient experience.


[1]: Technology assessment November 2020 Artificial Intelligence. (n.d.). Retrieved April 13, 2022, from 


[2]: Cohen, S. (2020, June 5). The Basics of Machine Learning: Strategies and Techniques. Artificial Intelligence and Deep Learning in Pathology. Retrieved April 12, 2022, from 


[3]: Artificial Intelligence (AI) – what it is and why it matters. SAS. (n.d.). Retrieved April 12, 2022, from 


[4]: Define_me. (n.d.). Retrieved April 12, 2022, from 


[5]: Secinaro, S., Calandra, D., Secinaro, A., Muthurangu, V., & Biancone, P. (2021, April 10). The role of Artificial Intelligence in healthcare: A structured literature review - BMC medical informatics and decision making. BioMed Central. Retrieved April 12, 2022, from 


[6]: Ai effect: How ai is making healthcare more human. GE Healthcare Systems. (n.d.). Retrieved April 12, 2022, from 


[7]: Spatharou, A., Hieronimus, S., & Jenkins, J. (2021, July 1). Transforming healthcare with ai: The impact on the workforce and Organizations. McKinsey & Company. Retrieved April 12, 2022, from 


[8]: How AI is Revolutionizing Healthcare: USAHS. University of St. Augustine for Health Sciences. (2022, February 8). Retrieved April 13, 2022, from 


[9]: Murphy, K., Di Ruggiero, E., Upshur, R., Willison, D. J., Malhotra, N., Cai, J. C., Malhotra, N., Lui, V., & Gibson, J. (2021, February 15). Artificial Intelligence for Good Health: A scoping review of the ethics literature - BMC Medical Ethics. BioMed Central. Retrieved April 12, 2022, from 


[10]: Artificial Intelligence (AI) in Healthcare: Applications, risks, ethical and societal impacts. European Science-Media Hub. (2022, February 16). Retrieved April 12, 2022, from 


[11]: Davies, C. W. (2021, June 23). Why Healthcare Providers need a policy on AI ethics. Pinsent Masons. Retrieved April 12, 2022, from 


[12]: Rigby, M. J. (2019, February 1). Ethical dimensions of using artificial intelligence in Health Care. Journal of Ethics | American Medical Association. Retrieved April 12, 2022, from 


[13]: Gattadahalli, S. (2020, November 4). Ten steps to ethics-based governance of AI in health care. STAT. Retrieved April 12, 2022, from 


[14]: Risk management tools & resources. Artificial Intelligence and Informed Consent | MedPro Group. (n.d.). Retrieved April 12, 2022, from 


[15]: Robbins, R., & Brodwin, E. (2020, July 15). An invisible hand: Patients aren't being told about the AI systems advising their care. STAT. Retrieved April 12, 2022, from 


[16]: Risk management tools & resources. Artificial Intelligence and Informed Consent | MedPro Group. (n.d.). Retrieved April 12, 2022, from 


[17]: Cultural differences and the understanding of informed consent. (n.d.). Retrieved April 13, 2022, from 


[18]: Ross, P., & Spates, K. (2020, October). Considering the safety and quality of Artificial Intelligence in health care. Joint Commission journal on quality and patient safety. Retrieved April 12, 2022, from 


[19]: Transparency and responsibility in Artificial Intelligence. (n.d.). Retrieved April 13, 2022, from 


[20]: Technology assessment November 2020 Artificial Intelligence. (n.d.). Retrieved April 13, 2022, from


[21]: Ross, P., & Spates, K. (2020, October). Considering the safety and quality of Artificial Intelligence in health care. Joint Commission journal on quality and patient safety. Retrieved April 12, 2022, from 


[22]: Naik, N., Hameed, B. M. Z., Shetty, D. K., Swain, D., Shah, M., Paul, R., Aggarwal, K., Ibrahim, S., Patil, V., Smriti, K., Shetty, S., Rai, B. P., Chlosta, P., & Somani, B. K. (2022). Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers. Retrieved April 12, 2022, from 


[23]: Tucker, A., Wang, Z., Rotalinti, Y., & Myles, P. (2020, November 9). Generating high-fidelity synthetic patient data for Assessing Machine Learning Healthcare Software. Nature News. Retrieved April 12, 2022, from 


[24]: Blum, M. (2021, August 6). Securing Healthcare AI with confidential computing. BeeKeeperAI. Retrieved April 12, 2022, from 


[25]: Sagar, R. (2021, August 2). Why de-identifying data doesn't ensure privacy. Analytics India Magazine. Retrieved April 12, 2022, from 


[26]: Ahmed, D. (n.d.). The importance of data security in AI-Enabled Healthcare. Blog. Retrieved April 12, 2022, from 


[27]: Tucker, A., Wang, Z., Rotalinti, Y., & Myles, P. (2020, November 9). Generating high-fidelity synthetic patient data for Assessing Machine Learning Healthcare Software. Nature News. Retrieved April 12, 2022, from 


[28]: Stalla-Bourdillon, S. (2022, March 23). What is data de-identification and why is it important? Immuta. Retrieved April 12, 2022, from 


[29]: WIRED Insider. (2020, April 30). AI in healthcare: Protecting the systems that protect us. Wired. Retrieved April 12, 2022, from 

[30]: Majumder, A. K. M. J. A., & Veilleux, C. B. (2021, May 11). Chapter: Smart Health and Cybersecurity in the era of Artificial Intelligence. IntechOpen. Retrieved April 12, 2022, from 

[31]: 5 effective ways to prevent data breaches. Cipher. (2020, September 8). Retrieved April 12, 2022, from 

[32]: Ho, C. (2020, August 31). How to mitigate algorithmic bias in healthcare. MedCity News. Retrieved April 12, 2022, from 


[33]: Define_me. (n.d.). Retrieved April 12, 2022, from 


[34]: Shastri, A. (2020, September 17). Diverse teams build better AI. here's why. Forbes. Retrieved April 12, 2022, from 


[35]: HealthITAnalytics. (2021, September 10). Using SDOH data to enhance artificial intelligence, outcomes. HealthITAnalytics. Retrieved April 12, 2022, from
