Journal of Science Policy & Governance | Volume 25, Issue 01 | October 28, 2024
Policy Position Paper: Reducing Racial Biases within Healthcare Applications of Artificial Intelligence (AI) With Transparency
Mishayla Harve1, Sakthi Priya Ramamoorthy1, Viresh Pati1, Garen Bainbridge1, Abigayle Kankolenski1, Bratee Podder1, Matthew Sampt1
Corresponding author: [email protected]
Keywords: artificial intelligence; AI; healthcare; healthcare equity; racial bias; transparency; data collection; accountability
Executive Summary
Artificial intelligence (AI) is increasingly used in healthcare for applications such as drug discovery, diagnostics, disease management, and service delivery. However, integrating AI into healthcare raises concerns about reinforcing existing societal prejudices: AI systems are known to exhibit racial bias, making inaccurate and unreliable decisions based on race even when race is irrelevant to the task. Furthermore, government directives currently lack consistent standards for regulating AI and offer insufficient guidance on preventing the perpetuation of harmful racial biases, especially in healthcare. Ensuring transparency in these systems is essential to improving the quality of life of patients who interact with them. It is equally vital that innovation intended to improve healthcare strengthens the integrity of the patient experience rather than compounding existing systemic disparities. The authors propose three recommendations to address racial biases in healthcare applications of AI and emphasize the need for legislation that places AI regulation in healthcare at the forefront of healthcare policy agendas.
Background header image courtesy of CNN.
Mishayla Harve is an undergraduate student at the Georgia Institute of Technology in the School of Public Policy and the Department of Neuroscience. She is motivated to advocate for healthcare accessibility and community wellness.
Sakthi Priya Ramamoorthy is an undergraduate pre-medicine student at the Georgia Institute of Technology in the Department of Neuroscience. She is interested in addressing the gaps in policy so that healthcare can be more accessible to all people.
Viresh Pati is an undergraduate student at the Georgia Institute of Technology in the College of Computing. He is interested in artificial intelligence and its applications in finance, policy, and healthcare.
Garen Bainbridge is an undergraduate student at the Georgia Institute of Technology in the Department of Neuroscience. He is interested in researching the socioeconomic factors contributing to health disparities, especially human neuromuscular illnesses.
Abigayle Kankolenski is an undergraduate student at the Georgia Institute of Technology in the College of Computing and the School of Public Policy. With personal interests in AI, reading, policy, and politics, she aspires to a career in science and technology policy.
Bratee Podder is an undergraduate student at the Georgia Institute of Technology in the College of Computing. She is interested in ensuring that artificial intelligence is sufficiently regulated to contribute to the well-being of people.
Matthew Sampt is an undergraduate student at the Georgia Institute of Technology in the College of Computing. He has a special interest in the intersection between artificial intelligence and healthcare.
Acknowledgments
All authors are associated with the Health and Biotechnology Committee, part of the Georgia Institute of Technology's Organization of Science and Technology Policy Connections (S&T PC). S&T PC is dedicated to exploring policy related to science and technology. We would like to thank the faculty and leadership of the organization for their guidance; without them, this work would not have been possible.
DISCLAIMER: The findings and conclusions published herein are solely attributed to the author and not necessarily endorsed or adopted by the Journal of Science Policy and Governance. Articles are distributed in compliance with copyright and trademark agreements.
ISSN 2372-2193