Mercy Nyamewaa Asiedu
I am a research scientist at Google Research, and my research interests lie in developing fair and robust algorithms for equitable health in underserved settings. Before Google, I was a Schmidt Science Fellow at MIT, where I worked on machine learning for medical imaging and clinical NLP. I obtained my PhD from Duke University, where my thesis focused on developing devices and algorithms for cervical cancer screening in underserved settings.
Authored Publications
TRINDs: Assessing the Diagnostic Capabilities of Large Language Models for Tropical and Infectious Diseases
Nenad Tomašev
Chintan Ghate
Steve Adudans
Oluwatosin Akande
Sylvanus Aitkins
Geoffrey Siwo
Lynda Osadebe
Eric Ndombi
Neglected tropical diseases (NTDs) and infectious diseases disproportionately affect the poorest regions of the world. While large language models (LLMs) have shown promise for medical question answering, there has been limited work focused specifically on tropical and infectious diseases. We introduce TRINDs, a dataset of 52 tropical and infectious diseases with demographic and semantic clinical and consumer augmentations. We evaluate various contexts and counterfactual locations to understand their influence on LLM performance. Results show that LLMs perform best when provided with contextual information such as demographics, location, and symptoms. We also develop TRINDs-LM, a tool that enables users to enter symptoms and contextual information to receive a most likely diagnosis. In addition to the LLM evaluations, we conducted a baseline study with seven medical and public health experts to assess human expert accuracy in diagnosing tropical and infectious diseases. This work demonstrates methods for creating and evaluating datasets for testing and optimizing LLMs, and presents a tool that could improve digital diagnosis and tracking of NTDs.
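To make the counterfactual-context evaluation concrete, below is a minimal illustrative sketch, not the paper's released code, of how demographic context and counterfactual locations might be composed into diagnostic prompts for an LLM under test. The example case, the locations, and the query_llm placeholder are assumptions for illustration only, not items from the TRINDs dataset.

```python
# Illustrative sketch only: composing contextual and counterfactual-location prompts
# to probe how added context shifts an LLM's diagnostic answer. The case details,
# locations, and query_llm() are hypothetical placeholders, not TRINDs data.
from typing import Optional

CASE = {
    "demographics": "28-year-old woman",
    "symptoms": "fever, joint pain, and a maculopapular rash for three days",
}

COUNTERFACTUAL_LOCATIONS = ["Kisumu, Kenya", "Lagos, Nigeria", "Boston, USA"]


def build_prompt(case: dict, location: Optional[str] = None) -> str:
    """Compose a consumer-style diagnostic query, optionally adding location context."""
    prompt = f"A {case['demographics']} reports {case['symptoms']}."
    if location is not None:
        prompt += f" She lives in {location}."
    return prompt + " What is the most likely diagnosis?"


def query_llm(prompt: str) -> str:
    """Placeholder for a call to the LLM under evaluation."""
    raise NotImplementedError("Wire this to the model being tested.")


if __name__ == "__main__":
    print(build_prompt(CASE))  # symptoms and demographics only, no location
    for loc in COUNTERFACTUAL_LOCATIONS:
        print(build_prompt(CASE, location=loc))  # same case, counterfactual locations
```

Comparing the model's answers across the location variants is one simple way to surface whether geographic context changes the differential toward or away from tropical and infectious diseases.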
The Case for Globalizing Fairness: A Mixed Methods Study on the Perceptions of Colonialism, AI and Health in Africa
Iskandar Haykel
Aisha Walcott-Bryant
Sanmi Koyejo
With growing machine learning (ML) and large language model applications in healthcare, there have been calls for fairness in ML to understand and mitigate ethical concerns these systems may pose. Fairness has implications for health in Africa, which already has inequitable power imbalances between the Global North and South. This paper seeks to explore fairness for global health, with Africa as a case study.
We conduct a scoping review to propose fairness attributes for consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities. We then conduct qualitative research studies with 625 general population study participants in 5 countries in Africa and 28 experts in ML, health, and/or policy focused on Africa to obtain feedback on the proposed attributes. We delve specifically into understanding the interplay between AI, health, and colonialism.
Our findings demonstrate that among experts there is a general mistrust that technologies developed solely by former colonizers can benefit Africans, and that associated resource constraints due to pre-existing economic and infrastructure inequities can be linked to colonialism. In the general population survey, roughly 40% of respondents on average associated an undercurrent of colonialism with AI, a sentiment most dominant among participants from South Africa. However, the majority of general population participants surveyed did not think there was a direct link between AI and colonialism. Colonial history, country of origin, and national income level were specific axes of disparity that participants felt would cause an AI tool to be biased.
This work serves as a basis for policy development around Artificial Intelligence for health in Africa and can be expanded to other regions.
A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
Heather Cole-Lewis
Nenad Tomašev
Liam McCoy
Leo Anthony Celi
Alanna Walton
Akeiylah DeWitt
Philip Mansfield
Sushant Prakash
Joelle Barral
Ivor Horn
Karan Singhal
Nature Medicine (2024)
Large language models (LLMs) hold promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. We present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and conduct a large-scale empirical case study with the Med-PaLM 2 LLM. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases and EquityMedQA, a collection of seven datasets enriched for adversarial queries. Both our human assessment framework and our dataset design process are grounded in an iterative participatory approach and review of Med-PaLM 2 answers. Through our empirical study, we find that our approach surfaces biases that may be missed by narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. While our approach is not sufficient to holistically assess whether the deployment of an artificial intelligence (AI) system promotes equitable health outcomes, we hope that it can be leveraged and built upon toward a shared goal of LLMs that promote accessible and equitable healthcare.
Nteasee: A qualitative study of expert and general population perspectives on deploying AI for health in African countries
Iskandar Haykel
Kerrie Kauer
Florence Ofori
Tousif Ahmad
Background: Artificial intelligence (AI) for health has the potential to significantly change and improve healthcare. However, in most African countries, culturally and contextually attuned approaches for deploying these solutions are not well understood. To bridge this gap, we conduct a qualitative study to investigate the best practices, fairness indicators, and potential biases to mitigate when deploying AI for health in African countries, as well as to explore opportunities where AI could make a positive impact in health.
Methods: We used a mixed methods approach combining in-depth interviews (IDIs) and surveys. We conducted 1.5- to 2-hour IDIs with 50 experts in health, policy, and AI across 17 countries and, through an inductive approach, performed a qualitative thematic analysis of the expert IDI responses. We administered a blinded 30-minute survey with thought cases to 672 general population participants across 5 countries in Africa (Ghana, South Africa, Rwanda, Kenya, and Nigeria) and analyzed responses on quantitative scales, statistically comparing responses by country, age, gender, and level of familiarity with AI. We thematically summarized open-ended responses from the surveys.
Results and Conclusion: Our results show generally positive attitudes and high levels of trust, accompanied by moderate levels of concern, among general population participants regarding the use of AI for health in Africa. This contrasts with expert responses, where major themes revolved around trust/mistrust, AI ethics concerns, and systemic barriers to overcome, among others. This work presents a first-of-its-kind qualitative research study on the potential of AI for health in Africa with perspectives from both experts and the general population. We hope that this work guides policymakers and drives home the need for education and for the inclusion of general population perspectives in decision-making around AI usage.
As machine learning (ML) systems see far-reaching applications in healthcare, there have been calls for fairness in machine learning to understand and mitigate ethical concerns these systems may pose. Fairness has thus far mostly been defined from a Western lens, and has implications for global health in Africa, which already has inequitable power imbalances between the Global North and South. This paper seeks to explore fairness for global health, with Africa as a case study. We propose fairness attributes for consideration in the African context and delineate where they may come into play in different ML-enabled medical modalities. This serves as a basis and call for action for furthering research into fairness in global health.
Machine learning for healthcare: A bibliometric study of contributions from Africa
Houcemeddine Turki
Anastassios Pouris
Francis-Alfred
Michaelangelo Ifeanyichukwu
Catherine Namayega
Mohamed Ali Hadj Taieb
Sadiq Adewale Adedayo
Chris Fourie
Christopher Brian Currin
Atnafu Lambebo Tonja
Abraham Toluwase Owodunni
Abdulhameed Dere
Chris Chinenye Emezue
Shamsudden Hassan Muhammad
Muhammad Musa Isa
Mohamed Ben Aouicha
Preprints (2023)
Machine learning has seen enormous growth in the last decade, with healthcare being a prime application for advanced diagnostics and improved patient care. The application of machine learning for healthcare is particularly pertinent in Africa, where many countries are resource-scarce. However, it is unclear how much research on this topic is arising from African institutes themselves, which is a crucial aspect for applications of machine learning to unique contexts and challenges on the continent. Here, we conduct a bibliometric study of African contributions to research publications related to machine learning for healthcare, as indexed in Scopus, between 1993 and 2022. We identified 3,772 research outputs, with most of these published since 2020. North African countries currently lead the way with 64.5% of publications for the reported period, yet Sub-Saharan Africa is rapidly increasing its output. We found that international support in the form of funding and collaborations is correlated with research output generally for the continent, with local support garnering less attention. Understanding African research contributions to machine learning for healthcare is a crucial first step in surveying the broader academic landscape, forming stronger research communities, and providing advanced and contextually aware biomedical access to Africa.
A framework for grassroots research collaboration in machine learning and global health
Christopher Brian Currin
Chris Fourie
Benjamin Rosman
Houcemeddine Turki
Atnafu Lambebo Tonja
Jade Abbott
Marvellous Ajala
Sadiq Adewale Adedayo
Chris Emezue
Daphne Machangara
Mennatullah Siam
International Conference on Learning Representations (2023)
Traditional top-down approaches for global health have historically failed to achieve social progress (Hoffman et al., 2015; Hoffman & Røttingen, 2015). Recently, however, a more holistic, multi-level approach, One Health (OH) (Osterhaus et al., 2020), is being adopted. Several challenges have been identified for the implementation of OH (dos S. Ribeiro et al., 2019), including policy and funding; education and training; and multi-actor, multi-domain, and multi-level collaborations. This is despite the increasing accessibility of knowledge and digital research tools through the internet. To address some of these challenges, we propose a general framework for grassroots, community-based participatory research. Additionally, we present a specific roadmap to create a Machine Learning for Global Health community in Africa. The proposed framework aims to enable any small group of individuals with scarce resources to build and sustain an online community within approximately two years. We provide a discussion of the potential impact of the proposed framework on global health research collaborations.
If you build it, they will come…or not; Considerations for Women’s Health in the post-pandemic era of Digital Innovation
Martina Anto-Ocrah
Simrun Rao
Stefanie Hollenbach
Lindsey DeSplinter
Frontiers in Public Health (2022)
Culture has been defined as "an internalized and shared framework through which both the individual and the collective experience the world". Cultural processes shape social institutions and mold, while in turn being molded by, members of a given cultural or subcultural group. The norms that culture creates can have important implications for health outcomes, as culture can shape one's recognition, interpretation, and acceptance of "disease" and "wellness". In this era of rapid digital growth and democratization, without considering and understanding what the notion of "disease" or "wellness" means to a group of people, digital health platforms may not be used as intended and risk failing.
In this paper, we use examples from digital innovations in women's health across different cultures to discuss notions of i) disease, ii) wellness, iii) care-seeking decisions, and iv) competitors and acculturation. Aligning with the World Health Organization's call to rigorously evaluate eHealth solutions to ensure that digital investments are not diverted from non-digital approaches, we urge digital scientists to explore what these constructs mean to the end user before development and/or implementation, so that digital innovations in women's health are used as designed and intended, lest they risk failing.