As AI takes over healthcare, will it bridge the gaps in care — or make them even wider for those already left behind?
AI is reshaping the medical field, offering new possibilities while raising concerns about worsening healthcare disparities for marginalised groups. In recent years, the medical field has increasingly embraced advanced technology. During the worst points of the COVID-19 pandemic, for example, over 50% of medical diagnoses shifted online. This evolution has involved the growing use of artificial intelligence (AI) to help diagnose and treat diseases. AI has the potential to make medical procedures more efficient and accessible to a wider population. It was introduced with the hope of reducing human biases in healthcare and making treatment more available for marginalised communities. However, new evidence suggests that the rapid adoption of AI in medical detection may instead worsen existing inequalities, raising serious concerns about its fair implementation in healthcare.
ChatGPT can accurately suggest diseases based on symptoms 88% of the time.
AI technologies are now used globally to detect various diseases, including different types of cancer. Many AI models have shown promising results; according to one study, publicly accessible tools like ChatGPT can accurately suggest diseases based on symptoms 88% of the time. In comparison, trained doctors achieve a 96% accuracy rate, while individuals without medical training reach only about 54%. While these results indicate that some AI models could be useful as initial diagnostic tools, it is concerning that these models are being introduced into hospitals without enough evidence of their effectiveness.
In some countries, only AI medical devices that pose a life-threatening risk to patients are required to undergo clinical trials before being used in medical settings. This lack of testing puts hospitals in a tough spot when deciding whether to adopt these technologies. In the United States, healthcare providers are often reimbursed by insurance companies for using AI tools, making them financially appealing. Additionally, with global shortages of healthcare workers, AI offers a way to ease some of the pressure on medical staff. As a result, AI is increasingly replacing human input in certain medical contexts, leading to some positive outcomes, such as reduced wait times. However, questions remain about whether the AI transition is occurring too quickly, without enough research or adjustments, raising important concerns about its impact on healthcare equality.
Studies have shown that AI models often have a disproportionate effect on women and people of colour. For instance, during the COVID-19 pandemic, one AI model in the United States misdiagnosed melanomas at a significantly higher rate in people of colour. The sources of these biases are not always easy to identify. For example, a study in the United States used an AI model to predict which hospital patients would need additional care. The model used a patient's past healthcare spending as a proxy for their future care needs: the more a patient had paid in medical bills, the more care the model assumed they would require. This resulted in a significant underestimation of the care required for African American patients compared to their white counterparts. The issue was not that these patients needed less care, but that they had historically had limited access to healthcare, and so had incurred lower healthcare costs.
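The proxy failure described above can be illustrated with a small, entirely hypothetical simulation (the data and numbers here are made up, not drawn from the actual study): two groups have identical underlying care needs, but one group's historical access to care is lower, so its recorded spending is lower, and ranking patients by spending under-selects that group.

```python
import random

random.seed(0)

# Hypothetical illustration: two groups with IDENTICAL underlying care
# needs, but group B historically had half the access to care, so its
# recorded healthcare spending is systematically lower.
patients = []
for group in ("A", "B"):
    for _ in range(1000):
        need = random.uniform(0, 1)            # true level of care needed
        access = 1.0 if group == "A" else 0.5  # group B's access is halved
        cost = need * access                   # observed past spending
        patients.append({"group": group, "need": need, "cost": cost})

# A model trained to predict cost effectively ranks patients by cost.
# Flag the top 20% of patients as "high need" using cost as the proxy.
patients.sort(key=lambda p: p["cost"], reverse=True)
selected = patients[:400]

share_b = sum(p["group"] == "B" for p in selected) / len(selected)
print(f"Share of group B among flagged patients: {share_b:.0%}")
# Both groups have the same need distribution, so a fair criterion would
# flag roughly 50% from each; cost-based selection flags far fewer from B.
```

The bias enters through the label, not the algorithm: the model can predict cost perfectly and still systematically deprioritise the group whose costs never reflected its needs.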
Additionally, some AI medical detection tools can identify a person's race just by analysing chest X-rays. This raises concerns about whether these models are using "racial shortcuts" in their diagnostic processes. AI models in healthcare typically rely on large datasets, training programs, clinical notes, public health records, and institutional policies. When these inputs contain biases, the resulting AI models typically reflect and reinforce them. For example, biases found in clinical notes from previous patient interactions can be absorbed by AI systems, affecting their decision-making.
As AI becomes more common in medical detection, it risks amplifying existing biases, leading to worse healthcare outcomes for marginalised groups.
Moreover, the large datasets used to train AI models often leave out individuals with less access to healthcare. Clinical trials and datasets are frequently skewed toward populations that already receive adequate healthcare, i.e. patients who are white, educated, and wealthy. This means that AI models are better trained to diagnose those who already receive quality care. Since the risk factors for diseases can vary widely across different groups, AI may struggle to diagnose conditions accurately in underrepresented populations. As AI becomes more common in medical detection, it risks amplifying existing biases, leading to worse healthcare outcomes for marginalised groups. This is especially concerning when AI is used to identify life-threatening conditions.
There is an urgent need for increased testing and retraining of AI models to ensure they do not perpetuate or worsen systemic inequalities. Studies have attempted to retrain AI models, and retraining can be effective for a narrowly defined population, such as the patients of a single hospital. However, when the same model is applied to different groups, the disparities frequently return, highlighting the complexity of addressing bias in AI healthcare models.
To tackle these challenges effectively, a multifaceted approach is essential. This includes diversifying the data used to train AI models to better represent the full spectrum of the population. By including data from underrepresented groups, AI models can be trained to recognize and address the unique risk factors and symptoms that may not be captured in existing datasets. Furthermore, ongoing evaluation and validation of AI tools in various clinical settings are crucial to ensuring their effectiveness and fairness across populations.
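One concrete form of the ongoing evaluation described above is disaggregated reporting: instead of a single headline accuracy, metrics are computed per demographic group so that gaps become visible. A minimal sketch of the idea, using fabricated records rather than any real clinical data:

```python
# Minimal sketch of disaggregated evaluation: report accuracy per group
# rather than one pooled number. All records below are fabricated.
records = [
    # (group, true_label, model_prediction)
    ("group_1", 1, 1), ("group_1", 0, 0), ("group_1", 1, 1), ("group_1", 0, 0),
    ("group_2", 1, 0), ("group_2", 0, 0), ("group_2", 1, 0), ("group_2", 0, 1),
]

def accuracy(rows):
    return sum(truth == pred for _, truth, pred in rows) / len(rows)

overall = accuracy(records)
by_group = {
    g: accuracy([r for r in records if r[0] == g])
    for g in {r[0] for r in records}
}

print(f"overall accuracy: {overall:.2f}")
for g in sorted(by_group):
    print(f"{g}: {by_group[g]:.2f}")
# Here the pooled accuracy of 0.62 hides that group_1 scores 1.00
# while group_2 scores only 0.25.
```

The same pattern extends to any metric (sensitivity, false-negative rate) and is exactly the kind of check that a single aggregate benchmark figure, like an overall diagnostic accuracy rate, cannot provide.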
Collaboration among stakeholders — healthcare providers, technology developers, policymakers, and community organisations — will also play a vital role in addressing the disparities caused by AI in medical diagnosis. By fostering open dialogue and sharing best practices, these groups can work together to create guidelines for the ethical development and implementation of AI technologies in healthcare.
While AI holds great promise for revolutionising medical diagnosis and treatment, its rapid adoption must be approached with caution. As we integrate AI into healthcare, it is crucial to prioritise equity and inclusivity, ensuring that technological advancements do not deepen existing disparities. By actively addressing biases, improving representation in AI training data, and committing to rigorous evaluation, we can harness the power of AI to improve healthcare outcomes for all individuals, regardless of their background. Only through concerted efforts can we ensure that the future of AI in medicine promotes equality and enhances the quality of care for everyone.
