
Health and AI: What’s Working, What’s Not, and How to Tell the Difference

  • Writer: Waweru Chris Avram
  • Oct 24, 2025
  • 5 min read


 

Health innovation is booming: AI triage chatbots, drone delivery, point-of-care diagnostics, digital adherence tools, you name it. But in public health, and especially in resource-constrained settings, the key question isn’t “is it cool?” but “will it work here, at scale, for real people, at a justifiable cost?”

 

What follows is a practical, evidence-anchored guide to where AI is delivering, what blocks adoption (particularly in Africa), and the signals that an AI tool is ready for scale.

 

Success Signals: How to Judge Whether an AI Tool Is Ready

 

One of the clearest ways to separate hype from real impact in health AI is to look at the signals that an innovation is ready for responsible adoption and scale.

 

A strong indicator is guideline inclusion or endorsement: tools that appear in reputable frameworks, such as the World Health Organization’s (WHO) recommendation of computer-aided detection (CAD) for TB screening and triage, or that have obtained regulatory clearance, such as a CE mark in Europe or FDA clearance in the United States. In Kenya, TB screening vans in high-burden counties like Kisumu and Nairobi have started using AI-powered chest X-ray analysis to reach communities where radiologists are scarce. Instead of waiting days for results, patients can be screened on the spot, with the AI flagging likely cases for confirmatory testing.

 

But guidance alone isn’t enough. The gold standard is prospective, real-world evidence: the kind of data that comes from large, multi-country trials. In Sweden, researchers showed that AI-assisted mammography cut radiologists’ workload nearly in half without missing cancers. Africa is catching up: South Africa has been piloting similar tools to address long diagnostic backlogs in public hospitals, offering hope that AI could help overburdened systems deliver faster results.

 

Transparency is another hallmark of trustworthy AI. Health workers and policymakers need clear performance metrics and calibration plans: how sensitive or specific a tool is, how it handles uncertainty, and how it can be adjusted for local populations. In Uganda, TB programs have begun calibrating CAD software to local prevalence rates, ensuring the technology doesn’t overwhelm clinics with false positives while still catching the cases that matter. WHO’s TB CAD toolkit, which helps countries fine-tune screening thresholds, is fast becoming a reference guide.
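Threshold calibration of this kind can be illustrated with a small sketch. Everything below is invented for illustration (the scores, the prevalence mix, and the function name `screening_stats` are not from any real CAD product); it simply shows how raising a CAD abnormality-score cutoff trades sensitivity against the false-positive load on clinics.

```python
# Illustrative sketch: tuning a CAD abnormality-score threshold.
# Scores are synthetic; a real program would calibrate against a
# locally collected, bacteriologically confirmed sample.

def screening_stats(scores_positive, scores_negative, threshold):
    """Sensitivity and false-positive rate at a given score threshold."""
    tp = sum(s >= threshold for s in scores_positive)
    fp = sum(s >= threshold for s in scores_negative)
    sensitivity = tp / len(scores_positive)
    fp_rate = fp / len(scores_negative)
    return sensitivity, fp_rate

# Synthetic CAD scores (0-100): confirmed TB cases vs. non-cases.
tb_scores = [92, 88, 75, 70, 64, 55]
non_tb_scores = [60, 45, 40, 33, 25, 20, 18, 12, 10, 5]

for threshold in (40, 55, 70):
    sens, fpr = screening_stats(tb_scores, non_tb_scores, threshold)
    print(f"threshold={threshold}: sensitivity={sens:.0%}, "
          f"false-positive rate={fpr:.0%}")
```

With these made-up numbers, lowering the threshold catches every case but sends more false positives for confirmatory testing; raising it does the reverse, which is exactly the balance local calibration has to strike.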

 

Then comes the question of fit within the health system. The best AI tools don’t sit in isolation; they are designed to plug into existing platforms such as electronic medical records, imaging archives, or DHIS2 (District Health Information Software 2), a free, open-source health management data platform used by governments and organizations worldwide, including the European Union (EU), to collect, validate, analyze, and present aggregate or individual health data. In Nigeria, pilot and NGO-led projects have tested AI-driven decision support tools linked with DHIS2 to flag maternal health risks in rural clinics, helping community health workers respond more quickly, though integration at the national level has yet to take hold.
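For a concrete sense of what “plugging into DHIS2” involves: DHIS2 exposes a Web API whose `dataValueSets` endpoint accepts JSON payloads of aggregate data values. The sketch below only builds such a payload; the identifiers are hypothetical placeholders (real DHIS2 instances assign their own UIDs), and the actual HTTP POST is left as a comment because it needs a live, authenticated server.

```python
import json

# Hypothetical identifiers for illustration only; a real DHIS2 instance
# assigns its own UIDs for data sets, data elements, and org units.
payload = {
    "dataSet": "MATERNAL_RISK_DS",   # hypothetical data set UID
    "period": "202510",              # ISO month period: October 2025
    "orgUnit": "RURAL_CLINIC_01",    # hypothetical facility UID
    "dataValues": [
        # e.g. number of AI-flagged maternal-risk cases this month
        {"dataElement": "AI_RISK_FLAGS", "value": "7"},
    ],
}

body = json.dumps(payload)
print(body)

# Pushing this to a DHIS2 server uses its Web API, e.g.:
#   POST {base_url}/api/dataValueSets  with the JSON body above
# (requires authentication and network access, so it is not executed here).
```

The design point is that the AI tool produces ordinary data values that flow through the same reporting pipeline health managers already use, rather than a parallel dashboard nobody checks.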

 

Offline functionality is also critical. AI needs power, stable networks, and devices, yet about 15% of health facilities in sub-Saharan Africa have no electricity at all, and only 50% of hospitals report reliable electricity, a hard ceiling on digital health scale-up. Global estimates suggest 1 billion people are served by facilities with no or unreliable power. In Turkana County, Kenya, AI diagnostic apps used by mobile health teams are designed to keep running even when internet connections drop, because in these areas connectivity can vanish for days.
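The offline resilience described here is commonly built as a store-and-forward pattern: results are saved locally first and uploaded whenever the network returns. A minimal sketch follows; the class and method names are my own, not taken from any specific app.

```python
from collections import deque

class StoreAndForwardQueue:
    """Minimal store-and-forward sketch: results are queued locally and
    flushed to the server only when a connection is available."""

    def __init__(self, send_fn):
        self.pending = deque()
        self.send_fn = send_fn  # callable returning True on successful upload

    def record(self, result):
        # Always persist locally first; never block the clinical workflow.
        self.pending.append(result)

    def flush(self):
        # Try to upload in order; stop at the first failure and keep
        # everything else queued for the next attempt.
        sent = 0
        while self.pending:
            if not self.send_fn(self.pending[0]):
                break
            self.pending.popleft()
            sent += 1
        return sent
```

While `send_fn` fails (offline), results simply accumulate; once it succeeds, `flush()` drains the backlog in order, so a team can screen for days without connectivity and sync when it returns.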

 

Equally important is governance. Many countries are still building capacity to evaluate AI medical devices, handle post-market surveillance, and align with international standards. Without strong guardrails, AI risks eroding trust rather than building it. That’s why WHO has pushed for global standards around ethics, regulation, and human oversight: its 2023 regulatory brief outlines pathways and expectations for evidence, documentation, and oversight, and provides templates for national authorities. Programs in Ethiopia and Rwanda have started embedding these safeguards into their national digital health strategies, making it clear that AI is meant to support, not replace, human clinicians.

 

And finally, there’s the money question. Pilots often look promising, but what matters is unit economics that survive scale. Hidden costs (cloud inference, device refresh cycles, data labeling, and model recalibration) often exceed license fees, so budgets must factor in electricity, connectivity, data hosting, and ongoing technical support, not just upfront licensing. In sub-Saharan Africa, where nearly half of hospitals report unreliable power, this is not a small challenge. Kenya’s Ministry of Health, with support from donors, has invested in solar power and back-up batteries across rural health facilities to strengthen digital health systems more broadly, laying the groundwork for electronic records, diagnostics, and, eventually, AI tools. As one program manager put it: “The algorithm is the easy part; the hard part is keeping the lights on.”
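The budgeting point lends itself to simple arithmetic. The figures below are entirely invented for illustration; the point is only that the license line can be a minority of total cost of ownership once power, connectivity, and support are counted.

```python
# Total-cost-of-ownership sketch for one facility running an AI screening
# tool. All figures are invented for illustration, in USD per year.
annual_costs = {
    "software_license": 2000,
    "cloud_inference": 1500,
    "device_refresh": 800,          # hardware replacement, annualized
    "solar_and_batteries": 1200,    # backup power
    "connectivity": 600,
    "support_and_recalibration": 900,
}

screenings_per_year = 6000
total = sum(annual_costs.values())

print(f"Total annual cost: ${total}")
print(f"Cost per screening: ${total / screenings_per_year:.2f}")
print(f"License share of total: {annual_costs['software_license'] / total:.0%}")
```

In this made-up budget the license is under a third of the annual bill, which is why per-screening cost at scale, not the sticker price, is the number to negotiate over.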

 

Where AI Is Delivering Today

 

In 2021, WHO formally recommended computer-aided detection (CAD) software to interpret digital chest X-rays for TB screening and triage, backed by an operational handbook and calibration toolkit to help programs set safe thresholds for local use.

This moves CAD from “promising” to “recommended,” a strong signal of maturity.  

 

Why this matters in Africa: Community screening programs in high-burden settings have used CAD to expand reach where radiologists are scarce, while WHO’s guidance details how to implement and calibrate CAD responsibly (e.g., to balance sensitivity vs. workload). 


Large prospective trials in Europe show AI-assisted mammography can maintain safety while reducing workload and, in some analyses, improve detection. A recent Swedish randomized study reported a 44% reduction in radiologist readings with AI support, suggesting measurable productivity gains without compromising detection.  

 

Responsible-use guidance is catching up

WHO has issued a suite of guidance documents that are highly relevant to buyers, implementers, and regulators, including the TB CAD operational handbook and calibration toolkit and the 2023 regulatory brief discussed above.

 

 

Bottom line: AI in health is not a silver bullet, but it is steadily proving its worth where conditions are right. The strongest gains so far, like AI chest X-rays for TB in Kenya and other high-burden countries, or AI-assisted mammography trials in Europe and early African pilots, show that when systems are well calibrated, integrated into existing workflows, and backed by reliable infrastructure, they can make healthcare more efficient and accessible. The barriers (power, connectivity, regulation, and trust) are real, but they are not insurmountable. The lesson for policymakers and health leaders is clear: invest first in the basics, demand transparency and real-world evidence, and plan for scale from day one. The general consensus is that AI will not replace doctors, nurses, or community health workers in Africa or anywhere else, but when deployed responsibly, it can give them sharper tools, faster insights, and, ultimately, more time to focus on patients.

 
 