
28 December 2025

Responsible artificial intelligence for health policy and systems research: real opportunities, with clear safeguards

We share key takeaways from a conversation convened by the WHO Alliance for Health Policy and Systems Research on the responsible and equitable use of artificial intelligence in health policy and systems research. The piece highlights concrete opportunities and essential safeguards (governance, equity, and data sovereignty), with contributions from Gabriel Rada, Executive Director of the Epistemonikos Foundation.

The WHO Alliance for Health Policy and Systems Research convened a group of experts in late September 2025 in Montreux, Switzerland, to discuss how to integrate artificial intelligence (AI) responsibly and equitably into health policy and systems research (HPSR), especially in low- and middle-income countries.

Gabriel Rada, Executive Director of the Epistemonikos Foundation, was among the experts invited to contribute from his experience both as an HPSR researcher and through Epistemonikos’ developments—particularly the Sustainable Knowledge Platform and the End-to-End Evidence (E2E Evidence) system—which use advanced AI with a strong emphasis on transparency and methodological rigor.

Why AI matters in HPSR

Unlike in many other fields, equity and participatory approaches play a central role in HPSR and in public policy processes. AI can therefore be highly useful, but its adoption requires additional safeguards to avoid widening existing gaps and to ensure legitimacy.

Which opportunities look most promising

The meeting highlighted AI’s potential to:

  • Expand capacity: lower barriers to entry for small teams and enable mentoring, peer learning, and multilingual collaboration.

  • Strengthen evidence-to-policy translation: support multilingual policy briefs and more accessible products—while recognizing that human judgment and political negotiation remain central.

Three conditions for responsible use

A cross-cutting point was that AI must respond to real needs of the health system and research work, with an explicit focus on equity. Within that framing, three critical themes emerged:

  • Governance: clear rules on how these tools are built, used, and evaluated.

  • Equity: preventing AI from deepening inequalities in capacity, access, or representation.

  • Data sovereignty: reducing dependence on a small number of global platforms by strengthening regional networks and shared frameworks.

Epistemonikos’ contribution: AI with methodological rigor

Gabriel Rada shared how systematic reviews are already being accelerated by AI, underscoring an essential methodological principle: if AI is used to “summarize studies” without a rigorous method, the results are not reliable. AI must be integrated into rigorous methods, not replace them.

This perspective aligns with our Sustainable Knowledge vision: innovation that improves efficiency and access without compromising quality, traceability, or bias control.

How we implement this at Epistemonikos: from search to study selection, with AI and methodological control

At Epistemonikos, we already apply AI in critical stages of the evidence synthesis process, especially where most of the cost and time is concentrated. Our tools combine automation support with explicit methodological criteria to improve efficiency without losing transparency.

We use AI to support the development of search strategies, assist with study selection, support risk of bias assessment, and facilitate data extraction. In all cases, the principle is the same: AI accelerates and organizes, but final decisions and methodological validation remain with the review team.

A concrete milestone: the Alliance’s AI-powered search tool

The consultation coincided with the launch of the Alliance HPSR search tool, an AI-assisted platform for finding and understanding evidence in Alliance-supported publications and open-access articles. The platform itself warns that AI-generated responses may be incomplete or inaccurate, and it recommends always verifying the context and the original sources.

What’s next

The Alliance noted that this meeting is an initial step: the learnings will inform a report to be published in 2026 on how to deploy AI responsibly and equitably in HPSR.

Link of interest
https://ahpsr.who.int/newsroom/news/item/06-11-2025-responsible-use-of-ai-for-health-policy-and-systems-research