Blog

20 February 2026

New Building Evidence in Education (BE²) working paper: Responsible Artificial Intelligence (AI) for education evidence synthesis and use in low- and middle-income countries (LMICs)

Authors:

Deborah Greebon

Suggested bibliographic citation: Greebon, D. 2026. New Building Evidence in Education (BE²) working paper: Responsible Artificial Intelligence (AI) for education evidence synthesis and use in low- and middle-income countries (LMICs). What Works Hub for Global Education. Blog. BL_2026/005. https://doi.org/10.35489/BSG-WhatWorksHubforGlobalEducation-BL_2026/005

Building Evidence in Education (BE²) has released a new working paper, Artificial Intelligence in Education Evidence Synthesis and Use in Low- and Middle-Income Countries. The paper responds to a fast-moving reality: AI is already reshaping how education evidence is found, synthesised, translated, and used. The core question is not whether AI will be used in the work of producing and applying evidence, but how to use it responsibly in ways that improve quality and access without reinforcing bias, inequities, or dependency.

This working paper is a rapid, analytical overview intended for education funders, evidence intermediaries, policymakers, and researchers working in LMIC contexts. It focuses on secondary research and evidence use by describing applications of AI across evidence synthesis tasks and evidence uptake activities, and it highlights governance and equity considerations for decision-makers.

AI is an ecosystem intervention

A key premise of the paper is that AI is not merely a tool added into an existing pipeline. It can reshape relationships, incentives, and power dynamics across the education evidence ecosystem. That matters especially in LMIC contexts, where structural barriers already shape which evidence is visible, whose knowledge is valued, and who has the capacity to translate evidence into decisions.

Where AI can help across the evidence cycle

The paper maps concrete AI use cases across two broad areas:

1. Evidence synthesis

AI is being used most credibly for structured, protocol-driven tasks such as systematic searching support, title and abstract screening, and early-stage extraction of pre-defined fields. These are areas where efficiency gains can be real if paired with careful validation and human oversight. The paper also discusses other uses, such as AI-supported evidence mapping and gap analysis to help visualise the shape of a research field.

2. Evidence uptake and use

AI is also being explored to reduce persistent barriers between research and practice. Examples include assisted drafting of evidence products for decision-makers, multilingual translation, and conversational interfaces that let users query a curated evidence base. These applications are not equally appropriate. Some are high-stakes and demand strong safeguards. In some cases, the risks may outweigh the benefits.

The paper is explicit that AI introduces significant risks to evidence integrity and equity, and that mitigations can reduce risks but cannot eliminate them. Key concerns include:

  • Bias and exclusion, driven by training data and by search and ranking tools that over-weight English-language and Global North evidence while under-representing local research and grey literature.
  • Inaccuracy and opacity, including confident-sounding errors and limited transparency into model behaviour.
  • Misinterpretation in context, including when tools miss local meaning or nuance.
  • Dependency and vendor lock-in, including uneven access to proprietary systems and potential extraction of value away from LMIC institutions.
  • Data privacy and intellectual property risks, particularly when confidential materials are uploaded into external tools without safeguards.
  • Environmental and labour costs associated with large-scale AI development and deployment.

Quality assurance and ‘good enough’ performance

Rather than framing AI as ‘better than humans’ or ‘worse than humans,’ the paper emphasises that what matters is the nature of errors, the stakes of the application, and whether processes are transparent and auditable. The same prompt can produce different outputs across runs or models, which creates methodological challenges for replicability and validity unless protocols are established and documented.

Five principles for responsible AI in education evidence

The paper proposes five principles to guide responsible, equitable engagement with AI in LMIC education evidence ecosystems:

  1. Invest in sustainable, foundational public infrastructure, such as open tools; multilingual, representative datasets; open metadata; and persistent identifiers like DOIs for research and grey literature.
  2. Ensure robust data privacy and security through privacy-by-design tools, clear protocols, and consistency with data protection laws and policies.
  3. Govern early, proactively, and inclusively so that policies and decision rights are established before scale-up, with meaningful LMIC leadership and participation.
  4. Foster a culture of critical learning and adaptation including AI literacy, transparency about methods, and open sharing of what works and what fails.
  5. Strengthen LMIC leadership and capacity so that local actors increasingly shape, own, and steward tools and expertise, rather than remaining primarily consumers of externally developed systems.

Read the paper, use it, and tell us what you are learning

Explore the working paper and executive summary here.

If you would like to connect about collaborations, training on this topic, or interest in BE²’s new Special Interest Group on AI in research, please get in touch by emailing dgreebon@building-evidence-in-education.org.