The need for more and better cost-effectiveness data to inform government delivery of foundational learning programmes
Christine H. Beggs (Alternatives in Development), Clio Dintilhac (Gates Foundation), Michelle Kaffenberger (What Works Hub for Global Education)
What does it cost for governments to deliver effective foundational learning programmes at scale?
The short answer is, we don’t know. Unfortunately, that’s pretty much the long answer also.
The existing, limited cost evidence for foundational learning programmes is noisy, often not relevant across contexts, and rarely tailored to help governments and their partners optimise investments. Several initiatives have called for cost data to be collected more widely and consistently, and organisations have been investing in synthesising the existing cost-effectiveness data. This reflection brief builds on these efforts while emphasising two priorities: we need more cost data that is relevant to scaling government foundational learning programmes, and we need more cost data tied to impact estimates to inform the design of government programmes.
There are limitations on the availability of reliable cost data.
A cost-benefit analysis of effective literacy programmes estimated a marginal per-student cost of $8 for a structured pedagogy programme design and $20 for a Teaching at the Right Level (TaRL) programme design.[1] These estimates are useful, but their sources are limited, and they mask variation in programme design and context while relying on non-comparable cost capture and cost analysis methods. Several comprehensive cost reviews of foundational literacy programmes have been conducted, with one finding per-student costs ranging from $1.64 to $41.41[2] whilst another found per-student costs ranging from $3.87 to $128.07.[3] Some of this variation is due to differing price levels across contexts and variations in programme designs. However, differing cost capture approaches, a lack of consistency and transparency in analytical methods, and an over-reliance on data from pilot programmes also contribute to the reported variation. The cost evidence for foundational numeracy programmes is even more limited than for foundational literacy programmes.
We need more cost data that is linked to impact data.
Thanks to an increase in impact evaluations in recent decades, we now have good evidence about categories of interventions that work to improve foundational literacy outcomes, such as structured pedagogy and Teaching at the Right Level (TaRL) models. While this is positive news on the effectiveness data front, without cost data to complement impact estimates, we do not have the information that we need to make better decisions.
Furthermore, we lack detailed data on which programme elements contribute the most to impact and cost. Without this level of nuance, it is difficult to design programmes with improved cost-effectiveness ratios that fit into government resource envelopes.[4] For example, addressing questions about the impact and cost of different language of instruction policies or the relative cost-effectiveness of different training and coaching models is critical for progress at scale.
As such, today, we still find ourselves without the essential information needed to improve foundational learning outcomes at scale. This gap needs to be filled.
Why does this matter? It matters for several reasons…
- The nearly US$100 billion gap in annual education funding is a good place to start.[5] With this kind of deficit, governments don’t have the luxury of unknowns and inefficiencies. They need information that allows them to weigh policy and programme options and make informed decisions. They need robust data on the effectiveness of foundational learning designs and on what these designs will cost as delivered through their systems.
- In 2021, for the first time, government contributions to education spending in low-income countries reached 50%, with contributions from households representing the next largest share of education expenditures.[6] Overseas development assistance (ODA) brings up the rear, contributing just 13% of total education expenditures in 2021. This indicates a growing need for cost data that is relevant to government budgeting. Yet the great majority of cost-effectiveness studies focus on donor-funded and partner-implemented programmes, resulting in skewed effectiveness data and pricing, and in findings that are not generalizable to government systems.
- Governments can rarely afford comprehensive pilot designs. They need effectiveness and cost data organised in a way that facilitates decisions about which components to keep and which to let go or plan for in the future.
- Hundreds of millions of children are still being left behind, not learning, day after day. This crisis will only be fully addressed when governments deliver effective foundational learning programmes at scale, and they need robust cost-effectiveness evidence to do this. The development community has a responsibility to make this happen.
Where are we now and why…?
- Very few programmes have cost or cost-effectiveness studies,[7] and those that do usually focus on pilot programmes delivered by implementing partners, which do not translate well to government costs or decision-making needs. The cost-effectiveness data that we do have are not designed to transfer across contexts, even though impact and cost estimates do not generalise well: one analysis of a remedial education programme found that cost-effectiveness can range from 0.38 to 4.3 SDs per $100 based solely on teacher salary variations across countries (see the sketch after this list).[8]
- Most current estimates are not relevant for government education budgets. The cost-per-child targets are designed with donor resource envelopes and aspirations at the center. Retrospective “adjustments” to bring donor-funded technical assistance programme models in line with available government resources have been generally unsuccessful.
- Current practice for costing government delivery of foundational learning programmes at scale is typically a budgeting exercise, usually in support of education sector plan development or in the pursuit of large education development grants. These exercises produce high-level estimations of the resources required to support education in general and fall short of budgeting for specific policy or programme designs relevant to improving foundational learning outcomes. When detailed programme costing work is done, it is often led by international experts and not integrated into government planning or well-connected to financing decisions.
- The political economy of education financing is complex, with linkages to political interests and compliance with donor requirements, creating incentives that can undermine the production of reliable cost estimates and limit the use of cost evidence.
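To make the salary-sensitivity point above concrete, here is a minimal, hypothetical sketch in Python. All figures are illustrative, chosen only to reproduce the order-of-magnitude swing that Evans and Popova describe; none are drawn from the paper itself, and the country labels and cost splits are invented for the example.

```python
# Illustrative sketch: how local salary levels swing cost-effectiveness ratios.
# All numbers are hypothetical, not taken from Evans and Popova (2014).

def cost_effectiveness(effect_sd: float, cost_per_student: float) -> float:
    """Learning gain in standard deviations (SDs) per $100 spent per student."""
    return effect_sd * 100 / cost_per_student

EFFECT_SD = 0.30         # the same impact estimate applied in every context
NON_SALARY_COST = 2.00   # assumed fixed materials/training cost per student

# Hypothetical teacher-salary cost attributable to each student, by country
salary_cost_per_student = {
    "Country A": 5.00,
    "Country B": 25.00,
    "Country C": 75.00,
}

for country, salary_cost in salary_cost_per_student.items():
    total_cost = NON_SALARY_COST + salary_cost
    ratio = cost_effectiveness(EFFECT_SD, total_cost)
    print(f"{country}: ${total_cost:.2f}/student -> {ratio:.2f} SDs per $100")
```

With an identical 0.30 SD impact, the ratio in this sketch swings from roughly 4.3 to 0.4 SDs per $100 purely because the cost denominator reflects local salaries, which is why single-number cost-effectiveness rankings travel so poorly across contexts.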
The barriers to the production and use of high-quality cost-effectiveness evidence stem from both the demand and supply sides.
On the demand side, challenges include:
- Variable demand for cost evidence from policy makers and funders, in part driven by the high cost of robust effectiveness studies.
- When governments do have cost evidence, the data often do not map well to government systems and are difficult to connect to financing decisions.
- A lack of incentives, such as from funders or academic journals, for researchers to produce cost analyses in combination with their effectiveness studies.
- Cost-effectiveness data often come later than needed for design and policy decisions and lack the nuance and generalizability that would increase demand.
- Varied political and financial interests at the international and national level can limit the utilisation of empirical cost-effectiveness data.
On the supply side, challenges include:
- Pilot programmes do not capture cost data in a manner that can be modeled at government prices.
- Standards for modeling government costs are underdeveloped and not well understood.
- Partners and governments are reluctant to share cost data.
- Cross-ministerial systems make gathering and analysing cost data challenging.
- Cost-effectiveness data are centered on whole programmes and do not include information about the relative cost-effectiveness of individual programme components.
- Methods to model cost and impact estimates across contexts and systems are limited.
Advancing this agenda requires us to consider avenues to reduce the barriers and increase the benefits of cost-effectiveness evidence work.
One of the aims of this reflection brief is to frame upcoming conversations about how actors in the education sector can expand the availability, relevance, and use of high-quality cost-effectiveness evidence for foundational learning programmes delivered at scale by governments. A few avenues to consider, in partnership with funders, implementers, researchers, and governments, include:
1) Expanding the empirical evidence on the cost and cost-effectiveness of foundational learning programmes as delivered through government systems, including cost and effectiveness estimates at the component level.
2) Advancing methodological work on estimating the cost and effectiveness of government delivery of foundational learning programmes, with an emphasis on relevance, generalizability, and transparency.
3) Facilitating a stronger connection between cost-effectiveness analyses, programme designs, and financing decisions.
We look forward to your feedback and continuing this conversation.
—
[1] Estimates are weighted averages across low- and middle-income countries. Angrist, N., et al. (2023). Improving Learning in Low- and Lower-Middle-Income Countries. (Only direct partner costs included.)
[2] Harris Van Keuren, C. (2023). Learning @ Scale: Time and Money. (Only partner costs included.)
[3] Venetis, E., Harris Van Keuren, C. (2024). Costs across Contexts: A Cross-Country Cost Analysis of Seven USAID-Funded Education Programs. (Only partner costs included.)
[4] Few studies test the cost-effectiveness of different programme components, but one example is Piper, B., et al. (2018). Identifying the essential ingredients to literacy and numeracy improvement: Teacher professional development and coaching, student textbooks, and structured teachers’ guides. World Development, 106, 324-336.
[5] The World Bank and UNESCO (2023). Education Finance Watch 2023. Washington D.C., Paris: The World Bank and UNESCO.
[6] Ibid.
[7] A review of cost-effectiveness studies in international development (Brown & Tanner, 2019) found that less than 20% of impact evaluations in international development (across all sectors) included cost data. This generally aligns with the 2023 Global Education Evidence Advisory Panel (GEEAP) report, which included 325 new studies, of which only 91 reported on cost (there were no inclusion criteria for the cost reporting, and cost analysis transparency was variable). There are exceptions; see Harris Van Keuren, C. (2023) for examples of government cost studies conducted as part of the Learning at Scale study funded by the Gates Foundation.
[8] Evans, D. and Popova, A. (2014). Cost-Effectiveness Measurement in Development: Accounting for Local Costs and Noisy Impacts. World Bank Policy Research Working Paper No. 7027.