Understanding Impact: Innovative Financing and Program Design
New funding mechanisms bring new questions about financing and public health programming.
Much of the buzz around innovative financing comes from its potential to catalyze new sources of funding. While this may be true, new funding models also introduce new stakeholders and different incentives, making money a more dynamic and influential input in program design and theory. Development Impact Bonds (DIBs) are one such example. While DIBs have been used to finance successful global health interventions, their influence on program design and impact relative to grant funding remains largely unknown.
Understanding the extent to which a funding model influences program design is crucial. Evidence suggests DIBs shift stakeholder focus to program outcomes, incentivize innovation, promote learning, and reduce donor financial risk. DIBs have also been criticized for their complexity, high transaction costs, and potential for introducing perverse incentives. All of these factors can have a profound influence on a program’s impact.
Many of the criticisms attributed to DIBs also pertain to traditionally funded programs. Misaligned incentives, unnecessary complexity, excessive overhead, and other challenges can result from any number of contextual, structural, and political factors. Public health practitioners would rightly contend that the focus of any program should always be on outcomes, regardless of the financing mechanism used. At issue here, however, is not whether these factors exist under one funding model and not the other, but to what extent they are influenced by the funding model selected.
Connecting a financing mechanism to program activities and impact presents a number of practical challenges. Program theory traditionally views funding as a fungible input. Like gas in a car, money makes the program ‘go’, but is not perceived as having a direct influence on program processes or performance.
Experimental methods to measure the effects of financing mechanisms may not be accurate or even possible. DIBs introduce new stakeholders, who in turn can influence the implementer selected, the evaluation methodology, the population targeted, the duration, or any number of other variables. An experiment comparing two implementations of a grant-funded program may capture a DIB’s influence on program processes, but it would fail to capture these important formative differences.
Understanding these so-called DIB “design effects” will require a new evidence base, which will emerge as innovative financing models are piloted in the field. In early 2020, the Foreign, Commonwealth & Development Office (FCDO) released an initial formative evaluation that explores some of the early evidence. The evaluation compared DIB-funded program design data to that of similar programs funded using a grant. While the report compared only four pairs (eight programs total), it found similar “DIB effect” differences in all of them. However, a limited comparison of imperfectly matched programs leaves many important questions unanswered.
With public health funding in short supply, new sources of capital are needed. An evidence base demonstrating the relative strengths and weaknesses of different funding models can help donors, implementers, and community leaders ensure interventions are designed and executed effectively and responsibly, regardless of how they are resourced.