Caroline Fiennes on why we need to work out where a problem really is, and what it really is, before we start writing cheques to try to solve it.
Let’s start in Nepal. As you know, Nepal is very mountainous, and its many development needs are supported by international aid and charitable funding. One study found that the bulk of aid money sent to Nepal goes to people and projects in the valleys. This is because the valleys have more roads, and therefore lower administrative costs, than the higher slopes, where access is harder. But because the valleys are easier to reach, the land there is more expensive. So, guess where the poorest people live? Up in the hills, where land is cheaper precisely because it is harder to get there. The money goes almost precisely where the poorest people are not.
Meanwhile, in India, where learning levels in public schools are often dreadful, donors like to fund inputs: more teachers, more desks and more books. But what effect do those inputs have on learning levels? Often not much. This seems counter-intuitive, but the reason is that the Indian education system doesn’t work as we expect.
For example, teachers in India are badly paid, so absenteeism is high. If there is no teacher, it doesn’t matter how many desks there are. Studies also show that teachers often teach only the top two or three students in a class, with the others more or less left behind. This seems to be a legacy of British colonialism, which sought to produce a few bureaucrats for the administrative machine.
That is the system in India. Elsewhere it is different. In Vietnam, for example, teachers teach every child, so the gap between the top and bottom learners in a class is smaller, perhaps a legacy of communism. I mention this because people often assume that findings from one resource-constrained place apply to all of them, which isn’t true. So, in India, simply providing more inputs has little effect on learning.
Both these examples underline how important it is to really understand the problem that you are trying to solve – and to remember that just because a programme sounds like a good idea does not mean it is one in practice. Indeed, it may even exacerbate the problem.
Let’s look at a programme designed to reduce the incidence of teen pregnancy. It gave ‘at risk’ teenagers an electronic doll to care for, with the idea of exposing them to the challenges of parenting. The programme’s effect was surprising. Australian researchers tracked girls aged 13-15 at 57 schools. Some of them were given these dolls and others weren’t. The researchers then followed the girls until they reached the age of 20, linking school data to hospital and abortion clinic records.
According to a study published in The Lancet, the girls who got the dolls were 36 percent more likely to become pregnant than girls in the control group: that is, the programme did the exact opposite of what was intended.
In short, donors need to do their homework. This is not just to avoid missing opportunities to help as many people as possible, but also to avoid creating new problems and exacerbating existing ones.
"Donors need to do their homework… to help as many people as possible… (and) to avoid creating new problems and exacerbating existing ones."
My organisation, Giving Evidence, works with donors to help them to give effectively. That involves finding, understanding, and applying evidence to inform where, what and how they give.
Although 'evidence-based giving' can sound very complicated, it need not be. Let me give you an example. Some years back, we worked with two Dutch women, each with a family foundation, who were interested in improving girls’ education in several countries, including the Philippines and Indonesia. They expected that more girls than boys would be out of school, and that the girls would perform less well in school, and for this reason they were considering funding catch-up programmes for the girls.
When we looked into it, we found that, in Indonesia, there was no difference between how the girls and boys were doing. This told us that girls’ participation was not a problem. However, education levels in Indonesia for both girls and boys are a long way below the OECD average. So the problem with education in Indonesia is education, not gender.
This evidence-gathering barely took a morning but prevented those foundations from wasting their money. Instead, we directed them to where their funds would be more useful.
One of the best types of research to use is called a systematic review, which offers a satellite view, pulling together all the existing evidence on a particular topic. It is not a good idea to rely on a single study, because different studies of the same thing, at different times and in different places, can find different answers. For this reason, it is important to take the combined view. Here is an example of a visual evidence map and here is a systematic review.
It is also essential to make sure you know who has done the research and why. It is, unfortunately, very common for operational charities (e.g. grantees) to evaluate their own effectiveness. Such reports are often unreliable.
There are various reasons for this. The first is that most operational nonprofits are not skilled in running social science research: they typically don’t have the skills to reliably identify the effects of their programmes. Why would they? Being good at, say, running a domestic violence refuge is a completely different skill to running research. And second, most operational charities serve too few people for studies of their work alone to be reliable. In the jargon, their sample sizes are inadequate to give results that are statistically significant. [More about this here.]
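To illustrate the sample-size point, here is a minimal sketch, in Python and with entirely hypothetical numbers, of the standard calculation researchers use to work out how many participants a two-arm study needs before a difference in outcomes becomes statistically detectable:

```python
# Illustrative sketch only: required sample size per arm to detect a
# difference between two proportions, using the standard normal
# approximation. All figures below are hypothetical, not from any charity.
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Participants needed per arm to detect p1 vs p2 at the given
    two-sided significance level (alpha) and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2                      # pooled proportion
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a 10-point improvement (50% -> 60% success) needs ~388 people
# per arm: far more than many single charities serve in a year.
print(sample_size_per_arm(0.50, 0.60))
```

The point of the sketch is simply that detecting modest effects reliably requires hundreds of participants per group, which is why evaluations of a single small charity's programme so often cannot produce statistically significant results.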
There is also a problem with incentives if charities are asked to produce evaluations on which they will be judged. Clearly, few organisations would confess to having had a small effect, or even no effect, with someone else’s money. Certainly, when I was chief executive of a charity, I took to funder meetings the graphs that went up; the ones that went down, I never showed anybody.
In 2012, the UK-based Paul Hamlyn Foundation looked at the quality of the evaluations it had received from its grantees over a five-year period. It found that not much of the research was good and that quite a lot of it was quite poor, even though the foundation was generous with its quality criteria.
Another example comes from the Arts Alliance, a coalition of arts organisations working in the UK criminal justice system. It maintains an ‘evidence library’ of evaluations of arts-based practice. In 2013, the library contained 86 evaluations, but a study found that only four of them met the quality criteria for inclusion in a government ‘rapid evidence assessment’.
On this point, New Philanthropy Capital, a UK-based social sector thinktank and consultancy, asked British charities why they did impact research. Mostly, they said it was because their funders or board had asked them to, or because they had been given funding specifically for the task.
Just seven percent said it was to improve their practice, which makes my point. By analogy, when I play tennis, I do not look to see whether my serve goes in so I can report back to somebody else that it did. I want to know that myself, in order that I can improve.
Without impartial research by skilled researchers to inform grant decisions, you risk rewarding interest-driven initiatives or, worse, supporting exaggerated or dishonest claims that will lead you to waste your money and fail the people you are trying to help.
“Base your giving on good evidence… Don’t just guess and hope.”
So, what happens when there is no available research on the cause or population you want to support? One option is to just guess and hope, which is basically a form of philanthropy roulette, and is unlikely to end well.
The other option is to use your philanthropy to produce evidence: to fund the production of the research, data and evidence that is missing. We call this ‘evidence-generating giving’.
For example, if you want to support an innovative programme, you could usefully fund rigorous research to identify its effects. But rather than asking the nonprofit to evaluate itself, it is better to fund an independent and skilled evaluator. That should avoid the problems around incentives and skills gaps.
To get around small sample sizes, researchers may look at the same type of programme run by multiple delivery organisations. For example, when the Education Endowment Foundation wanted an evaluation to identify the effects of pre-school ‘breakfast clubs’, it hired the UK’s Institute for Fiscal Studies to survey more than 100 schools. The results are here.
You could start by finding and using an Evidence and Gap Map (EGM): it will show you where relevant research already exists, which you can use, and also where it is still lacking.
Dubai Cares, a $1bn education foundation set up by Sheikh Mohammed bin Rashid Al Maktoum, the ruler of Dubai, divides its budget into two. One part goes to evidence-based giving, supporting work known to be effective; the other goes to evidence-generating giving: research, pilots and policy studies that will support the development of future programmes, whether by Dubai Cares itself or by others.
Another example of evidence-generation is Pratham, one of India’s largest NGOs, which also focuses on education. In the mid-2000s, it realised that it did not know exactly where its work was most needed, or the regions in which children were furthest behind, and therefore which areas it should prioritise. In response, it started a nationwide survey to assess children’s learning levels. The Annual Status of Education Report (ASER) goes house-to-house (to ensure inclusion of children who are out of school) in every district of the country and publishes state-by-state results.
This data has informed Pratham’s programming and has also incentivised action by the state governments, because they do not want to be seen as low performing. "Aser" in Hindi and Urdu means "impact" – and that is what the ASER has had on education, and not just in India. Many other countries have since adopted the ASER approach to assessing learning needs and outcomes.
In summary: base your giving on good evidence. If that doesn’t exist for a particular sector or geography, fund the production of evidence. Don’t just guess and hope.
About the writer
Caroline Fiennes is the founder and director of Giving Evidence, which encourages and enables giving based on sound evidence through advisory work and research. A visiting fellow at Cambridge University, she is the author of “It ain’t what you give, it’s the way that you give it” and a regular contributor to international media outlets, writing on philanthropy and charitable giving.