What FPR means by analysis of a foundation’s effectiveness

 

Author: Caroline Fiennes, Giving Evidence.

One criterion in the Foundation Practice Rating asks whether foundations “publish any analysis of its own effectiveness (this is effectiveness of the foundation not analysis from the grantees of what they are doing with the funding)”.

This article sets out what we mean by analysis of the effectiveness of a grant-maker. 

What we mean by effectiveness of a grant-maker

Clearly, grant-makers do not (normally) run programmes, so their effects are vicarious, achieved through the organisations which they fund: foundations do not vaccinate people, run shelters for homeless people or teach children to read; rather, it’s their grantees who do that. Identifying a funder’s impact can therefore be hard.

It’s nonetheless useful to ask whether a funder is doing a good job of being a funder. Further, we would argue that it’s important to ask that question, in order that funders learn where and how to improve – at being funders. 

An effective funder would (perhaps among other things):

  • Select important causes / needs
  • Find strong organisations to support. In some circumstances, that might mean funding organisations which are already strong; in others, it might mean bringing new organisations into being, or supporting organisations into new geographies, or supporting them to develop new capabilities. 
  • Fund well. So, for instance, fund without masses of administrative burden (Giving Evidence has studied and written about the administrative burden of many application processes, and how those can be reduced). 
  • Be considered by its grantees to be helpful, reasonable and a good partner. 
  • Understand which of its grants worked and why, and which didn’t and why not; be interested in the patterns there; and be looking to improve its decision-making processes. In other words, when it makes ‘yes/no’ decisions – which are at the heart of most funding operations – does it say ‘yes’ on the right occasions? Equally, does it say ‘no’ on the right occasions? So it might also be interested in what happens to work that it declines to fund – whether that work nonetheless happens and succeeds – and what it can learn from that. (NB, none of this precludes the funder from taking risk. Indeed, it enables risk-taking because it helps to show how risky something is.)

Why this matters 

Funders should get good at funding! Funding well is not trivial. 

And though there has been increasing scrutiny of the performance of operational non-profits – often driven by funders, on whom those non-profits rely – there has been much less scrutiny of the performance of funders, presumably because most funders don’t rely on anybody else. (Caroline Fiennes, who runs Giving Evidence and the FPR research operation, wrote about this in the scientific journal Nature.)

Only by analysing what they do well vs. less well can funders learn to improve, and to make their scarce resource achieve more. 

What analysis of a funder’s effectiveness can look like

Any grant-maker could answer the following three questions, which can provide useful insights:

    1. How many grants achieve their goals? (We could call this the hit rate.) Logging the goal of every grant and tracking whether those goals were met would be a big step forward. The funder can then try to find patterns in those hits and misses (a minimal sketch of such a tally appears after this list).
    2. What proportion of funds are devoted to activities such as preparing proposals or reports for the foundation?
    3. How satisfied are grantees with the foundation?
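
To make question 1 concrete, here is a minimal sketch (in Python) of how a funder might log and tally its grants. The field names and example records are entirely hypothetical; the point is only that once each grant’s goal and outcome are recorded, the hit rate – and any patterns in it – falls out of a few lines of analysis.

```python
from collections import defaultdict

# Hypothetical grant log: each record notes the grant's stated goal and whether it was met.
grants = [
    {"grantee": "A", "type": "core funding",    "goal": "open two new shelters",    "goal_met": True},
    {"grantee": "B", "type": "project funding", "goal": "vaccinate 5,000 children", "goal_met": False},
    {"grantee": "C", "type": "core funding",    "goal": "double literacy tutoring", "goal_met": True},
]

# Overall hit rate: the share of grants that achieved their stated goal.
hit_rate = sum(g["goal_met"] for g in grants) / len(grants)
print(f"Overall hit rate: {hit_rate:.0%}")

# Look for patterns, e.g. hit rate by type of grant.
by_type = defaultdict(lambda: [0, 0])  # type -> [hits, total]
for g in grants:
    by_type[g["type"]][0] += g["goal_met"]
    by_type[g["type"]][1] += 1

for grant_type, (hits, total) in by_type.items():
    print(f"{grant_type}: {hits}/{total} grants met their goal")
```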

There are further questions which can help reveal how and where funders can improve their performance, though these are harder to answer than those above. They include:

  • Are they making their yes/no decisions in the best way(s)? Almost all funders make their decisions subjectively, either by soliciting the opinions of experts about a proposal or by interviewing applicants. Research on everything from stock-picking to student admissions shows that humans exhibit weaknesses and biases when allocating scarce resources. The role of biases in foundations’ decisions has not yet been examined (to our knowledge). One funder of academic research found that shortlisting applicants on the basis of objective criteria was a better predictor of success (according to bibliographic metrics) than interviews were.
  • Which opportunities / applications did they turn down but should have funded?

FPR awards a point to foundations which publish any analyses of their performance along the lines of those questions above.

We count / would count:

  • Analysis of which types of grant / how many grants of each type achieved their goals
    • The Shell Foundation published analysis like this when it turned 10 years old: it sorted its grants simply into ‘succeeded’, ‘did OK’ and ‘failed’, and looked at the split between those categories across the three periods of its evolution. (See graph below.)
    • Giving Evidence produced analysis like this for a Hong Kong-based foundation, the ADM Capital Foundation.
    • (As mentioned, it’s fine if not every grant works. But a foundation should identify that something didn’t work and understand why, in order to learn. Perhaps it is making mistakes repeatedly simply by not realising that they are mistakes. Or perhaps every time it makes a grant to a small organisation, the grant fails. That doesn’t bar it from funding small organisations – the Dutch have a saying, “comply or explain”: if such a funder wants to support a small organisation, it can, but it should have a reason for doing so and know that it may need to handle the grant differently from usual to avoid another failure.)

We do not count analyses simply of where a funder’s grants go, e.g., by sector, geography or size of grantee. That is because such analysis indicates nothing about what the funder is doing well or poorly, and yields no lessons for how it can improve its practices.

  • Analysis of the proportion of the funds given out that grantees and applicants end up spending on dealing with the foundation, e.g., writing applications or preparing reports for it.

Giving Evidence has seen – and documented – instances where the application process is so cumbersome, and the funder deliberately solicits so many applications, that the ‘net contribution’ of the funder to the charity sector is very low. For example, NatWest once ran a funding programme to which it encouraged many charities to apply, and then rejected 93% of them; they all incurred costs. Aviva ran a somewhat similar funding programme which, if each applicant charity spent just two days on its application (hardly a stretch), would have been a net drain on the charity sector.

This analysis would not be hard. It would simply involve identifying the costs that applicants / grantees incur at each stage of the funder’s process, and the number of organisations at each stage.
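
As an illustration of that arithmetic, the sketch below (in Python) estimates a funder’s ‘net contribution’ from the number of organisations at each stage of its process and the cost each one incurs there. Every figure is hypothetical, chosen only to show the shape of the calculation; a real analysis would substitute the funder’s own numbers.

```python
# Hypothetical figures, for illustration only.
total_grants_paid = 500_000  # £ actually paid out by the funder

# Stages of the funder's process: how many organisations reach each stage,
# and roughly what each organisation spends (mainly staff time) to get through it.
stages = [
    {"stage": "outline application",  "organisations": 1_000, "cost_per_org": 400},
    {"stage": "full proposal",        "organisations": 200,   "cost_per_org": 1_500},
    {"stage": "reporting (grantees)", "organisations": 50,    "cost_per_org": 1_000},
]

total_applicant_cost = sum(s["organisations"] * s["cost_per_org"] for s in stages)
net_contribution = total_grants_paid - total_applicant_cost

print(f"Costs imposed on applicants/grantees: £{total_applicant_cost:,}")
print(f"Grants paid out:                      £{total_grants_paid:,}")
print(f"Net contribution to the sector:       £{net_contribution:,}")
```

With these made-up numbers, the costs imposed on applicants and grantees (£750,000) exceed the grants paid out (£500,000), i.e. the programme would be a net drain on the sector – exactly the situation described above.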

  • Systematic surveys of all applicants / grantees to gather their feedback. Ideally these should be run anonymously, for obvious reasons. In Year Three of FPR, we saw several examples of these – which we will write about elsewhere.

We do not count surveys which do not appear to be systematic, nor quotes / case studies about a few grantees. That is because good analysis involves the whole picture: quotes and case studies may well have been selected from just the grantees who are most positive or pliable, and are therefore not representative.

 

  • Analysis of the performance of grantees (i.e., successful applications) vs. rejected applications. We have seen this for one funder, though sadly it is unpublished. It was in academic research, where the success of projects – at least on bibliographic metrics – always eventually becomes visible and is comparable across projects. This funder compared the success of work it funded with that of work which it rejected but which did eventually get funded elsewhere. It found… no difference. In other words, its selection process was no better than random, and/or it was adding no value to its grantees. That is worth knowing.