Our Sampling Method

This article is written by Caroline Fiennes, Giving Evidence 

Why does the Foundation Practice Rating not rate the same foundations every year?

The Foundation Practice Rating rates 100 UK charitable grant-making foundations each year on their practices on diversity, accountability and transparency. The set of foundations which we research and rate changes from year to year. A couple of people have asked recently why we do that and whether it compromises the FPR’s rigour. This article explains.

Our sample

To be clear, the set of 100 ‘included foundations’, as we call them, is constructed each year as follows[1]:

  1. The five largest charitable grant-making foundations by giving budget.
  2. All the foundations which fund the Foundation Practice Rating. (There are currently 13 of them. One is not a charity: the Joseph Rowntree Reform Trust.)
  3. A random sample of: community foundations across the UK (as listed by UK Community Foundations, the membership body of community foundations), and the ~300 largest foundations in the UK (as listed in the ACF’s annual Foundations Giving Trends report).

The sample is stratified by size: a fifth is drawn from the top quintile in terms of giving budget, a fifth from the second quintile, and so on. So, for example, if no foundation funding the FPR is in the second quintile, then all 20 included foundations in that quintile are chosen randomly; whereas if three foundations funding the FPR are in the second quintile, then 17 foundations in that quintile are chosen randomly. In the top quintile, at least five ‘slots’ are filled non-randomly (by the five largest foundations), and some other ‘slots’ are filled by foundations funding the FPR, so not all of that quintile’s ‘slots’ are filled randomly. The foundations funding the FPR vary considerably in size: they are not all at the top.
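
To make the mechanics concrete, here is a minimal sketch of that selection procedure in Python. It is illustrative only: the function, the data and the handling of edge cases are our own assumptions, and the real FPR selection is made from the ACF and UK Community Foundations lists described above.

```python
import random

def build_sample(foundations, funders, per_quintile=20, n_largest=5, seed=None):
    """Illustrative stratified selection, loosely following the FPR description.

    `foundations` is a list of (name, giving_budget) pairs for the eligible
    population (~340 in the FPR's case); `funders` is the set of names of
    foundations that fund the FPR. All names and figures are hypothetical.
    """
    rng = random.Random(seed)
    ranked = sorted(foundations, key=lambda f: f[1], reverse=True)

    # Split the ranked list into five quintiles by giving budget.
    q_size = len(ranked) // 5
    quintiles = [ranked[i * q_size:(i + 1) * q_size] for i in range(4)]
    quintiles.append(ranked[4 * q_size:])  # last quintile absorbs any remainder

    # Slots filled non-randomly: the five largest foundations plus all funders.
    # (Funders that do not appear on the ranked list would be added separately.)
    fixed = {name for name, _ in ranked[:n_largest]} | set(funders)

    sample = []
    for quintile in quintiles:
        names = [name for name, _ in quintile]
        pre_included = [n for n in names if n in fixed]
        pool = [n for n in names if n not in fixed]
        n_random = max(0, per_quintile - len(pre_included))
        sample.extend(pre_included)
        sample.extend(rng.sample(pool, min(n_random, len(pool))))
    return sample
```

As in the real process, the number of random slots differs from quintile to quintile, depending on how many of that quintile’s slots are already taken by the largest five and by funders.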

We re-make the sample each year. The FPR is not a panel study: we do not track the same participants over time. This is intentional.

Notice that our sample is 100 out of about 340 foundations*, so we include ~29% of the total set. (*Those ~340 are: the ~300 on the list in the ACF report, plus about 35 community foundations, plus a couple of foundations which fund the FPR and appear on neither list.)

Why do we change the sample each year?

Well, on the first part of our sample, the five largest foundations change over time: in the three years that we have been doing this, eight foundations have appeared in the largest five at some point. Looking at the chart below, it would seem rather bizarre to continue to rate, say, BBC Children in Need – now the 11th largest foundation – just because it was in the largest five when the FPR happened to start. We always include the (then) five largest foundations because their practices dominate grant-seekers’ experiences, so it is important to reflect which foundations are the largest at the time.

[Chart: the UK’s largest charitable grant-making foundations by giving budget, in each year of the FPR]

On the second part of our sample, the set of foundations funding FPR changes: in the first year, there were only 10 and now there are 13.

On the third part of our sample, the rationale is this. First, we are trying to get a representative picture of progress across the whole foundation sector. And second, part of the ‘intervention’ of the FPR is foundations knowing that they might be included at any time. If some foundations knew that they would definitely be included, they would have an incentive to improve their practices in order to improve their grades, but other foundations would not feel that incentive, so might not improve, or at least not make so much effort to improve. Thus the random selection enables the FPR to have more influence than if it were a panel study; and our primary goal is to influence practice.

These two reasons interact. If the FPR were a panel study, quite probably the foundations included would improve more than those which are not, and we would gain no information about the set which are not included. The two groups might well diverge over time, so we would not get a sense of the sector as a whole.

Given that the sample changes, how can FPR make year-on-year comparisons?

The technique of studying a randomly-selected subset of relevant entities is used in many surveys of public opinion, including surveys of consumer confidence and voting intention. Typically, those survey 1,000 randomly-chosen adults from across the country. The sample may be adjusted to make it representative, e.g., in terms of age, gender, and the four nations of the UK. That is like the FPR ensuring that our sample is representative in terms of foundations’ size. So, when you see news stories that voting intention has changed, those are almost certainly based on sequential studies of a small set of people, and that set is freshly drawn each time.

Professor Stephen Fisher of Oxford University studies public opinion and was on the British Polling Council panel that investigated the 2015 UK General Election polls. He says:

“The methods that FPR uses are very sensible. Dividing foundations into five groups according to how large those foundations are, and then randomly selecting foundations within each group should ensure a broad and representative sample overall. Opinion polls aren’t perfect, but they typically get the share of the vote within a margin of error of +/- 4 percentage points. They come from sampling around 1 in every 30,000 voters. FPR is sampling about 1 in every 3.5 foundations: a much larger proportion of the total, and with much more coverage of the bigger foundations. On that basis, fluctuations in the FPR due to random differences in sampling should be very small indeed.”

Making year-on-year comparisons 

On the basis described above, it is rigorous to compare the full set of 100 foundations year-on-year. We made that comparison in the Year Two report – i.e., the first year when we had a previous year to compare against. In that report, we also included comparisons of:

  • The set of foundations which were included in both years 
  • The set of foundations which were randomly included in Year One with the set of foundations which were randomly included in Year Two.

In each case, we assessed the changes in overall numerical scores and in scores on each of the three domains (diversity, accountability and transparency), and we looked at whether those changes were statistically significant.
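
For illustration, here is a minimal sketch of those year-on-year checks in Python. The reports referenced above do not specify the exact statistical tests, so this sketch assumes a paired t-test for the foundations rated in both years and an independent two-sample t-test for the two fresh random draws, with invented scores throughout.

```python
# Minimal sketch of year-on-year comparisons; the test choices and data are
# illustrative assumptions, not the FPR's published methodology.
import numpy as np
from scipy import stats

# Hypothetical overall scores for foundations rated in BOTH years,
# aligned so that position i is the same foundation in each array.
both_years_y1 = np.array([42, 55, 63, 38, 71, 50])
both_years_y2 = np.array([47, 60, 61, 45, 74, 58])

# Hypothetical overall scores for the randomly-drawn foundations in each year
# (different foundations each year, so the samples are independent).
random_y1 = np.array([35, 48, 52, 60, 41, 57, 66, 39])
random_y2 = np.array([44, 51, 58, 63, 49, 62, 70, 46])

# Foundations included in both years: paired test on the change in scores.
t_paired, p_paired = stats.ttest_rel(both_years_y2, both_years_y1)
print(f"Both-years subset: mean change = {np.mean(both_years_y2 - both_years_y1):.1f}, "
      f"p = {p_paired:.3f}")

# Fresh random draws in each year: independent two-sample test.
t_ind, p_ind = stats.ttest_ind(random_y2, random_y1)
print(f"Random draws: mean Y2 - mean Y1 = {np.mean(random_y2) - np.mean(random_y1):.1f}, "
      f"p = {p_ind:.3f}")
```

The same comparison extends naturally to each of the three domain scores.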

We will repeat and extend those analyses in subsequent years.  

[1] Foundations can opt in to the FPR: they can pay to be assessed. They are treated as follows. If a foundation wants to opt in and happens to be selected randomly for inclusion, then it is treated as a normal randomly-included foundation: it does not pay, and its results are included in the analysis of the main 100. By contrast, if a foundation wants to opt in and is not selected randomly for inclusion, then it pays and is not included in the analysis of the main 100. This is to avoid selection bias in the sample.


 
