The Foundation Practice Rating (FPR) enables the foundation world to be publicly accountable for the disclosure of its activities and the diversity of its leaders.
A rating is not a ranking: foundations are not compared with each other.
As the FPR utilises standards established by a broad spectrum of stakeholders, it uncovers the elements of diversity, accountability, and transparency that those stakeholders consider most crucial. This, in turn, enables foundations to channel their efforts towards enhancing these aspects of their operations. Moreover, the FPR draws on exemplary approaches from diverse domains, including the public and business sectors.
A primary objective of the FPR is to prompt foundations to explore the resources and assistance available for enhancing their effectiveness and responsibility. Furthermore, it fosters fresh conversations about foundation practices and the processes involved in making informed decisions.
How the rating system works
The assessment exclusively targets grant-making foundations registered in the UK, excluding bodies funded by government, such as the research councils.
The selection of foundations subject to evaluation comprises the top 300 identified by the Association of Charitable Foundations (ACF), alongside UK community foundations. Data employed for the assessment is sourced independently from publicly available sources.
The FPR encompasses a set of criteria against which each foundation is independently evaluated by an impartial assessor. The criteria delineate the specific requirements for meeting each standard. The evaluation process encompasses three focal points: diversity, accountability, and transparency. These criteria are informed by existing standards in other sectors, as well as input from grantees, grant seekers, and other stakeholders in this domain.
To minimize bias, two analysts independently collect information about a foundation in a ‘blind’ manner, without knowledge of each other’s findings. The assignment of foundations to analysts is random, a technique common in various funding and academic contexts. Assessment outcomes and underlying data are made publicly available annually.
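The random, blind assignment of two analysts to each foundation could be sketched as follows. The pairing logic and names here are illustrative assumptions, not the FPR's actual tooling:

```python
import random

def assign_analysts(foundations, analysts, seed=None):
    """Randomly assign two distinct analysts to each foundation.

    Each analyst works 'blind', without knowledge of the other's findings.
    This helper is an illustrative sketch, not the FPR's actual process.
    """
    rng = random.Random(seed)
    assignments = {}
    for foundation in foundations:
        # rng.sample picks two different analysts without replacement
        first, second = rng.sample(analysts, 2)
        assignments[foundation] = (first, second)
    return assignments
```

Using a seeded random generator makes the assignment reproducible for auditing, while still being arbitrary with respect to the foundations themselves.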
How we choose each year's cohort
The FPR looks only at UK charitable grant-making foundations. Public grant-making agencies (such as local authorities or the research councils) are not included because they have other accountability mechanisms.
There are hundreds of charitable foundations in the UK, so a sample (cohort) must be taken each year. The FPR assesses 100 foundations, which are:
1. The foundations funding this project. The aim is not to criticise other foundations, but instead to improve the whole sector. The ‘Funders Group’ are assessed using the same criteria and process, as part of their own strategies for self-improvement.
2. The five largest foundations in the UK by grant budget. These foundations dominate UK grant-making overall, and therefore have a significant impact on the areas in which they give. The UK’s ten largest foundations give over 40 per cent of the total given by the UK’s largest 300 or so foundations. Of the five largest foundations assessed in Year One, only three qualified under this criterion in Year 2022/23.
3. A stratified random subset of other foundations. These are selected from:
a) community foundations for whom financial information is listed by UK Community Foundations; and
b) the UK’s largest foundations, as listed in the ACF’s Foundation Giving Trends 2021 report.
Those two sources together yield a list of 387 foundations. In the FPR sample, a fifth of the foundations are in the top quintile (by annual giving budget), a fifth in the second quintile, and so on.
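The stratified selection described above can be sketched as follows. The pool of 387 foundations and the equal-quintile design come from the text; the data structure and function name are illustrative:

```python
import random

def stratified_sample(foundations, sample_size):
    """Draw an equal number of foundations from each quintile by annual giving.

    `foundations` is a list of (name, annual_giving) pairs. This is an
    illustrative sketch of quintile stratification, not the FPR's own code.
    """
    ranked = sorted(foundations, key=lambda f: f[1], reverse=True)
    per_quintile = sample_size // 5
    quintile_size = -(-len(ranked) // 5)  # ceiling division
    sample = []
    for i in range(5):
        quintile = ranked[i * quintile_size:(i + 1) * quintile_size]
        # sample without replacement within each giving-budget band
        sample.extend(random.sample(quintile, min(per_quintile, len(quintile))))
    return sample
```

Stratifying by quintile ensures smaller foundations are represented rather than letting the few very large givers dominate the cohort.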
One non-charitable grant-making foundation, the Joseph Rowntree Reform Trust, is also included because it contributes funding to the FPR. It is assessed in exactly the same way as the charitable foundations.
In response to feedback on the 2021/22 rating, foundations are offered the option to opt in to the FPR. Such foundations pay a small fee to cover the research work; their results are published, but they are excluded from the main analysis so that self-selection does not skew the results.
The method behind the rating
1. Gathering the data: Data is gathered on the foundations using only publicly available information, specifically the foundations’ websites and the information provided on the relevant charity regulators’ websites (including the annual report). Each foundation is assessed twice, by two independent researchers. Any discrepancies between the two scores are resolved through discussion and a search for missed information. The data is then sent to the foundations to check that it is accurate.
2. Scoring the data: Not all questions are relevant to every foundation. For example, a foundation that only funds by invitation does not need to publish its eligibility criteria. When a criterion is not relevant to a foundation, it is not scored on that criterion. Allowing these ‘exemptions’ means that the maximum score within a pillar varies between foundations. A foundation’s score for each pillar is then divided by the maximum possible score for it on that pillar, giving a percentage, which is the foundation’s final score on that pillar. The full list of exemptions can be accessed here.
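The exemption-aware scoring can be sketched like this. The criterion names and point values below are invented for illustration; only the mechanism (dividing by the foundation-specific maximum) comes from the text:

```python
def pillar_score(awarded, exempt):
    """Compute a pillar percentage, excluding exempt criteria from the maximum.

    `awarded` maps each criterion to (points_scored, points_possible);
    `exempt` is the set of criteria this foundation is exempt from.
    Criterion names and weights are illustrative, not the FPR's actual list.
    """
    scored = {c: v for c, v in awarded.items() if c not in exempt}
    max_possible = sum(possible for _, possible in scored.values())
    if max_possible == 0:
        return None  # foundation exempt from every criterion in this pillar
    achieved = sum(points for points, _ in scored.values())
    return 100 * achieved / max_possible
```

Because exempt criteria are removed from both the numerator and the denominator, two foundations with different exemption lists can still be compared on the same percentage scale.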
3. Determining the rating: The foundation’s scores are converted into a grade. There are four grades, from A (the top) to D. Using four grades follows UK public sector rating and quality assessment systems, which often have four bands (e.g., Ofsted’s ratings of schools, HM Inspectorate of Prisons’ system and the Care Quality Commission’s system). The FPR reports each foundation’s grade on each pillar but not the numerical scores. This is to prevent a ranking being constructed from the data. The foundations are also awarded an overall score, which is the average of the pillar scores.
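Converting pillar percentages into a four-band grade might look like the sketch below. The FPR does not publish its numerical grade boundaries, so the 75/50/25 cut-offs here are purely hypothetical:

```python
def to_grade(percentage, bands=((75, "A"), (50, "B"), (25, "C"))):
    """Map a pillar percentage to a four-band grade, A (top) to D.

    The 75/50/25 thresholds are hypothetical assumptions; the FPR does
    not publish the numerical boundaries behind its grades.
    """
    for cutoff, grade in bands:
        if percentage >= cutoff:
            return grade
    return "D"

def overall_grade(pillar_percentages):
    """Overall rating from the average of the pillar scores, per the FPR text."""
    avg = sum(pillar_percentages) / len(pillar_percentages)
    return to_grade(avg)
```

Publishing only the letter grade, as the FPR does, discards the underlying percentages and so prevents readers from reconstructing a league table.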