Frequently asked questions

What happened to the old Guidebook?

The original EIF Guidebook, published in 2014, provided information about 50 programmes available in the UK. Like the current Guidebook, the old Guidebook provided evidence ratings to indicate programmes with stronger or weaker evidence of positive outcomes for children. Unlike the current Guidebook, these evidence ratings were based on a summary of assessments by 15 other international evidence clearinghouses.

Since 2016, EIF has conducted its own assessments of evidence. The current Guidebook was launched in 2017, and provides strength-of-evidence information based solely on EIF assessments.

  • Any programmes that were included in the old Guidebook and have been reassessed by EIF are included in the new Guidebook, with updated ratings and other information (provided they meet the level 2 threshold).
  • Some programmes included in the old Guidebook will be reassessed by EIF. For information about programmes that are currently going through the assessment process, see: Work in progress
  • Any programmes in the old Guidebook which have not been reassessed by EIF are not included. Information about these programmes is available on request.

Why do the evidence ratings start at 2?

The EIF Guidebook includes programmes that have at least preliminary evidence of achieving positive outcomes for children – which is equivalent to level 2 on our standards of evidence. Programmes that are ‘not level 2’ (or rated NL2) are not included in the Guidebook.

Previous versions of the EIF evidence standards included lower ratings of 0 and 1, where 1 indicated that a programme has a logic model and testable features of the kind that might be used to evaluate its impact at some point in the future. In our view, this distinction did not help commissioners and others to understand what the evidence was saying about likely or possible impact. To reflect this, we have adapted our evidence standards to exclude specific conclusions about programmes that do not meet the level 2 threshold.

In its current form, the Guidebook provides information based on evidence of impact gathered through a formal evaluation process, and does not make judgments about the adequacy or correctness of logic models and programme designs. Programmes that do not have such evidence are rated NL2 (‘not level 2’), rather than 0 or 1.

For a list of NL2 programmes, see: Other programmes

Unlike the cost rating, the evidence rating has no level 5.

Should I only commission programmes with evidence at level 4?

There are several advantages to implementing interventions with established evidence.

  • Level 4 or 4+ programmes have evidence showing consistent benefits in multiple places with multiple populations. But this is no guarantee that an intervention will provide these benefits again in a new setting or authority, particularly if it is not delivered as intended. Established evidence does, however, increase the likelihood that it will.
  • Interventions with established evidence also tend to be more developed, meaning that the original providers have worked through various issues that could hamper effective implementation. In particular, interventions with established evidence are more likely to be able to provide assistance to set up and ‘install’ the intervention to maximise its effectiveness, for example through recruitment guidance or training materials.

However, you should never commission an intervention on the basis of its evidence rating alone. Using evidence is one of three necessary parts of good commissioning, alongside effective implementation and careful consideration of costs and benefits. More information about effective commissioning is available on our website.

Should I decommission programmes with evidence at levels 2 or 3?

It would be unwise to automatically decommission a programme on the basis of disappointing evaluation findings. Demonstrating impact is a journey: many well-evidenced programmes have had evaluation setbacks in the past, and have used the lessons to strengthen their programme model.

Commissioners have a responsibility to encourage and support ongoing evaluation of early intervention activities. This is especially true for programmes rated below level 4: these may be well suited to particular local needs and circumstances, and lower-rated programmes can be an excellent source of innovation and experimentation, but it is important that commissioners make a commitment to monitor, test and adapt. More information about effective commissioning is available on our website.

For more information on what the evidence ratings mean for commissioning, see: How to read the Guidebook

How should I use the information on child outcomes & impact?

It is important to recognise that magnitude of effect is only one piece of information that feeds into a broader commissioning decision. Moreover, interpreting effect sizes is not straightforward: it involves an element of judgment and meaningful use of context.

Crucially, bigger is not always better. What is a good or large improvement index score depends on the context for your decision, including the range of programmes that are available and their differing track records of improving outcomes. There are a number of important factors to consider.

  • Cost: Effect sizes should be considered relative to cost. An effect that appears quite small on the face of it may actually be very practically meaningful and desirable if it is achievable at a low cost. Similarly, what appears to be a larger effect might not be as meaningful if it is only achievable at a prohibitively high cost. The Guidebook currently includes a cost rating to provide a comparison of the relative cost of interventions. (A simple illustration follows this list.)
  • Implementability: Effect sizes should be considered relative to how straightforward or difficult it will be to implement the programme in your situation. An effect that appears to be quite small may actually be practically meaningful and desirable if other similar interventions, perhaps producing larger effects, are prohibitively difficult to implement under normal circumstances. The Guidebook currently describes the implementation requirements of each programme, to help you make an informed judgment about this.
  • Target population: When comparing interventions on their effects and impact, it is important to be aware of their target populations. Targeted interventions tend to have larger effect sizes than universal interventions, because they narrow the population of study participants down to those most likely to benefit. A larger effect under these circumstances is not necessarily more desirable and meaningful than a smaller effect that is achievable at scale for a larger number of people. Similarly, a smaller effect that is achieved with a group that is hard to influence is not necessarily less desirable and meaningful than a larger effect that is achieved with a group that is typically easier to influence.
  • The outcome itself, how difficult it is to improve, and the relative success of other interventions: Not all outcomes are equal, and an effect that appears to be quite small might actually be practically meaningful and desirable, compared to a larger effect, if it is on a more substantively important outcome. For example, if you’re comparing two programmes that ultimately aim to reduce violent crime, a smaller effect on reducing arrests may be considered more valuable than a larger effect on an intermediate outcome, such as externalising behaviour problems. Similarly, an effect that appears to be quite small may actually be practically meaningful and desirable if there is no evidence of other similar interventions producing larger effects. The comparative context of what other, similar interventions can achieve is crucial.
  • How the outcome is measured: It is important to look into how the outcome was measured when weighing up what is a more meaningful effect. An example of this is where a behavioural outcome (say, smoking) is measured using self-reports on the one hand, and a biochemical test on the other. We may have reason to believe that the latter is a more reliable and trustworthy measure of this behaviour. In which case, a larger effect on the self-report should not necessarily be considered more meaningful or desirable than a smaller effect on the more objective measure.
  • The counterfactual: The estimates of impact described on the Guidebook are calculated by comparing the outcomes of the intervention group to the outcomes of a comparison group. However, in practice, the nature of the comparison group can vary from one evaluation to another, and this can influence the size of the estimated impact. In some cases, the comparison group may receive no intervention or services; in others, it may receive a different intervention. Generally speaking, impacts will be larger when an intervention is compared to ‘no intervention’, and smaller when it is compared to another, alternative intervention. Therefore, when comparing and contrasting impacts, it is important to be aware of the nature of the comparison group, and to weigh this in your decision-making. A smaller effect when an intervention is compared to an alternative intervention (which may itself be effective) is not necessarily less meaningful or desirable than a larger effect when the intervention is compared to no intervention.
  • Time of measurement: The size of effects and impacts can vary depending on when outcomes are tested. It is not unusual to see effects ‘fade out’ over time – that is, to get smaller the longer after the end of the intervention they are tested. Therefore it is important to consider this when comparing programmes on their effects and impacts. A larger effect measured immediately after the intervention is not necessarily more meaningful or desirable than a smaller effect measured years after the end of the intervention: the latter has proved more sustainable, and a sustained effect is more difficult to achieve.
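
To make the cost point above concrete, here is a minimal Python sketch using entirely hypothetical programme names, effects and costs. It divides an improvement index score by a cost per participant to compare ‘improvement per pound’ – a deliberately crude metric, since a real commissioning decision must weigh all of the factors in this list.

    # Hypothetical illustration only: names, scores and costs are invented.
    programmes = {
        "Programme A": {"improvement_index": 8, "cost_per_participant": 150},
        "Programme B": {"improvement_index": 15, "cost_per_participant": 900},
    }

    for name, p in programmes.items():
        per_pound = p["improvement_index"] / p["cost_per_participant"]
        print(f"{name}: {per_pound:.3f} index points per pound per participant")

    # Programme B has the larger effect, but Programme A delivers roughly
    # three times as many index points per pound (0.053 vs 0.017).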

For all of these reasons, we recommend that our users don’t start with the size of impact when considering interventions. It is important to define which outcomes you wish to improve, and for whom, and also to establish what is feasible to implement. Once you’ve narrowed down the set of interventions on this basis, then it will be important to see which have a track record of producing larger effects on the outcomes that you care about.

What is a good or bad improvement index score?

There is no universal answer: without understanding your specific context, there is no such thing as a good or a bad score. It depends on what you are trying to achieve, for whom, and at what cost – and on what other similar programmes have a record of achieving.

Why is information on impact missing for some programmes?

There are two reasons why information on impact may be missing for certain programmes or outcomes:

  • Unavailable data: Often, the original evaluation studies do not report sufficient information to calculate impact. In these cases, information on effects measured in their original units and/or the improvement index may be missing from the Guidebook. We will continue to endeavour to collect this information from programme providers and evaluators.
  • Strength of evidence: We have decided to publish information on the size of improvements only for programmes that receive a strength of evidence rating of at least level 3. This is because we can be confident in these cases that there was a causal relationship between participation in the programme and improvements in outcomes, and that the evidence provides unbiased and trustworthy estimates of improvement in outcomes. To confirm whether this is the reason for missing information, you can click on the study and check its rating. If the study rating is level 2, then the information is missing because we are not confident that the study provides an unbiased and trustworthy estimate of improvement. If the study rating is level 3, then the information is missing because it is unavailable in the original evaluation study.

Why doesn’t the Guidebook simply report effect sizes?

We have taken the decision not to directly report effect sizes – such as Cohen’s d, or odds ratios – as these are technical statistics which may not be familiar to many Guidebook users. Instead, we have converted effect sizes into a single measure of impact used across all programmes and outcomes: the improvement index. We think this has a number of advantages over directly reporting an effect size. In particular, improvements expressed as a change in percentile rankings are more intuitive to most readers than those expressed in units of standard deviation (or other similar units); and because the improvement index is limited to values between 0 and 50, it helps to contextualise effects and is more user-friendly than effect sizes, which can take a greater variety of values.

However, it is possible to get a sense of how an effect size translates into an improvement index score. For example, a Cohen’s d of 0.5 is roughly equal to an improvement index score of 19; a Cohen’s d of 1 roughly equals 34; 2 roughly equals 48; and 3 roughly equals 50.
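
The quoted values are consistent with converting Cohen’s d into a change in percentile rank via the standard normal distribution – that is, 100 × Φ(d) − 50, where Φ is the normal cumulative distribution function. The short Python sketch below reproduces the figures above under that assumption; note that the formula is our inference from the quoted examples, not an official EIF definition.

    from statistics import NormalDist

    def improvement_index(cohens_d: float) -> float:
        # Assumed conversion: change in percentile rank of the average
        # treated participant, i.e. 100 * Phi(d) - 50. Inferred from the
        # examples quoted above, not an official EIF formula.
        return 100 * NormalDist().cdf(cohens_d) - 50

    for d in (0.5, 1.0, 2.0, 3.0):
        print(f"d = {d}: improvement index = {improvement_index(d):.0f}")

    # Prints 19, 34, 48 and 50 respectively, matching the rough
    # equivalences quoted above.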

Are all Guidebook programmes available in the UK?

No. The Guidebook includes programmes that, so far as we are aware, have not been implemented in the UK. It also includes programmes that are available in the UK but whose best evidence is based on evaluations conducted in other countries, often the US or Australia. However, all of the interventions included in the Guidebook may feasibly be implemented in the UK and have evidence that is relevant in the UK context.

To limit your search to programmes that have been implemented in the UK, tick the ‘Show only…’ option in the filter options.

I can’t find the programme I’m looking for – does that mean it doesn’t work?

The Guidebook is not an exhaustive list of all programmes available in the UK. If you can’t find a particular programme in the Guidebook:

  • The programme may have been assessed by EIF and found not to meet the threshold for a level 2 rating. These programmes are listed on: Other programmes
  • The programme may not have been assessed by EIF. For information about programmes that are currently going through the assessment process: Work in progress

Other programmes may be included in future assessment reviews. For more information, see: Getting your programme assessed

Why is your cost rating nothing like the service price that I’m being charged?

Our cost rating is not the same as the market price of an intervention, which will be negotiated and agreed commercially between providers and commissioners. The cost rating is an assessment by EIF of the relative input costs of early intervention programmes, such as practitioners’ and supervisors’ time, qualifications or training requirements. This assessment is based on information that programme providers have supplied about the components and requirements of their programme, and not on any price list or advertisement.

What is a Spotlight set?

Spotlight sets are groups of programmes that share a common approach, theme or outcome, and which we think are interesting and relevant to current debates or areas of practice. Spotlight sets are created and curated by EIF, and only include programmes that have already been assessed and included in the Guidebook.

Do you only assess programmes?

The Guidebook provides information only about programmes – manualised, repeatable activities and models of early intervention delivery. We know that early intervention works in other ways too – from a teacher, police officer or other frontline professional spotting the first signs of risk in a child or family, through to wholesale systemic changes across the many agencies and services at work in a local area. Information and guidance about these other forms of early intervention can be found on our website.

Do you evaluate programmes?

EIF does not conduct evaluations. Evaluations may be carried out by a range of organisations, including universities, professional research companies and major national auditors. Where possible, the Guidebook provides links to key evaluation studies, which include details about who conducted the evaluations and their methods.

How do you select programmes to be included in the Guidebook?

EIF conducts programme assessments as part of our ‘What works’ reviews of evidence related to early intervention in specific issues, age groups or other populations. From time to time, we invite providers to submit programmes for assessment, via an open call. We do not conduct assessments on a one-off or on-request basis, outside of these review periods.

For more information, see: Getting your programme assessed

How do I get my programme included in the Guidebook?

For new, upcoming evidence reviews, EIF issues a call for expressions of interest for programmes that fit within the scope of the review and wish to be considered for assessment. These opportunities are announced via our website, Twitter and newsletter.

We do not conduct assessments on a one-off or on-request basis, outside of our ‘What works’ reviews or open calls to providers.

For more information, see: Getting your programme assessed

How often will you be adding programmes to the Guidebook?

Programmes that are assessed as part of current or future reviews, and which are found to have at least preliminary evidence of achieving positive outcomes for children, will be added to the Guidebook at the conclusion of each review, as and when individual programme reports are complete.

For more information on current assessment reviews, see: Work in progress

Will EIF reassess my programme, if a new study has been published, and could that change the evidence rating?

Periodically, EIF updates existing Guidebook entries to ensure that the information is current and that our evidence ratings incorporate all newly published and relevant evidence.

Typically, a programme will be selected for this kind of evidence update (which we sometimes refer to as ‘maintenance’) on the basis of EIF having identified impact studies which have been published since EIF’s last review of the programme. As with other assessment activity, EIF does not conduct evidence updates on a one-off or on-request basis, and this maintenance work will take place during planned periods of time dedicated to updating the EIF Guidebook, which we currently anticipate will occur every 12–24 months.

During an evidence update, the standard programme assessment process is run for the new study only, and a decision is taken as to whether the new study affects the outcome of the initial programme assessment. It is possible that a programme’s evidence rating will change on the basis of this new evidence: it may go up or down, depending on the quality and findings of the new study. We may also add to the list of child outcomes which a programme has been shown to achieve.

You may notify EIF of a newly published study and express your interest in an evidence update at [email protected]. However, expressing interest is no guarantee that EIF will be able to update your programme’s assessment in the short term, nor that EIF will prioritise updating your programme over other programmes when evidence update activity takes place.

Do you assist local authorities and commissioners to implement or evaluate programmes?

EIF is not generally able to provide bespoke guidance to local areas about using evidence, implementing programmes or evaluating the effect of decisions and changes. We do work with local areas that commission EIF to help them understand the research and evidence assessments that we have previously conducted and published.

We also work more closely with members of our Places Network to support early intervention in their areas. 

In addition to the guidance resources available on our website, we also run a series of seminars and masterclasses around the country: keep an eye on our website, Twitter and newsletter for more information.

Can I use the EIF evidence standards to assess a programme?

We arrive at our strength of evidence ratings, measured against our evidence standards, through a detailed consideration of all significant evidence against 33 criteria, covering design, sample, measurement, analysis and impact. These evidence assessment criteria are intended to be applied by individuals who have been extensively trained in EIF programme assessment procedures. This process and our ratings are then subjected to rigorous quality assurance with independent experts. In our view, it is not possible to replicate this process externally.

The Early Intervention Foundation does not recognise, endorse or accept any liability arising from attempts to replicate our assessment processes or apply our standards by external organisations.

Submit a new FAQ

If you do not find the answer you are looking for, please submit your question by email to [email protected]. If it is asked frequently, we will add it to this page.

Published April 2024