YOUTH EMPLOYMENT EVALUATION TOOLKIT

Your leverage to better youth employment projects

Authors: Monika Bartosiewicz-Niziołek, Sławomir Nałęcz, Zofia Penza-Gabler, Ewa Pintera

 


 

INTRODUCTION

The purpose of this toolkit is to present practical tools supporting the evaluation of projects aimed at increasing the employment of young people (aged 15-24), including those who face difficulties in the transition between school education and work (NEETs).

The main recipients of this toolkit are NGOs and other entities which want to analyse their projects in the abovementioned area. Such evaluation may be aimed at:

  • Measurement of the project’s effectiveness in achieving project goals and results (outputs, outcomes),
  • Assessment of the usefulness of the project for its beneficiaries / participants and the sustainability of the achieved results,
  • Better adaptation of the project to the needs of its beneficiaries and the labour market,
  • Examination of the project impact on a wider group of people who did not participate directly in it (e.g. families, friends of the project beneficiaries),
  • Assessment of project efficiency in terms of resources engaged in the project and its effects.

This toolbox is supplementary material to the course “Towards better youth employment projects – learning course on evaluation”, available HERE. While the course offers knowledge and training in evaluation adjusted to your needs (basic or advanced level), the toolbox provides universal knowledge combined with practical instructions, tools and examples designed to develop evaluation skills and to support you in applying the knowledge acquired during the distance course. This is achieved, among other things, by question sets, tables and tool templates facilitating the design and planning of evaluation, gathering the necessary information, and then formulating conclusions and recommendations aimed at improving the projects carried out by your organisation.

The toolbox has been developed by the Jerzy Regulski Foundation in Support of Local Democracy in Poland, in cooperation with the Research Institute for Innovative and Preventive Job Design (FIAP e.V., Germany), Channel Crossings (Czech Republic), and PEDAL Consulting (Slovakia), within the framework of the Youth Impact project, financed by the EEA Financial Mechanism and the Norwegian Financial Mechanism. The project seeks to provide tools and services that improve the ability and capacity of implementers of Youth Employment and Entrepreneurship Support Actions to efficiently evaluate the impact of their activities. The project is carried out in the years 2019-2022.

The activities in this project are aimed at developing the evaluation competences of entities that support employment and entrepreneurship of young people.

GLOSSARY OF PROJECT TERMS

 


 

Activity (of the evaluated project) – actions aimed at a specific target group, which contribute to the achievement of the planned outputs and outcomes, and then to the achievement of the project objectives.

Example: Training 20 young mothers (who were unemployed at the start of the project and had to be supported by social welfare benefits) in dyeing fabrics in town X.

 

Generalisation – referring the findings obtained in the study of the sample to the entire population (i.e. also to units that have not participated in this research). Based on the results of the sample, we conclude – with a given level of probability – that the findings (characteristics / opinions) for the entire population are similar.

 

Impact – the effects of activities, outputs and outcomes of the project, contributing in the long term (alongside possible other projects / interventions and factors) to changes affecting a wider community than the direct recipients of this project.

Example: Improving the living conditions of children raised by women who found a job thanks to the professional competences acquired in the project.

 

Impact indicator – informs about the delayed effects of the project that go beyond its immediate recipients. These effects usually cover the social environment / community of the project beneficiaries and may result from the accumulation of various factors (including non-project activities).

Example: The percentage of project beneficiaries whose household did not have to be supported by social welfare benefits 18 months after the end of the project.

 

Logic matrix of the project – a table used to determine the methodology of measuring selected project elements such as output, outcome or impact. The matrix defines the indicators by which a given element will be measured, the measurement method, and assumptions / conditions of achieving the project’s effects (see chapter 2.1).

 

Logic model of change – a comprehensive tool for project planning and subsequent management of its implementation. It depicts the logic of intervention linking the individual elements of the project with cause-and-effect ties (see chapter 2.1).

 

Monitoring – ongoing collection, analysis and documentation of information during the project implementation, concerning the progress of implementation in relation to the planned schedule of activities and the budget.

 

NEET (not in employment, education or training) – the name of a group, mainly young people, who remain outside the spheres of employment and education, i.e. people who, for various reasons (discouragement, life crisis, disability, parental or family responsibilities), do not study, work or prepare to practise a profession.

Objective (general) – expected state or effects of activities conducted within a project, planned to be achieved within a specified time.

Example: Increasing employment by 2022 among young mothers (who were unemployed in 2020 and had to be supported by social welfare benefits) in town X.

 

On the way to achieving the general objective you can have specific objectives (purposes). A specific objective is a planned state that will be achieved as a result of the implementation of certain activities. It should be consistent with the general objective and contribute to its achievement.

Example: Increasing by the end of 2021 the professional competences of young mothers (who were unemployed in 2020 and had to be supported by social welfare benefits) in town X to the level expected by employers in this town.

 

Outcome – direct and immediate effects / changes relating to the beneficiaries that result from the implementation of specific project activities.

Example: The growth of project beneficiaries’ competences related to dyeing fabrics.

 

Outcome indicator – informs about the degree of the achieved changes related to the project beneficiaries as a result of their participation in project activities and the use of outputs produced at a particular stage of project implementation.

Example: The number of beneficiaries who have acquired the professional skills of dyeing fabrics.

 

Output – a short-term effect of a particular activity in a material form (of a countable nature), e.g. a thing, an object, an event (of service delivery). These may be goods or services transferred to the project recipients, which are to contribute to the achievement of the planned outcomes.

Example: Training materials, certificates confirming the acquisition of professional qualifications in the field of dyeing fabrics by project beneficiaries.

 

Output indicator – informs about the implementation of activities that resulted in measurable products.

Examples: The number of issued certificates confirming the acquisition of specific professional competences, the number of people who have achieved a certain level of these competences, an increase in the level of social competences according to the selected test, the number of cover letters and CVs prepared by the training participants, the number of textbooks prepared.

 

Population – the group of individuals (e.g. specific people, organisations, companies, schools, institutions) that are the object of the researcher’s interest / subject of the research.

 

Project (intervention) – a set of activities aimed at producing the intended outputs and outcomes, which, when used by the project’s target group, should bring the planned objectives and impact.

 

Representative sample – a sample that well reflects / represents the studied population and makes it possible to accurately estimate its features through generalisation.

 

Sample selection – selecting from the population cases that will form the sample (smaller part of the population). It is conducted in a specific way (random or non-random) based on the sampling frame, i.e. a compilation (list) of all units forming the population from which the sample is drawn.

I. THE BENEFITS OF EVALUATION

 


 

There are many ways to understand evaluation. According to the approach applied in the Youth Impact project, the main goal of evaluation is to assess the value of the project’s effects in order to improve them. This assessment is based on evidence about the change caused by the project, collected using social science methodology.

Our approach largely refers to impact evaluation in its broad sense (later we use the term impact-focused evaluation to underline that we want to embrace not only experimental and quasi-experimental designs). It is an evidence-based reflection on the real (net) effects of a project. It allows you to understand the factors influencing the ongoing and delayed changes and focus on the sustainability of the achieved outcomes as well as the impact of the project that goes beyond its direct participants. This approach to evaluation allows for the formulation of recommendations supporting project management, which contribute to the effective and efficient implementation of its objectives, as well as the organisation’s mission.

Our approach is also a participatory one, taking special care about the needs of various stakeholders and engaging them in planning and other stages of the evaluation.

Such an approach to evaluation makes it possible to determine the value of a particular project and to understand the reasons for its successes and failures. It is also a good management tool for organisations focused on social mission and other “learning” institutions.

 

BENEFITS OF AN EVALUATION DONE WELL:

  • It allows you to predict difficulties before the start of your project (ex-ante evaluation) or notice problems at every stage of its implementation (ongoing or mid-term evaluation), and also allows you to plan actions minimising identified risks.
  • It gives advice on how to improve an ongoing or completed project to better meet the needs of its recipients, achieve more useful and durable outcomes, have a wider impact and fulfil the planned objectives using fewer resources.
  • It allows you to assess to what extent the expected effects of the project were really caused by the project activities*. Moreover, it makes it easier to decide whether a particular project is worth repeating, disseminating, or could be adapted to a different target group.
  • It increases the motivation of employees – involving the project team in evaluation (especially at the design stage and in discussing the evaluation findings) increases the sense of agency and emphasises the relationship between the work performed and the planned goals, the organisation’s mission and employees’ own values.
  • It increases the competences of employees – from issues related to project management to knowledge of the mechanisms of the changes caused by this project.
  • It increases the level of confidence and cooperation with project partners (also in future projects), thanks to taking into account the perspective of external stakeholders.
  • It makes it possible to demonstrate the achieved results and improves cooperation with grant-giving institutions and sponsors, encouraging them to finance subsequent projects.
Example: When applying for a grant or justifying the need for a project, you can quote the evaluation findings concerning a previous, similar project. Providing reliable data may help you convince funders that your project is worth funding.
  • It serves to promote your organisation.
Example: Evaluation findings, including case studies, can be used on social media to promote the organisation’s activities. These could be stories of young people who, thanks to your support, acquired new competences and then found a satisfying job or successfully run their own business.

 

Overall, evaluation has many benefits. Introducing it to everyday work can be a very useful support for managing an organisation – strengthening credibility and improving its image, educating and motivating staff, raising funds by showing evidence of project impact, and above all, the effective fulfilment of the assigned mission.


* This possibility is provided by impact evaluation, which is described in chapter 2.4.

II. PREPARING FOR THE EVALUATION

 

“You can’t do ‘good’ evaluation if you have a poorly planned program.” (Beverly Anderson Parsons, 1999)

 


 

2.1. What do you need to know about the project to plan its evaluation?

In the toolkit, we concentrate on impact-focused evaluation. We present practical ways of conducting such evaluation, concerned primarily with the effects of project activities in terms of the intended change. The subjects of our interest are the effects of project activities (outputs, outcomes, impact) and their compliance with the project theory of change (or project theory). The project theory defines the concept of the intended change and the plan of the project, including its objectives, activities, expected outputs, outcomes and impact, as well as the way in which they will be measured, and what resources are needed to achieve these effects.

The basic element of the project theory is the logic model of change, which compiles information on what the organisation running the project needs to accumulate (inputs / resources), the work it needs to do (project activities), and the effects it intends to achieve. The logic model of change for a given project is developed according to the following scheme.

 

Diagram 1: Basic logic model of change

The methods of measuring the project outcomes and the related assumptions are sometimes specified in a separate table called the project logic matrix. The logic model and logic matrix should be part of the project documentation.

In practice, it happens that the logic matrix or even the logic model of change has not been developed, or is very selective. The lack of assumptions indicating how you define the success of the project makes it impossible to evaluate it, and thus to verify whether the planned change took place and whether it occurred as a result of the project activities.

 

What to do if there is no logic model of change in the project documentation?

In such a situation, it is necessary to recreate the logic of change behind the project, e.g. based on interviews with the management and project staff, as well as already existing documents such as strategy / project implementation plan, justification for its implementation, application for co-financing, partnership agreement, etc. The following table may help you to reconstruct the logic of the project.

Tool 1: (Re)Construction Of Project Logic

The above tabulation of the project logic allows you to reflect on the ways of demonstrating the level of achieved effects (outputs, outcomes and impact). This goal is served by defining the indicators by which you will measure the progress of the project. An indicator is an observable attribute (feature) that enables the phenomenon to be measured. Each indicator has a measure (quantitative or qualitative) which informs about the degree / intensity of the occurrence of this phenomenon. In order to measure the change that has occurred as a result of the project implementation, you should determine the values (level) of a given indicator at the beginning and at the end of the project, i.e. the baseline value and the final value. It is also good to know the minimum required final value of the indicator, if such a value was defined at the beginning of the project. More information on indicators can be found in the online course (Module 3).
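To make this indicator arithmetic concrete, here is a minimal Python sketch with purely hypothetical figures (the function name and the example indicator are illustrative, not part of the toolkit):

def indicator_change(baseline: float, final: float, target: float) -> dict:
    """Summarise the change in an indicator and whether the target was met."""
    change = final - baseline                      # observed change
    planned = target - baseline                    # change planned at the outset
    share = change / planned if planned else None  # share of the planned change achieved
    return {"change": change, "target_met": final >= target, "share_of_plan": share}

# Hypothetical example: share of beneficiaries certified in fabric dyeing
print(indicator_change(baseline=0.0, final=0.85, target=0.80))
# {'change': 0.85, 'target_met': True, 'share_of_plan': 1.0625}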

Tool 2: Table Of Indicators Of Project Effects

 

More examples of targeting indicators in youth employment projects can be found in the Guide on Measuring Decent Jobs for Youth: Monitoring, evaluation and learning in labour market programmes, Note 3: Establishing a Monitoring System, pp. 6-9.

 

2.2. When to start developing an evaluation concept and plan?

It is worth developing the concept of evaluation before starting the project or even during its planning, because it allows you:

  • To initiate an in-depth reflection on the logic and coherence of project activities, their translation into project objectives, as well as factors facilitating and hindering their achievement;
  • To plan in advance the collection of information (data) that enables the evaluation questions to be answered (e.g. without a baseline measurement of the knowledge and skills of the training recipients, taken before this activity, it will be impossible to reliably demonstrate the change obtained, i.e. the increase in competences that should result from the training);
  • To secure appropriate funds for conducting the evaluation and to enter into the project schedule those activities that will help to collect the relevant data, analyse them and report them;
  • To plan the collection of information in the most efficient way (the cheapest, fastest, easiest) during or after the implementation of project activities.

It is worth remembering that evaluation is a multi-stage process that must be designed and planned well, and then implemented step by step.

Stages of the evaluation process

  1. Diagnosis of evaluation needs
  2. Conceptualisation and planning
  3. Information collection – research implementation
  4. Data analysis and inference
  5. Reporting
  6. Using evaluation results – implementation of recommendations

 

2.3. How to diagnose the evaluation needs of the project stakeholders

Conceptualisation and planning of evaluation should not start without identifying who needs the information, conclusions and recommendations from the evaluation, and for what purpose. It is good to begin the diagnosis of evaluation needs with the stakeholders of the project to be evaluated.

Project stakeholders are people / entities (institutions, organisations) involved in various ways in the implementation of a particular project, e.g. its beneficiaries, project team, staff implementing project activities (e.g. trainers, psychologists, career advisors), project partners (cooperating organisations or institutions), sponsors / funders, etc.

The participation of project stakeholders in the evaluation is very important as they are potential allies of the evaluator. They can support the entire evaluation process, including the implementation of recommendations that improve the project. Thanks to the involvement of various stakeholders in the evaluation activities, it is possible not only to improve communication and cooperation with partners, beneficiaries and project staff, but also to convince funders to invest in the project currently being implemented or its next edition. If the stakeholders are interested in the project evaluation then conducting the evaluation in a participatory manner – involving the stakeholders in the entire evaluation process, starting with the diagnosis of evaluation needs – should be much easier.

The best way to diagnose evaluation needs while ensuring a high level of stakeholder participation is to conduct a workshop / group interview with representatives of all entities (organisations, institutions) and groups of people involved in a particular project.

If the recipients of the project are young people (e.g. NEETs) or other groups who may have concerns about expressing their opinions in public, you should first hold a separate meeting with these beneficiaries and then invite their representatives to participate in a workshop with other stakeholders. This type of workshop with young people or other project recipients with a relatively weak social position should be based on values strengthening the subjectivity of the project beneficiaries (see the example from Participatory evaluation with young people, pp. 7-8).

 

Example Of Workshop With Stakeholders

 

The information gathered during the workshop with the participation of stakeholders should be used to prepare the evaluation concept and plan (see chapter 2.4). Therefore, it is worth summarising the key findings of the diagnosis of stakeholder needs in the two tables below.

Tool 3: Summary Of The Diagnosis Of The Project Stakeholders’ Evaluation Needs

 

Information on the expectations of individual stakeholders regarding the form of presentation and ways of using evaluation results will be useful in the planning phase of their dissemination (see Chapter 6.3.)

 

2.4. How to design and plan the evaluation

The information collected during the workshop with the stakeholders will be used to prepare the concept and plan of the evaluation. The concept of evaluation, i.e. an idea on how to carry it out, can be prepared in 3 steps.

Diagram 2: Evaluation Concept

The first and second steps include the following:

  • Subject of evaluation – what do you want to evaluate (e.g. which project or programme),
  • Scope of evaluation – what part of the project will be included in the evaluation, e.g. the entire project or selected elements – particular activities, effects,
  • Purpose(s) of the evaluation – what are you conducting it for, what will you use the evaluation findings for,
  • Type of evaluation – at what stage of the project implementation will you conduct the evaluation: before the commencement of project activities (ex-ante evaluation), during their implementation (mid-term or on-going evaluation), or after completing the project (ex-post evaluation),
  • Evaluation criteria – features indicating in what respect the project is being evaluated (e.g. relevance, effectiveness, efficiency, utility, impact, sustainability),
  • Evaluation questions – generally formulated questions regarding issues that are important in terms of assessing the value and quality of the evaluated project,
  • Evaluator – who will perform the evaluation, e.g. a team implementing the project (self-evaluation), an evaluation specialist employed by the organisation implementing the project (internal evaluation) or an external entity contracted by it (external evaluation)*.

*The strengths and weaknesses of different types of evaluation selected due to the location of the evaluator are discussed in the online course (Module 2).

You can present this information in a table showing your evaluation concept. An example of such a table and its application to a specific project are presented below.

Tool 4: Evaluation Concept Table

 

The third step of developing an evaluation concept requires knowledge of the various research methods and tools presented in Chapter III – Data Collection. For this reason, the part of evaluation planning related to the methodology of collecting data is presented in Section 3.3 (an example of this step of evaluation design is presented in Tool 6).

Information on the availability of the necessary data, as well as the possibility of obtaining support from respective stakeholders, will be used when planning the evaluation process and estimating the resources necessary to carry it out. The evaluation plan should include such elements as: its schedule (with respective stages), resources necessary to conduct the evaluation (human, time, financial, information), as well as the planned form(s) of the evaluation report.

You can present this information in an evaluation planning table. An example of such a table together with how it is applied to a specific project is presented below.

Tool 5: Evaluation Planning Table

 

As you can see in the table above, information is one of the key assets needed to conduct an evaluation, and there are plenty of data sources which can be useful for this purpose. In the context of youth employment projects, one of the most important areas of intended progress is general and vocational competences. The default source of information on the initial and final level of such skills among project beneficiaries should be the trainers of these competences. Therefore, you should cooperate with the trainers on gathering and using the data concerning the competence levels before and after the training. The measurement should use multilateral perspectives on the skills of trainees (the trainer’s perspective, the trainee’s self-assessment and a psychometric test) and be coherent and relevant to the content of the training. You can find an example of such a tool set in the attachments of this Toolkit. It concerns the 8 key competences “needed for personal fulfilment and development, active citizenship, social inclusion and employment” mentioned in Recommendation 2006/962/EC of the European Parliament and of the Council on key competences for lifelong learning*.


*The Recommendation 2006/962/EC of the European Parliament and of the Council of 18 December 2006 on key competences for lifelong learning refers to the following skills: 1) communication in the mother tongue, 2) communication in foreign languages, 3) mathematical competence and basic competences in science and technology, 4) digital competence, 5) learning to learn, 6) social and civic competences, 7) sense of initiative and entrepreneurship, 8) cultural awareness and expression.

 

2.5. How to design impact evaluation

The key distinguishing feature of impact evaluation is the fact that the assessment of project effects takes into account not only the impact of activities carried out in the project and the outputs produced, but also the influence of external (non-project) factors. To evaluate the real (net) impact of the project it is necessary to plan and conduct the evaluation in a way that makes it possible to determine whether the implementation of the project caused the intended change, and to what extent that change was influenced by non-project factors.

Conducting an impact evaluation allows you to collect various types of information that are very useful for project development:

1) data on the actual impact of the project on achieving the expected change is the key information for deciding whether to repeat, duplicate, improve or discontinue the project because:

a) non-project factors could have contributed to the change intended in the project, so that the (net) impact of the evaluated project may be lower than indicated by the difference between the final value of the outcome indicator and its baseline value (measurement at the beginning of the project).

b) external factors could counteract the change expected in the project, so that the (net) impact of the evaluated project may be greater than the difference between the final and the baseline value of the outcome indicator,

2) information on the diversity and mechanisms of the impact of individual elements of the project on achieving the expected change is very helpful in improving the project,

3) identifying major external factors and the mechanisms of their impact on the intended change can be used to modify project activities so that they better concur with the processes supporting the change and better cope with opposing factors.

Depending on which of these issues is a priority in the evaluation of a particular project, and also depending on the feasibility of obtaining relevant data, different models (designs) of impact evaluation are used, along with data collection methods adapted to them.

Table 1. Different design approaches for impact evaluation.

Source: Emily Woodhouse, Emiel de Lange, Eleanor J Milner-Gulland. Evaluating the impacts of conservation interventions on human wellbeing: Guidance for practitioners.

Experimental and quasi-experimental evaluation designs are used to determine what portion of the intended change in a project can be attributed to the project activities (net impact). The measure of the impact of project activities is the difference between the measurement of the indicator at the start and at the end of the project in the group of its recipients (the change in the test group, which participates in project activities), after adjusting it for the impact of non-project factors. The impact of non-project factors is estimated by measuring the change of the outcome indicator in a group of people who did not participate in the project and are as similar as possible to the project recipients.
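One common way to make this adjustment is a difference-in-differences calculation, sketched below in Python; the figures are purely illustrative and not taken from any project described in this toolkit.

def net_impact(test_before, test_after, control_before, control_after):
    """Estimate the net project impact by removing the change caused by external factors."""
    gross_change = test_after - test_before            # project + external factors
    external_change = control_after - control_before   # external factors only
    return gross_change - external_change              # portion attributable to the project

# Illustrative example: employment rate (%) in both groups, before and after the project
print(net_impact(test_before=10, test_after=45, control_before=10, control_after=20))
# 25 -> 25 percentage points attributable to the project, not the gross change of 35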

  • In experimental designs (also called RCTs – Randomised Controlled Trials), people are randomly assigned to the group of beneficiaries of the project (test group) or to the group not covered by the project (control group). Random selection to both groups helps to ensure that the two groups do not differ from each other*. Thus, changes in the measured indicators in the control group can be attributed only to external factors, and in the test group – to the combined influence of external factors and the project’s activities.
  • In quasi-experimental designs, there is no random selection of groups. For a test group which took part in the evaluated project, a control group is selected using non-random methods, ensuring that it is as similar to the test group as possible; it performs the same function as the control group in experimental designs.

*When the test group or control group is small, stratified random selection should be used (instead of simple random selection) to make sure that the two groups have a similar structure according to the features which can affect the intended outcome of the project (e.g. the structure of educational attainment should be similar in the control and test groups, otherwise the more educated group may make better progress in acquiring the skills which are to be developed in the project under evaluation).
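As a rough illustration of such stratified assignment, here is a minimal Python sketch; the candidate data and the stratifying feature (educational attainment) are hypothetical.

import random
from collections import defaultdict

def stratified_assignment(candidates, stratum_key, seed=42):
    """Randomly split candidates into test and control groups within each stratum."""
    rng = random.Random(seed)  # a fixed seed makes the assignment reproducible
    strata = defaultdict(list)
    for person in candidates:
        strata[stratum_key(person)].append(person)
    test, control = [], []
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        test.extend(members[:half])      # these join the project activities now
        control.extend(members[half:])   # the rest form the control group
    return test, control

# Hypothetical candidates stratified by educational attainment
candidates = [{"id": i, "education": lvl}
              for i, lvl in enumerate(["basic", "secondary", "tertiary"] * 8)]
test_group, control_group = stratified_assignment(candidates, lambda p: p["education"])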

 

In order to apply experimental or quasi-experimental designs, evaluation activities must be coordinated with the evaluated project activities and therefore need to be planned before they are implemented. For example, when you expect a surplus of candidates for project beneficiaries, or when the project will be implemented in several editions and you can organise joint recruitment, you can do the group assignment using random sampling. This way you can obtain a randomly selected test group (to be immediately involved in the project activities) and a control group (the people not selected for the current edition of the project). The baseline measurement should be conducted just after the groups are selected, and the final measurement in both groups after the project has been completed.

If the beneficiaries of your project are chosen by an external institution (e.g. a Labour Office), it is also worth checking what selection procedure is used there. If this procedure gives the opportunity to select a control or comparison group in which the project outcome indicator can be measured, verify this possibility and plan the measurement in this group at more or less the same time as it is carried out in the evaluated project.

An important aspect of impact evaluation is the control of what is known as the spillover effect, i.e. the spread of the impact of project activities outside the test group, in particular to people in the control or comparison group. The risk of the spillover effect is greater the more contact the recipients of the evaluated project have with people from the control or comparison group. Another aspect influencing the scope of the spillover effect is the level of demand for the solutions provided by the evaluated project.

Planning and interpreting an impact-focused evaluation requires the use of the project theory to examine the consistency of the evaluation findings with the project logic (of change) and to verify the impact of alternative factors. Examining the consistency of facts with the project logic focuses on identifying evidence and data confirming the cause-and-effect relationships. In this approach, it is crucial to plan as early as possible what kind of data should be collected during the project in order to verify:

– the cause-and-effect relationship between activities, outputs, intermediate and final effects (outcomes, impacts) that make up the project logic of change,

– achievement of successive stages in the cause-effect chain of intermediate effects leading to the outcomes measured by the final indicator (the milestones).

The assessment of the impact of alternative factors is based on similar planning and verification, but concerns factors of change other than the project activities that could have produced the results expected of the project.

If the evaluated project is a part of a larger programme carried out in different locations or by different organisations, this may provide an opportunity to obtain comparative data that will be used in the impact evaluation based on case study analyses. To use the case-based evaluation design, you should collect information not only about the outcome indicator that you measure in the evaluated project, but also about all important factors that may affect the value of this indicator. The set of such factors should be determined on the basis of the project theory, taking into account the different elements which may influence the intended change.

It is worth remembering that in this model it is possible to use information about projects implemented in the past. Regardless of where the analysed cases come from, it is important to obtain a predetermined set of information from them. The final analysis is based on a table that summarises the data from all analysed cases concerning the occurrence of the factors that may affect the intended change of the outcome indicator and, of course, the outcome indicator itself.

Table for summing up the findings from case study analysis – practical example

In the table above you can see summarised information on 4 cases where the outcome (having a job or being in education or training 1 year after the project completion) was monitored against three factors. Two of them were different project stimuli (extensive training in social competences and vocational training), while the third one was external (supported employment for six months right after the end of the project). The analysis showed that it was the extensive training in social competences which caused the intended outcome.
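The same cross-case comparison can be expressed programmatically. The Python sketch below uses hypothetical data arranged to match the narrative above; with real cases you would fill the table from your own project records.

# Each record notes which factors were present in a case and whether the outcome occurred
cases = [
    {"social_training": True,  "vocational_training": True,  "supported_employment": False, "outcome": True},
    {"social_training": True,  "vocational_training": False, "supported_employment": True,  "outcome": True},
    {"social_training": False, "vocational_training": True,  "supported_employment": True,  "outcome": False},
    {"social_training": False, "vocational_training": False, "supported_employment": False, "outcome": False},
]

# A factor is a plausible cause if its presence / absence matches the outcome in every case
for factor in ("social_training", "vocational_training", "supported_employment"):
    consistent = all(case[factor] == case["outcome"] for case in cases)
    print(factor, "matches the outcome in every case:", consistent)
# Only social_training co-varies with the outcome across all four cases.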

 

Participatory design is an underrated but popular model of impact-focused evaluation. It does not guarantee as much reliability and precision as experimental or quasi-experimental designs, nor is it as convincing as a strict case study analysis, but it can still be useful, especially in small projects. In participatory design, you refer to the perceptions of the participants in the evaluated project and, on the basis of the data obtained from them, you evaluate the impact of the project. The methodology of collecting data is therefore of great importance, because project beneficiaries tend to adjust their opinions to what they think the researcher might want to hear, especially if data collection is conducted by someone from the project staff.

  • One of the participatory evaluation designs is called Reflexive counterfactuals. Its advantage is that it can be used after the end of the project. On the other hand, it is exposed to the previously described risks, such as influence from the researcher. As part of reflexive counterfactuals, the beneficiaries are asked to compare their current situation with their situation before they participated in the project and to describe what has changed for better and for worse. Then, they rate the relative importance of particular benefits and costs to select the ones considered most important. Using different research techniques, it is also possible to ask about the causes of particular changes and find out which of them were associated with the project.
  • Another technique for participatory impact analysis is MSC (Most Significant Change). It is based on the generation and in-depth analysis of the most significant stories of change in the lives of project beneficiaries, observed and noted by various project stakeholders (including the beneficiaries themselves). The properties of this research technique allow it to be used after the end of the project.

Finally, the possibility of conducting an impact evaluation based on statistical methods should also be mentioned. The basis here is the analysis of the correlation (coexistence) of the outcome indicator and the activities undertaken in the evaluated project*. Such analyses are performed on large data sets, which makes this type of evaluation of little use for organisations running projects for a relatively small group of recipients**.
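For orientation, the sketch below shows what such a correlational analysis can look like in Python on simulated data; statsmodels is one assumed choice of package, and all variable names and figures are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                     # statistical designs need large samples
education = rng.integers(1, 4, size=n)      # confounding factor: education level 1-3
participated = rng.integers(0, 2, size=n)   # 1 = took part in the evaluated project
outcome = 0.3 * participated + 0.2 * education + rng.normal(0, 0.5, n)

# Regression of the outcome indicator on project participation,
# statistically controlling for the confounding factor
X = sm.add_constant(np.column_stack([participated, education]))
model = sm.OLS(outcome, X).fit()
print(model.params)  # the participation coefficient approximates the simulated 0.3 effect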

More information on impact-focused evaluation can be found in the online course (Module 3).


* In such analyses, the basic method is regression, in which the strength of the relationship between the outcome indicator and the indicators of activities carried out within the evaluated project is examined, with statistical control (exclusion) of the impact of confounding factors.
** The problem that hinders the use of statistical methods of impact evaluation by small and medium-sized organisations is, apart from the scale of the projects, the need to use advanced statistical software and qualified analysts.

III. DATA COLLECTION

 


 

3.1. What are the major types of evaluation research methods?

In order to estimate the value and quality of the project in relation to the chosen criteria and answer the evaluation questions, you need to collect the necessary information correctly. Research methods and tools serve this purpose. A research method is a specific way of collecting information – qualitative or quantitative – with the use of specially developed tools, such as interview scenarios, observation sheets or questionnaires. Let’s look at the differences between these methods and research tools.

Qualitative methods enable the collection of data in an in-depth and flexible manner, but they do not allow you to assess the scale of the studied phenomena, as these methods cover only a small number of people from the groups involved in the project (e.g. selected recipients). Quantitative methods, by contrast, are used for larger groups consisting of at least several dozen people. In the case of more numerous populations (e.g. more than 400-500 people), these methods enable the generalisation of conclusions drawn from the survey of a representative, randomly selected sample of people to the entire population, i.e. the community that is of interest to the researcher, including people who did not participate directly in the particular study. Such generalisation requires that the sample of people subjected to the study is representative, i.e. maximally similar in its socio-demographic characteristics to the population from which it was selected.
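To illustrate what generalising “with a given level of probability” means in practice, here is a minimal Python sketch computing a 95% confidence interval for a proportion measured in a random sample; the figures are hypothetical and the standard normal-approximation formula is assumed.

import math

def proportion_ci(p: float, n: int, z: float = 1.96) -> tuple:
    """95% confidence interval for a sample proportion p with sample size n."""
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

# Hypothetical survey: 60% of a random sample of 400 respondents rate the training as useful
low, high = proportion_ci(p=0.60, n=400)
print(f"the population share lies between {low:.1%} and {high:.1%} (95% confidence)")
# the population share lies between 55.2% and 64.8% (95% confidence)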

Comparison Of Qualitative And Quantitative Methods Of Evaluation Research

 

Both of these types of methods have strengths and weaknesses, therefore you should always use both qualitative and quantitative methods in an evaluation study. This approach is in line with the triangulation principle, aimed at ensuring the high quality of the information collected. Triangulation means using various sources of information, types of collected data and analytical techniques, theories explaining the identified relationships / mechanisms, as well as people conducting the evaluation (whose competences should complement each other). By providing diversity of these elements, triangulation enables:

  • comprehensive knowledge and understanding of the studied object,
  • taking into account various points of view and aspects of the phenomenon studied,
  • supplementing and deepening the collected data,
  • verification of collected information,
  • increasing the objectivity of formulated conclusions.

 

3.2. What methods and tools are typically used in evaluation research?

To facilitate the choice of methods and tools most appropriate for a particular evaluation, below are the characteristics of the most popular of them:

  1. Qualitative methods
    • desk research,
    • individual in-depth interviews (IDI),
    • focus group interviews (FGI),
    • observation,
    • case study.
  2. Quantitative methods (surveys)
    • surveys conducted without the participation of an interviewer – self-administered paper surveys, computer-assisted web interview / online survey (CAWI), central location (simultaneously surveying all respondents),
    • questionnaire interviews conducted with the support of a pollster – paper and pen interview (PAPI), computer-assisted personal interview (CAPI) and computer-assisted telephone interview (CATI).
  3. Active / workshop methods (mixed, i.e. qualitative and quantitative).

 

3.2.1. DESK RESEARCH

In the case of desk research, existing data is used, i.e. data that was generated regardless of the actions taken by the evaluator.

The existing data includes internal data (generated for the needs of the evaluated project) and external data:

  • Internal data is information created during the preparation and implementation of project activities (e.g. project application, training scenarios, attendance lists, contracts, photos, videos and materials about the project posted on the website, posts and responses on social media). In the case of training projects for young people looking for a job, these may also be the results of measuring the competences of the beneficiaries at the beginning and at the end of participation in the training (knowledge tests, skills tests, attitude tests, etc.).
  • External data is information that may relate to the studied phenomenon, processes or target group, but has been collected independently of the evaluated project (e.g. statistics, data repositories, reports, articles, books, videos, and other materials available on the Internet). In the case of the evaluation of employment projects, it is worth using information on similar projects, as well as data available to labour offices, social insurance institutions, national statistical offices, regarding the employment of young people living in a particular town.

Documentation analysis is the basic method of collecting information on a given project, also providing some knowledge about the needs of its recipients and the context of the evaluated project.

 

CONDITIONS OF APPLICATION:

Public institutions provide administrative data in accordance with the principle of transparency in the operation of public institutions and civic participation (open government concept). However, it is important to assess the data reliability and accuracy based on the methodological information provided in the source documentation.

 

ADVANTAGES:

  • accessibility (especially regarding information available on the internet),
  • large variety (you can use any data / materials related to the conducted evaluation),
  • no costs – most documents and data are available free of charge,
  • no evaluator’s effect on data in the case of external data.

DISADVANTAGES:

  • different levels of data credibility – you need to take into account the credibility of the source and the context of data acquisition (under what conditions, who collected and analysed the data and why),
  • restrictions on the access and use of internal information due to the protection of personal data, copyright and property rights.


3.2.2. INDIVIDUAL IN-DEPTH INTERVIEW (IDI)

An individual interview takes the form of a direct conversation between the interviewer and the respondent, usually conducted using a scenario. The interview allows you to obtain extensive, insightful and in-depth information, get to know the opinions, experiences, interpretations and motives of the interviewee’s behaviour, examine facts from the interviewee’s perspective, and gain a better understanding of their views.

IMPORTANT TIP

The language of the interview should be adapted to the respondent. In interviews (especially with young people) use simple language and avoid specialist vocabulary (e.g. project jargon) that may cause misunderstanding of the questions asked and intimidate the interviewees.

 

CONDITIONS OF APPLICATION: Individual interviews should be conducted in quiet rooms that guarantee discretion. Interview recording is a common practice, but the respondent does not always agree – in such cases the researcher should take notes during the interview and complete them immediately after the meeting. It is recommended that the interview be conducted by an external expert to avoid situations in which the interviewee feels uncomfortable expressing honest opinions.

 

ADVANTAGES:

  • the possibility to discuss complex and detailed issues,
  • better understanding of the interviewee’s point of view (“getting into his/her shoes”),
  • getting to know facts in the situational context,
  • flexibility – the possibility to adapt to the interviewee and to ask additional questions not included in the scenario.

DISADVANTAGES:

  • unwillingness of some interviewees to express honest opinions due to lack of anonymity,
  • the impact of the interviewee’s personality traits on the findings obtained, e.g. difficulty in obtaining information from people who are taciturn, shy or introverted.

RESEARCH TOOL: the interview may be supported by an interview scenario, containing a list of questions or issues to be discussed. The interviewer can change the order of questions or add some questions during an interview if it is needed to better understand the issue.

Example Of IDI Scenario For The Project Team

Individual IDI Scenario

 

3.2.3. FOCUS GROUP INTERVIEW (FGI)

A focus group is a conversation between about 6-8 people, supported by a moderator who gives the group issues for discussion and facilitates its course. FGI participants are selected according to specific criteria set by the researcher, including their knowledge of the studied issues.

IMPORTANT

In the case of young people, the discussion should be divided into shorter forms, involving all the participants, so that they do not get bored too quickly. It is worth using multimedia tools, elements of gamification or non-standard solutions, e.g. a paper cube with questions, thrown by the participants themselves. It is helpful to write down a group’s opinions on a flipchart and record the group discussion.

 

CONDITIONS OF APPLICATION: The basic condition for the success of a group interview is correctly selecting people with specific information that they are ready to share. It is important to guarantee that the participants are comfortable by organising the interview in a quiet room of the right size with comfortable seating, a large oval / square table and a flip chart.

 

ADVANTAGES:

  • learning about different points of view, taking into account different opinions,
  • mutual verification and supplementation of information about the facts discussed by different persons,
  • the opportunity to observe interactions between participants,
  • obtaining relevant information from several people in a relatively short time.

DISADVANTAGES:

  • dynamics of group processes, including pressure for group consensus / cohesion, may lead to minority opinions not being disclosed, e.g. due to the group being dominated by a natural peer group leader,
  • risk of existing conflicts or bad interpersonal relations being transferred into the group, reducing the effectiveness of the research and the reliability of the findings obtained,
  • organisational difficulties (the need to gather a group of people at a particular place and time and to provide a properly equipped room)*.

RESEARCH TOOL: the tool used by the moderator for this method is an FGI scenario, which includes the principles of group discussion, specific issues / questions and guidelines regarding various forms of activity in which the moderator is to involve the participants.

FGI Scenario


*Both IDIs and FGIs can be conducted by remote means using online communicators.

 

3.2.4. OBSERVATION

This method is based on careful observation of, and listening to, the studied objects and situations (phenomena, events). The observation may be participant, partially participant or non-participant, depending on the degree of involvement of the researcher, who may act as an active participant in the events he or she observes or as an external, uninvolved observer. The observation can be carried out in an overt, partially overt or covert way*, i.e. all participants of the event may know that they are being watched, or only selected persons (e.g. the trainer and / or training organiser), or only the observer.

 

CONDITIONS OF APPLICATION: if the observation is non-participant, the observer should not come into contact / relations with the people being observed as this carries the risk of affecting the course of the observed events and behaviours.

 

ADVANTAGES:

  • providing information about a particular event / process during its course,
  • reporting facts without their interpretation by the participants (examination of actual behaviour, not declarations),
  • facilitating the interpretation of investigated events,
  • the opportunity to learn about phenomena usually hidden or unnoticeable or that people are reluctant to discuss.

DISADVANTAGES:

  • possible influence of the researcher on the course of events (the respondents’ awareness that they are being observed may change their behaviour),
  • limited observation range, difficulty in accessing all events,
  • the risk of subjectivity (the researcher may be influenced by stereotypes, or perceive and interpret events in favour of the observed group).

RESEARCH TOOL: The observation may be conducted using a research tool which is the observation sheet. Its use focuses the observer’s attention on selected issues and enables the recording of important information (e.g., the behaviour of people participating in the observed events), which may be not only qualitative, but also quantitative (the checklist).

Training Observation Sheet


* With regard to evaluation studies, we do not recommend covert observation, i.e. one that is not known to the people who are its subject.

 

3.2.5. CASE STUDY

This is an in-depth analysis of the studied issue using information from different sources and collected by various methods. Its findings can be presented in a narrative form. The analysed “case” could be a person, group of people, specific activities, a project or a group of projects.

The case study is used to:

  • get to know thoroughly and understand a particular phenomenon along with its context, causes and consequences,
  • illustrate a specific issue using a realistic example with a detailed description,
  • generate hypotheses for further research,
  • present and analyse best / worst practices to show what is worth doing and what should not be done.

CONDITIONS OF APPLICATION: This method requires time to collect and analyse various data regarding the phenomenon / object being studied, its context, processes, and mechanisms. Case studies are best used as a complementary method to other research methods.

 

ADVANTAGES:

  • is a source of comprehensive information on a given topic,
  • uses different points of view, which gives the description and analysis a wider perspective,
  • takes into account the context of the phenomena studied.

DISADVANTAGES:

  • usually requires the use of various sources of information, sometimes difficult to access,
  • it requires a lot of work and is time-consuming,
  • incomplete data results in low credibility of the described case.

 

3.2.6. SURVEYS CONDUCTED BY INTERVIEWERS

Quantitative methods rely on standardised measurement. Standardisation enables the collection and counting of quantitative data in a unified way, and also enables their statistical analysis. Standardisation covers:

  • Research tool (interview questionnaire) – the order, content and form of questions put to respondents,
  • The manner of recording respondents’ responses by selecting one option (on the scale) or several options from the “cafeteria” (a set of ready answers),
  • Behaviour of interviewers (pollsters) who are obliged to follow the instructions contained in the questionnaire during the interview.

Respondents’ opinions are transformed into numbers and saved in the database. Then, this information is analysed using statistical methods.

Questionnaire interviews are conducted by trained pollsters who read the questions from the questionnaire to the respondents and write down the answers obtained. There are the following techniques for this type of research:

  • Paper and Pencil Interview (PAPI),
  • Computer-Assisted Personal Interview (CAPI),
  • Computer-Assisted Telephone Interview (CATI).

 

3.2.6.1. Paper And Pencil Interview (PAPI) and Computer-Assisted Personal Interview (CAPI)

Both of these techniques are conducted in the field, in direct contact between the respondent and the pollster, using a paper questionnaire (PAPI) or an electronic version displayed on a laptop or tablet (CAPI). The pollsters read out the questions included in the questionnaire and then mark the answers given by the respondent.

 

CONDITIONS OF APPLICATION: suitable for a wide range of topics; a direct, face-to-face (F2F) meeting between the interviewer and the respondent is required. The best place for the interview is one isolated from noise and the presence of third parties (in home / work conditions, make sure that bystanders, such as family members or colleagues, do not influence the respondents’ answers).

 

ADVANTAGES:

  • personal, close contact with respondents (the possibility to observe non-verbal signals, respond to misunderstanding of the question or tiredness of the respondent),
  • greater readiness of respondents for a longer interview and more difficult questions than during CATI,
  • with CAPI data is automatically saved during the interview.

DISADVANTAGES:

  • higher costs, including time and cost of travel and arranging a personal meeting with the respondent,
  • lack of a sense of anonymity of the respondent,
  • uncontrolled influence of the pollsters on the respondent’s answers (the interviewer’s effect*),
  • with PAPI, the interviewer must manually enter the data from the questionnaire into the database after the interview, which is time-consuming, adds costs, and involves the risk of mistakes.

* This is the influence that the interviewer exerts on the respondent during the survey. The respondent unconsciously interprets the interviewer’s social characteristics (e.g. gender, age), assuming what is expected of him/her. The interviewer may also unknowingly send signals to the respondent suggesting the “right” answers.

 

3.2.6.2. Computer-Assisted Telephone Interview (CATI)

This type of interview is carried out by phone. The interviewer reads the questions displayed on the computer screen, and after receiving the answers marks them in the electronic questionnaire on his/her computer.

 

CONDITIONS OF APPLICATION: suitable for studying established opinions and attitudes, using questions that do not require longer reflection, due to the short duration of this type of interview (max. 10-15 minutes) and its specific channel of transmission and reception of information (no possibility of re-reading a question several times at one’s own pace).

 

ADVANTAGES:

  • shorter time and lower cost of reaching the respondent compared to face-to-face interviews (PAPI, CAPI),
  • time flexibility (the possibility to adjust the interview time to the respondent’s preferences, to stop the interview and continue it at a convenient time for the respondent),
  • easy management and control of pollsters’ work,
  • automatic saving (coding) of data during the interview.

DISADVANTAGES:

  • possible difficulty in obtaining respondents’ phone numbers (due to the lack of access and / or protection of personal data), and in the case of employers, no personalised contacts (having only the reception / headquarters phone numbers),
  • interview time limited to 10-15 minutes (due to the limited concentration and short duration of the respondents’ involvement),
  • the tendency of respondents to choose extreme answers, or the beginning and end points on the scale (resulting from the specific channel of information transfer, which enhances the ‘primacy effect’ and the ‘recency effect’).

 

3.2.7. SELF-ADMINISTERED SURVEYS

In self-administered surveys, respondents read the questions and mark their answers in the questionnaire on their own (without a pollster’s participation).

 

CONDITIONS OF APPLICATION: these surveys can be carried out with a paper or online questionnaire (i.e. Computer-Assisted Web Interview – CAWI). In the latter case, respondents receive a link to the website with the questionnaire, which they can complete on a computer, tablet or smartphone. After they answer, the data is sent to the server, where it is automatically saved in the database.

A very effective way of collecting quantitative data is the central location survey, in which questionnaires are filled in by people gathered at the same time in one room, e.g. after a training, workshop or conference. It is necessary to ensure that respondents fill in the questionnaires themselves (without support from other people).

 

ADVANTAGES:

  • the short time it takes to obtain information (especially in the case of a central location),
  • lower cost compared to questionnaire interviews conducted by pollsters,
  • a sense of anonymity for the people completing the survey,
  • no interviewer effect.

DISADVANTAGES:

  • respondents’ motivation to complete the questionnaire may be lower without an interviewer present,
  • lack of control over the process of completing the survey*,
  • the risk of respondents consulting their answers with other people**.
PRACTICAL TIP

The survey questionnaire must:

  • be short, easy, visually attractive to encourage a response,
  • have all necessary explanations, which in other methods are given by the interviewer,
  • have clear instructions (paper version) or skip-logic algorithms (electronic version) leading the respondent to the relevant questions (based on previous answers, irrelevant questions are filtered out and omitted) – see the minimal sketch below.
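To make the skip-logic idea concrete, here is a minimal sketch in Python. The questions, answer options and routing rules are purely hypothetical; online survey (CAWI) platforms implement the same mechanism through their own configuration screens rather than code.

```python
# A minimal sketch of questionnaire skip logic (hypothetical questions and routing).
# Each question defines a rule deciding which question comes next, so respondents
# are never shown questions that do not apply to them.

QUESTIONS = {
    "Q1": {
        "text": "Did you take part in the training? (yes/no)",
        "next": lambda answer: "Q2" if answer == "yes" else "Q4",  # non-participants skip Q2-Q3
    },
    "Q2": {
        "text": "How useful was the training for you? (1-5)",
        "next": lambda answer: "Q3",
    },
    "Q3": {
        "text": "What would you change in the training?",
        "next": lambda answer: "Q4",
    },
    "Q4": {
        "text": "Would you recommend our project to others? (yes/no)",
        "next": lambda answer: None,  # end of the questionnaire
    },
}

def run_questionnaire():
    """Ask the questions in order, following the routing rules."""
    answers = {}
    current = "Q1"
    while current is not None:
        question = QUESTIONS[current]
        answers[current] = input(question["text"] + " ").strip().lower()
        current = question["next"](answers[current])
    return answers

if __name__ == "__main__":
    print(run_questionnaire())
```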

 

Questionnaire For Training Participants


* Instead of the intended respondent, the survey may be completed by another person, which distorts the representativeness of the sample.
** Especially in the case of a central location conducted without the researcher’s supervision.

 

3.2.8. ACTIVE / WORKSHOP METHODS OF GROUP WORK WITH YOUNG PEOPLE

 

Below we present additional active methods of collecting data (mainly qualitative), which can be particularly useful in group work with young people: they are engaging, integrate the group, facilitate cooperation and support the development of soft skills.

Active methods are workshop methods of collecting information that can complement the “classic” methods of evaluation research. They allow you to get quick feedback on a particular action, learn about participants’ ratings, feelings and impressions, and develop recommendations. These methods are worth using during workshops, trainings or conferences to make the meeting more attractive, get to know the participants and better adapt the project activities to their needs.

 

ADVANTAGES:

  • speed – you receive instant feedback during the classes / meetings,
  • casual atmosphere,
  • the projective nature of tasks / questions makes it easier to formulate critical opinions and propose new solutions,
  • possibility to jointly collect qualitative and quantitative data,
  • stimulating self-reflection,
  • a positive impact on the well-being of participants (satisfying the need for expression, acceptance, integration).

DISADVANTAGES:

  • you cannot generalise the obtained opinions to a wider community (not participating in the meeting),
  • the need for an experienced trainer / moderator to moderate / facilitate,
  • the lack of anonymity of the participants in the case of group reporting and discussion (threat to mental well-being and group relations for people who are particularly vulnerable or have a weak position in the group).

Below you can find examples of active methods implemented in the form of a workshop.

 

CLOTHESLINE

The purpose of this tool is to get to know the expectations of the project audience. It is a visual method of collecting qualitative data.

Each participant receives drawings of clothes (e.g. shirt, underwear, trousers, socks), each symbolising a type of expectation towards the project – for example hopes, fears, needs, suggestions, etc. Participants are given sufficient time to reflect and fill in the individual drawings / garments. After writing down their ideas, each of them “hangs their clothes” on a string hung or drawn in the room. Participants can read their expectations aloud and look at others’ “laundry”.

 

TELEGRAM

This tool allows you to quickly summarise part of the meeting (workshop, training) to learn about the mood in the group.

The participants are asked to think about a particular fragment of the classes and describe their reflections with three words: positive, negative and summative (e.g. intense – tiredness – satisfaction). Each person reads their words, which allows for a joint summary of the activities (you can write them down on post-its and stick them on a flipchart, etc.).

 

HANDS

The purpose of this tool is to find out opinions on selected aspects of the project or a part of it (e.g. training, internship), as well as to summarise the course and effects of the classes. People participating in the workshop receive sheets of paper on which they draw the outline of their hand. Each finger is assigned one assessment category, e.g.:

  • On the thumb – what was the strongest / best side of the training / project,
  • On the index finger – what I will tell my friends about,
  • On the middle finger – what was the weakest point of the training / project,
  • On the ring finger – what I would like to change (element needing improvement),
  • On the little finger – what I have learned or found out.

Participants enter their opinions on each of the fingers in accordance with the above categories. The exercise can be used to find out about the opinions of individuals and / or for group discussion.

 

EVALUATION ROSE

This method is used to gather feedback on many aspects of a project / activity at the same time. It is a visual method that allows you to collect quantitative data – assessments of various aspects of the assessed object using a joint scale.

Participants receive cards with an “evaluation rose” drawn on them. The drawing is inspired by the “wind rose” – instead of compass directions, it presents various aspects of the evaluated object (e.g. the usefulness of the training, the attractiveness of the way the content was conveyed, the adequacy of the time spent on training). Divide the axes into sections and assign selected values to them (e.g. a 1-5 scale, where 1 is the weakest grade and 5 the best). Participants are asked to mark their assessment on each axis of the “evaluation rose”. You can then connect the points and get a visually attractive picture of the opinions (the final effect resembles a radar chart).
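If the completed roses are later entered into a spreadsheet, the same picture can be reproduced digitally. Below is a minimal sketch in Python using matplotlib; the aspect names and scores are hypothetical examples, and matplotlib is assumed to be installed.

```python
# A minimal sketch: plotting "evaluation rose" ratings as a radar chart.
import matplotlib.pyplot as plt
import numpy as np

aspects = ["Usefulness", "Attractiveness", "Time allocated", "Materials", "Trainer"]
scores = [4, 5, 3, 4, 5]  # one participant's ratings on a 1-5 scale

# One axis per aspect, evenly spaced around the circle; repeat the first
# point at the end so that the outline closes.
angles = np.linspace(0, 2 * np.pi, len(aspects), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(aspects)
ax.set_ylim(0, 5)  # the 1-5 rating scale
plt.show()
```

Group results can be shown in the same way, e.g. by plotting the average score per aspect.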

 

TALKING WALL

The purpose of this method is to gather opinions on the value of a particular project activity or the entire project. Thanks to its application, you can obtain qualitative data (types of opinions) and quantitative data (how many people share a particular opinion).

Hang five large sheets of paper on the wall. On each of them, put a question about the conducted activities, e.g.:

  • Sheet 1: What new things did you learn during the training?
  • Sheet 2: How will you use the knowledge acquired during the training?
  • Sheet 3: What did you like the most about the training?
  • Sheet 4: What did you like least about the training?
  • Sheet 5: What would you change in this training?

Participants write down their answers on each sheet or – if their opinion is already there – add a plus / dot next to it. At the end, the facilitator summarises the entries and encourages the group to discuss them and develop recommendations. This form of collecting opinions encourages greater openness; participants gain a sense of agency and overcome their reluctance to speak in public.

 

RUBBISH BIN AND SUITCASE

With this method, you can get a summary of a training or other project activity. It allows you to collect information on elements that participants found useful, redundant or missing.

Draw a suitcase, rubbish bin and sack on the blackboard / flipchart. Each of the figures symbolises one category of opinion about the evaluated activity:

  • Suitcase: “What do I take with me from the training?” (what will be useful to me, what will I use in the future),
  • Rubbish bin: “What was unnecessary during the training?” (what is not useful to me, what was redundant),
  • Sack: “What was missing?” (what should be added to the next training).

Then you can ask the participants to speak or write down their opinions on sticky notes or directly on the pictures on a flipchart.

 

PRACTICAL TIPS FOR CONDUCTING GROUP ACTIVITIES

It is good for the participants to sit in a circle so that everyone can see each other. To increase their involvement, you can propose that they themselves indicate the next person to speak, e.g. by throwing a ball (use this solution only if no one in the group is being discriminated against). Oral statements should be noted down – this can be done by the person conducting the classes while they are taking place (e.g. on the blackboard or a flipchart) or by an assistant.

 

3.3. How to choose appropriate research methods

Research methods must fit well with the evaluation concept and plan. To make the right choice, consider whether the methods are relevant to:

  • The purpose, subject, scope and type of evaluation, as well as the evaluation criteria and questions – will these methods provide you with the information necessary to answer your evaluation questions?
  • The data sources from which you plan to obtain information – will the methods be appropriate for gathering information from the groups that will take part in the evaluation research?
  • The characteristics of the interviewees / respondents – do the methods take into account group size, their perceptive capabilities, communication abilities, health condition, etc.?
  • The circumstances of the data collection – will all the necessary data and interviewees / respondents be available at a particular moment? Will the chosen method suit the place of data collection?
  • The resources you have access to – does the method require qualified or independent researchers and other resources (organisational, technical, financial, time)? Will you be able to apply the method on your own with the resources available?

Knowledge of research methods (quantitative and qualitative) and related tools will help in preparing the second part of the evaluation concept (see chapter 2.4, tool 4), which will be supplemented with methodological issues. This element enables you to gather information to answer evaluation questions.

Tool 6: Logic Matrix Of The Evaluation Research

 

3.4. How to design research tools

A common mistake is to start an evaluation by creating research tools, e.g. a questionnaire for project recipients. Remember that you will not be able to choose the right research methods or prepare the right measurement tools (e.g. scenarios, questionnaires, observation sheets) in isolation from the overall concept of the evaluation. Therefore, start constructing research tools only after determining:

    • The subject, scope and purpose of the evaluation,
    • Evaluation criteria and questions,
    • Studied groups of people and research methods.

Without referring to the above elements, you will not be able to create correct research tools: you may include questions that are unrelated to the purpose of the research, making it impossible to answer the evaluation questions and apply the evaluation criteria. “Bad” tools contain useless questions, are overloaded or incomplete, do not provide relevant information and do not allow for the formulation of meaningful recommendations.

The questions included in the research tools are a particularisation of the evaluation questions. Remember that evaluation questions are questions the evaluators ask themselves, not the respondents! The two types of questions should not be confused, as they are formulated in languages adjusted to the needs of:

  • Evaluators / evaluation stakeholders → evaluation questions,
  • Studied groups of persons (interviewees, respondents)→ questions in research tools.

If you are not sure whether a particular question should be put to the interviewees / respondents, consider whether they will be able to answer it and whether the information obtained will allow you to answer the evaluation questions and formulate useful recommendations.

 

HOW TO ASK QUESTIONS

  • The number of questions included in the tools should be appropriate to the purpose and duration of the research.
  • Research tools should have a transparent structure, with the main issues identified (e.g. “reasons for joining the project”, “assessment of different types of support”, “effects of participation in the project”). Topics should be grouped thematically (e.g. organisational issues).
  • Questions should be asked in a specific order. Put preliminary questions (relatively easy) at the beginning of your tool. They should be followed by introductory questions in the subject (not very difficult), then main questions (key for the purpose of the research). Put the most difficult questions in the middle of the tool. Finally, ask summary and closing questions.
  • Questions should be asked in a logical order that cannot surprise or confuse the research participants. Each question should follow on from the previous one or – in the case of an interview – refer to the respondent’s statements.
  • The language of an interview should be easy to understand: use as short sentences as possible, use a language close to the research participants – without foreign words, specialised terminology, jargon, abbreviations.
  • Questions should be formulated precisely – e.g. there should be no doubt about what period of time they relate to (do not ask “whether recently…”, but “whether in the last week / month / year…”).
  • Do not ask about several issues in one question (“what are the strengths and weaknesses of the project?”) and do not use negative questions (“shouldn’t you…”, “don’t you prefer…”). Each of these errors makes it difficult to understand the questions and to interpret the answers.
  • Questions and proposed answers must not put the research participants in a sensitive position – they must not force the disclosure of traumatic experiences or declarations of behaviour or beliefs contrary to law or morality. When anonymity is not guaranteed, do not ask about property status, family matters or health issues.
  • Do not ask questions that suggest an answer – do not present any of the options as more compliant with law or morality, and do not refer to authorities or the opinion of the majority.

The differences between quantitative and qualitative research tools, the structure / construction of scenarios and questionnaires and the most common mistakes in their design are discussed in the online course.

IV. CONSIDERATIONS WHEN EVALUATING PROJECTS AIMED AT YOUNG PEOPLE AGED 15-24

 


 

When undertaking the evaluation of projects aimed at young people aged 15-24, you should take into account that people of that age differ from adults, mostly because of their legal situation, living and technological conditions, and psychological and social needs related to the intensive development processes on the verge of adulthood.

 

4.1. What are the standards of conducting research on young people?

The United Nations Convention on the Rights of the Child and many additional provisions in individual countries guarantee special legal protection to persons under the age of 18. According to the law, a person under the age of 18 is a child. Although in most countries one acquires certain rights at the age of 15 (for example, the right to choose one’s school or to take up work), a minor’s participation in YEEAs projects, as well as in various types of research, requires the consent of a parent or legal guardian.

 

4.1.1. Consent for a minor’s participation in evaluation research

  1. Consent for participation in evaluation studies from both the minor and his/her parent or legal guardian must refer to the specific research (name of the research or evaluated project and the entity or entities conducting it).
  2. The person giving consent for a minor’s participation in the research should receive all the necessary information, such as:
    1. The purpose of the research and how the findings will be used,
    2. The scope and method of collecting information to be obtained from the research participant, including whether the research requires multiple contact with the participant, especially a long time after the first round of research,
    3. Assurance of anonymity and protection of confidentiality of data obtained about the participant in the research,
    4. Information about the right to refuse to participate in the research and to withdraw from participation at any stage.
  3. It should also be remembered that in EU countries it is necessary to obtain consent for the processing and storage of personal data.
  4. If sound or video recording devices are to be used, explicit consent for recording must also be obtained.
  5. Examples of documents used to obtain consent for a minor’s participation in research are included in the Annexes (Annexes 1 and 2).

It is worth obtaining such consent at the beginning of the evaluated project, because it can be combined with the more general consent for the minor’s participation in the project (e.g. in the same document).

 

4.1.2. Protection of minors in the ethical codes of professional researchers

The basic guidelines for conducting research among people under 18 are:

  • Obtaining informed consent (described above) from the minor and their legal guardian,
  • Providing the research participants with a sense of security (e.g. the researcher does not attempt to make first contact with minors without the presence of the adult responsible for the child – a teacher, guardian or parent; the person collecting the information has documents confirming their status as a researcher; the training and experience of the people conducting the research guarantee both safety and a way of carrying out the research appropriate to the specific needs of young people),
  • Ensuring that all the information provided, including the questions put to the interviewees / respondents, can be understood (it is helpful in this respect to test quantitative tools on a small scale before applying them and to discuss the tools with specialists),
  • Ensuring that the scope or method of obtaining information from young people will not directly cause any material or non-material harm, including harm related to mental well-being and social relations; this applies in particular to such issues as:
    • Sensitive issues that lower the sense of autonomy or self-esteem,
    • Relationships with their peer group and other important people.

If you have any doubts, it is worth consulting specialists.

  • Compliance with the general principles of social research, including in particular:
    • Guaranteeing the confidentiality of information obtained from the research participants at the stage of data collection (no participation of other people apart from the researchers and the respondents), during data processing (anonymisation / pseudonymisation), and when publishing the findings (collective presentation of quantitative data, pseudonymisation of qualitative data),
    • Ensuring the anonymity of the research participants,
    • Ensuring the safety and undisturbed work of the researchers.
  • Standards for conducting research on minors are included in the codes of ethics in force in the communities of professionals conducting social and market research.

4.2. How to adjust the methodology of evaluation research to a young person’s way of life?

 

4.2.1. Major activity – formal education

Studying is the dominant activity in the life of young people aged 15-24. For instance, in Poland participation in formal education is compulsory until the age of 18, although training in the form of “vocational preparation” combined with paid work is also allowed. However, the findings of the Labour Force Survey show that the vast majority of those aged 18-24 still participate in organised forms of education. Young people study full-time in schools or colleges, but often also part-time, attending courses or training. Many YEEAs activities are also conducted in the form of group learning. Grouping the beneficiaries of the evaluated project in one place and at one time allows you to carry out various evaluation-related activities, primarily collecting data through observation, central location surveys, focus group interviews, etc.

However, bear in mind that when conducting research in educational institutions, you should ensure appropriate conditions for collecting data, such as an isolated room and dedicated time (respondents should not be under time pressure).

 

4.2.2. Weak position on the labour market

One of the basic elements of the situation of young people, which is also the main area of influence of YEEAs projects, is their position on the labour market. In studies devoted to this subject, you should take into account that:

  • In the 15-24 age group, only about every third person performs any paid work (including unpaid help in a family member’s gainful work) – so you should never ask questions that assume a particular person is working or has income from work,
  • Work by young people, especially those under the age of 18, occurs in highly diversified, often atypical forms, e.g.:
    • unpaid help in the gainful work of a close family member,
    • one-time, occasional or holiday jobs, part-time work, replacements, “trial” work,
    • various types of internships, apprenticeships and vocational preparation, in which the proportions of study, work and earnings vary widely and may or may not be considered work,
    • work in exchange for accommodation, food and “pocket money”,
    • promoting products or services on social media in exchange for the goods or services received,
    • voluntary work with various levels of covering one’s own costs,
    • work performed under various contracts, ranging from regular employment contracts to specific-task contracts,
    • undeclared work (e.g. tutoring) and income from illegal activities.

When asking young people about work, you need to define precisely what kind of activity you consider to be work and / or which features are decisive for you (legality, type and amount of remuneration, working time, stability, linkage with educational obligations, legal form).

 

4.2.3. Increased mobility

People aged 15-24 change their place of residence much more often than older people. They also exhibit higher than average daily mobility. As a result, traditional methods of collecting quantitative data based on a home address do not work well for young people – a postal questionnaire is often sent to an address that no longer applies, or the interviewer arrives when no one is there.

Therefore, in the case of young people it is particularly important to obtain mobile contact details, such as a phone number or the name of an individual profile on a messaging app, and then to base the data collection strategy on these contact details, using electronic tools. The findings of studies using both a postal questionnaire and the CAWI method show that the response rate for the latter is much higher, and the difference increases the younger the respondents are.

 

4.2.4. Dominance of smartphones in everyday communication

Young people are more willing than older people to use electronic technologies instead of paper. They are also much more proficient with them, and prefer to deal with everyday matters on a smartphone rather than a computer. Therefore, in research among young people it is worth using electronic research tools, preferably adapted to smartphones (one simple question per screen, a simple and legible layout, not too long a list of answers). One example of an application that can be used for working with young people is Kahoot.

 

4.2.5. Busy and overstimulated life

A characteristic feature of modern youth is their openness to the many stimuli delivered via smartphones, which young people never let out of their sight. Moreover, learning, developing one’s own interests and, above all, social life fill young people’s attention so completely that unusual or less important obligations, such as filling out a questionnaire, are easily forgotten. To counteract this, regularly send messages reminding participants about the dates of scheduled interviews, their promise to complete a survey, etc.

 

4.2.6. Widespread use of social media

The widespread use of social media by young people, including their presence in numerous social media groups, is increasingly being used for research purposes. It is possible to find groups of young people from a particular locality or school, as well as groups with specific musical or ideological interests, etc. After joining a group, possibilities open up for recruiting research participants (e.g. for a comparison group). You may consider asking individual group members a question as a researcher or (if the group moderator agrees) publicly posting a link to the online survey or a request for contact. It is better not to open a public discussion at the group level, as this prevents the research from being confidential, exposes the participants to being assessed by other group members, and the public nature of the statements lowers their credibility.

Following the example of market research agencies, you could also consider establishing a special community group (the MROC method – Market Research Online Communities), in which young project beneficiaries would agree to participate. Such activities, however, require a precise definition of the group’s goal. If the purpose is research, it should be a short-term group (MROC), professionally moderated for the duration of the research, similarly to Focus Group Interviews (FGI).

 

4.2.7. Difficulties in reaching NEETs

Difficulties characteristic of research among young people intensify when the evaluated project is aimed at young people who are neither studying nor working and are not covered by any form of education, support or institutional supervision that groups them (NEETs). Reaching young people in such a situation is a serious challenge, especially when you need data for comparisons with the NEETs who participate in the project.

Often, the only solution to this type of problem is to compare groups participating in different projects from the same programme, or to compare the results obtained in the group covered by the project with the group of candidates who did not become its beneficiaries (taking into account the impact of the reasons for not qualifying for the project).

4.3. How to deal with the psychological and social needs of young people

4.3.1. Increased need for confidentiality of the provided information

The key psycho-social factor that should be taken into account when planning and conducting research involving young people is their particular susceptibility to influence. This results both from their still-forming personality and from a fear of judgement, or even sanctions, on the part of the peer group and the adults on whom the young person depends mentally and financially. The latter include project staff. Taking this into account, one should:

  • Inform the research participant about the confidentiality of the information provided and about the measures taken for this purpose, both in the way data is collected and in its anonymisation at the stage of data analysis and use of the findings,
  • Back up these assurances with concrete measures, e.g. conducting interviews (IDI, FGI) without the participation of third parties and creating conditions for completing questionnaires that guarantee anonymity and confidentiality, such as having central-location (auditorium) questionnaires dropped into a collection box.

 

4.3.2. Increased need for autonomy and emancipation

According to the findings of developmental psychology, people aged 15-24 are – as they shape their identity – particularly sensitive to issues related to respect for their freedom. Consequently, their right to participate or not to participate in research should be clearly communicated, and the reasons for and consequences of each available choice should be clearly explained. This, however, is only a necessary condition.

On the other hand, positive motivation for young people to participate in evaluation research can be created by responding to their need to move from subordinate, executive positions to the role of co-decision makers and co-creators. For young people to be really involved in evaluation research, you have to treat them as partners with various roles – including decision-making and consultative roles – in addition to the classic role of research subject. This can be achieved by involving them in the various stages of the evaluation process, from reporting information needs, through co-deciding on priorities, planning and participating in implementation, to consulting the findings (see section 2.2).

V. DATA ANALYSIS

 


 

Once you finish collecting the data, you should start analysing it. This means using all the research material (information obtained with various methods) to answer the evaluation questions and to value the evaluated project according to the chosen criteria. At this stage it is therefore worth going back to the evaluation concept, which acts as a compass, leading the evaluator through the entire research process (not only information collection, but also data analysis, drawing conclusions and formulating recommendations).

 

The purpose of data analysis is:

  • Compilation and verification of collected information,
  • Description, assessment and juxtaposition of the quantitative and qualitative data that is obtained (checking how reliable and consistent they are),
  • Identification and explanation of various cause and effect relationships that will allow you to understand the mechanisms of the studied phenomena,
  • Interpretation of the obtained evaluation findings in relation to wider knowledge about the subject of the evaluation (evaluandum),
  • Obtaining detailed answers to evaluation questions and credible valuing of the evaluandum according to chosen criteria,
  • Drawing conclusions from the collected information and formulating useful recommendations based on it.

In the data analysis, you should bear in mind the principle of triangulation, i.e. the compilation of data obtained from various sources, using various research methods, by different researchers. Thanks to this, you have the opportunity to supplement, deepen and verify respective information in order to obtain a full picture of the evaluated project.

Although the actions undertaken during data analysis – such as reduction, presentation and drawing conclusions – are common to both types of data (quantitative and qualitative), the findings obtained take a different form for each of them. A comparison of these data types is presented in the table below.

Before starting the data analysis, it is necessary to check whether all research materials have been anonymised, i.e. contain no personal data (names, surnames, addresses – including e-mail addresses – telephone numbers, etc., or contextual information enabling the identification of research participants). Interviewees who participated in the qualitative part of the research (IDIs, FGIs) are given pseudonyms, e.g. reflecting features important to the researcher. The personal information concerning research participants should be kept separate from the content data they provided.
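As an illustration, here is a minimal sketch in Python of separating personal data from content data through pseudonymisation. All file and field names are hypothetical; the key file linking pseudonyms to personal data must be stored securely, apart from the research material.

```python
# A minimal sketch of pseudonymising participant data before analysis.
import csv

participants = [
    {"name": "Anna K.", "email": "anna@example.com", "transcript": "interview_01.txt"},
    {"name": "Jan N.", "email": "jan@example.com", "transcript": "interview_02.txt"},
]

key_rows, content_rows = [], []
for i, person in enumerate(participants, start=1):
    pseudonym = f"P{i:02d}"
    # The key links pseudonyms to personal data - store it separately and securely.
    key_rows.append({"pseudonym": pseudonym, "name": person["name"], "email": person["email"]})
    # The content file used in the analysis contains no personal data.
    content_rows.append({"pseudonym": pseudonym, "transcript": person["transcript"]})

with open("pseudonym_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["pseudonym", "name", "email"])
    writer.writeheader()
    writer.writerows(key_rows)

with open("interviews_pseudonymised.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["pseudonym", "transcript"])
    writer.writeheader()
    writer.writerows(content_rows)
```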

 

There are four main stages of data analysis:

1. Selection and ordering of the collected research material – during this stage, the correctness and completeness of the data are checked, the reliability of every piece of information is verified (thanks to triangulation), and data that is not useful for the purpose of the evaluation is removed. You should compile all the information in a form that facilitates further analysis – recordings of interviews can be transcribed or written down according to a previously prepared scheme (which includes a summary of the respondents’ statements). In the case of a survey, uncompleted questionnaires should be removed from the analysis, etc.

2. Constructing analytical categories (selecting the type of encoding and data coding – their categorisation and classification) – this means assigning codes / “labels” to each piece of information obtained, representing specific categories of information, thus allowing for the organisation of the research material.

  • In the case of closed-ended questions, the answer codes take a numerical form (e.g. “female” = 1, “male” = 2), which allows you to analyse the obtained data using statistical programs (or spreadsheets). First, you need to create a coding instruction that contains the names of the codes and the numbers used in the questionnaire to identify the answers given by respondents to particular questions. Paper surveys require manual coding – you need to number the answers in the questionnaire, code the answers and enter this information into the database; electronic surveys are coded automatically (see the sketch after this list).
  • In the case of open-ended questions and other qualitative data, the codes for particular answers take a verbal form (e.g. “training organisation”, “conducting a training”). Codes for qualitative data can be planned before or after reading the entire material. The first method is “top-down” coding, which builds on a good knowledge of the research problem and / or its grounding in a given theory. The second is open (“bottom-up”) coding, in which categories are identified in the collected material (e.g. in relation to the research questions). In both cases, you need to develop a coding scheme that organises the codes (establishing a code hierarchy with superior / collective codes and detailed codes), so that you can present the collected information in a consistent form.
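As a simple illustration of numeric coding, here is a minimal sketch in Python (using pandas, which is assumed to be installed). The questions, answers and code numbers are hypothetical; the same mapping can be done in a spreadsheet or a statistical program.

```python
# A minimal sketch of applying a coding instruction to closed-ended answers.
import pandas as pd

# Raw answers as they might arrive from paper questionnaires.
raw = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "gender": ["female", "male", "female", "female"],
    "training_useful": ["yes", "no", "yes", "yes"],
})

# Coding instruction: the code names and the numbers identifying each answer.
coding_instruction = {
    "gender": {"female": 1, "male": 2},
    "training_useful": {"yes": 1, "no": 2},
}

coded = raw.copy()
for question, codes in coding_instruction.items():
    coded[question] = coded[question].map(codes)  # e.g. "female" -> 1

print(coded)  # numeric data ready for a spreadsheet or statistical program
```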

The information corresponding to the given codes can be summarised in one table, which will make it easier to find elements that are similar or common across research participants, as well as information that differentiates them. It also allows you to see the relationship between the interviewees’ characteristics or situation and their statements.

Tool 7: Table For Summarising Information From Interviews

3. Analysis and interpretation of the obtained findings (explanation and assessment by the researcher of a particular issue / problem)

Data analysis is an important element of evaluation because it allows you to summarise the findings and find common and divergent elements in the collected materials. It is worth choosing and describing the method of data analysis at the stage of planning the evaluation. Data obtained during evaluation can be analysed in a number of ways. The simplest distinction is division into:

  • Quantitative data analysis (numbers, answers to closed questions) – for simple analyses you can use, for example, MS Excel, and for more complex analyses statistical programs, such as SPSS or Statistica, operated by specialists, whose services can be used if necessary.
PRACTICAL TIP

For small groups, quantitative data should not be presented in the form of percentages, e.g. by reporting that 20% of respondents in a group of ten hold a particular opinion. It is better to use absolute numbers and say that two people do.
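A minimal sketch in Python of this tip (hypothetical answers; pandas assumed to be installed):

```python
# With ten respondents, report counts, not percentages.
import pandas as pd

answers = pd.Series(["yes", "yes", "no", "no", "no", "no", "no", "no", "no", "no"])

counts = answers.value_counts()                       # no: 8, yes: 2
shares = answers.value_counts(normalize=True) * 100   # no: 80.0, yes: 20.0

# With n = 10, say "two people answered yes" rather than "20% of respondents".
print(counts)
```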

  • Qualitative data analysis (e.g. text, interview statements) – for simple analyses, it is enough to compile the data in a chart / matrix, and for more extensive research material, it is worth using programs that facilitate the analysis, e.g. QDA Miner, OpenCode, Weft QDA.

Some of these programs are briefly presented in the table below:

Own elaboration based on: Peersman, G. (2014). Overview: Data Collection and Analysis Methods in Impact Evaluation, Methodological Briefs: Impact Evaluation 10, UNICEF Office of Research, Florence.


 

IMPORTANT TIP

When analysing the data, it is very important to determine what changes have occurred as a result of the project and what role the respective activities played in them. Therefore, it is necessary to answer the question of the extent to which the project activities influenced the achievement of the assumed result indicators, and the role they played among the other factors influencing the expected changes (see chapter 2.5).

 

When analysing data, it is worth referring to the previously described theory of change adopted as part of the description of the project logic. When planning the change at the beginning of the project, you made certain assumptions about the conditions that must be met (resources provided, activities implemented) in order to achieve the given results, i.e. you planned the cause-and-effect chain. Evaluation verifies your theory of change – it can confirm it or reveal gaps in it (e.g. missing / redundant elements) and recommend improvements for the future.

There are three general strategies for causal inference. Using a combination of these strategies can help to increase the credibility of the conclusions drawn:

Data analysis approaches for causal attribution with various options

Own elaboration based on: Rogers, P. (2014). Overview: Strategies for Causal Attribution, Methodological Briefs: Impact Evaluation 6, UNICEF Office of Research, Florence.

VI. REPORTING

 


 

6.1. How to make use of the results of data analysis?

After completing the qualitative and quantitative data analysis, you have a lot of information, which should be used properly and wisely. These data should be translated into knowledge that will allow you to make sound decisions regarding project improvement (e.g. how to adapt it better to the needs of its recipients, how to achieve similar effects using fewer resources, how to obtain greater impact and sustainability of the results).

Based on the findings of conducted analyses, you can draw conclusions that relate to phenomena or problems identified during the evaluation. These conclusions relate primarily to the issues described in the evaluation questions but may also include issues that were additionally diagnosed during the research.

In the evaluation report, you should present not only the findings of the evaluation research, but also their interpretation (i.e. reference to a broader knowledge of the studied issue), as well as the conclusions derived from the obtained data and the accompanying recommendations. The above diagram presents the relationships between these elements. To get through this process, you can use the questions that accompany the subsequent stages (in the diagram above they are marked in italics).

Below you can find an example of the process of formulating conclusions and recommendations regarding a training project directed to NEETs (the findings refer to the quantitative part of the research).

Tool 8: The relation between the evaluation’s findings, their interpretations, conclusions and recommendations

Remember to take into account various elements related to evaluation research, e.g. used methods (qualitative, quantitative), sample selection methods and degree of responsiveness (level of return of questionnaires), which may lead to some limitations when formulating conclusions.

 

RULES FOR FORMULATING THE CONCLUSIONS:

  • Treat your conclusions critically, look at them from a distance, constantly seeking alternative explanations for the phenomena found. It is always worth consulting your conclusions with another, preferably more experienced person (“a critical friend”) who – thanks to not being involved in the evaluation – will look at them with a “fresh eye”.
  • Make sure that you correctly interpret the statements given by the research participants, e.g. by checking your conclusions with them. If you are not completely sure about a conclusion, soften it by using terms such as “probably”, “possibly” or “maybe”.
  • Do not generalise the conclusions for the whole population (i.e. people who did not participate in the research) if you used qualitative methods* or the sample you studied was not randomly chosen.
  • Learn how to avoid mistakes in drawing conclusions from our online course.

HOW TO FORMULATE THE RECOMMENDATIONS?

  • Group them thematically (e.g. project management, cooperation with partners, implemented activities, project effects).
  • Relate them to both strengths and weaknesses of the subject of evaluation. Don’t focus only on the negatives – also show those areas that work well and don’t need any changes. If you concentrate solely on positives, it will undermine the credibility of the evaluation.
  • Make sure that recommendations are detailed, precise and realistic (possible to implement), so that they are also practical, accurate and useful.
  • Assign to each recommendation: a recipient (with whom it will be agreed in advance), a deadline, and a degree of importance, as this increases the chances of them being implemented.

* In this case, the conclusions relate only to the persons who participated in the research.

 

Conclusions and recommendations can be presented in a concise table as a summary of the report, or as an independent “final product” of the evaluation. The following is an example of a recommendation table regarding the evaluation of a training project:

Tool 9A: Recommendations Table

In the simplified version, the table of conclusions and recommendations may look like this:

Tool 9B: Simplified Recommendations Table

 

6.2. What are the features of a good report?

The report is the finalisation of the evaluation process: it presents the evaluation concept, the course of the research and its findings, as well as the conclusions and recommendations based on them.

During the evaluation process, various types of reports may be written, e.g.:

The final report can be prepared in various forms, which – like the scope of content presented in them – should be tailored to the needs of individual groups of recipients (evaluation stakeholders). Examples of ways to present and promote evaluation findings include:

    • The final report in an electronic version (less often in a paper version), distributed to stakeholders and / or posted on the Internet (e.g. on the website of the project or of the entity commissioning the evaluation),
    • Summaries of reports in the form of information folders / brochures containing key conclusions and recommendations,
    • A multimedia presentation during conferences and meetings, e.g. with stakeholders, partners,
    • An infographic posted on the project website, on social media, and sent to local media,
    • Printed posters presented at various events, e.g. conferences, picnics,
    • Films (video presentations) addressed to large audiences (including a dispersed audience), and posted on the Internet,
    • Follow-up – a presentation on the effects of implementing the recommendations.

The report in the version of the extended text document may have the following structure:

    • Title page – name of the contracting institution, name of the institution conducting the evaluation (if the evaluation was external), date of preparation, authors, title (e.g. Ex-post evaluation of project X),
    • (Executive) summary – main elements of the evaluation concept, key findings, conclusions and recommendations (necessary for extensive reports),
    • Table of contents – enabling automatic access to a given page of the report,
    • List of abbreviations (and possible definitions of specialised terms),
    • Introduction – information on the commissioning institution, type and cut-off date of the evaluation, name of the evaluated project, sources of its financing, and organisation that has implemented it,
    • Subject and scope of the evaluation – a brief description of the evaluated project and its parts which were included in the evaluation,
    • Goals of the evaluation – explanation of what the evaluation was conducted for, what was expected of it,
    • Evaluation criteria and questions – an indication of how the value of the subject of the evaluation was estimated / what was supposed to be learnt through the evaluation,
    • Methodological issues – description of the sources of information and research methods used, sample selection methods, the course of the research, and levels of responsiveness (what percentage of respondents participated in the survey). It is also worth describing the problems encountered during the research, as well as the ways of dealing with them and the results,
    • Description of evaluation findings – a description of the qualitative and quantitative findings collected during the research, along with their interpretation, according to the adopted method of presentation (e.g. in accordance with evaluation criteria / questions). Findings from different sources and obtained with different methods should be confronted (by triangulation). Every chapter can present partial summaries,
    • Conclusions and recommendations – a concise but substantive answer to evaluation questions. The conclusions must be based on the findings of the study and the recommendations should be closely related to them,
    • Attachments / annexes (optional) – e.g. research tools used, tabular summaries, case studies, etc.

It is worth remembering that regardless of what form of report you choose, both in the case of external and internal evaluation, any changes to the content of this document require the consent of the evaluator.

  • If you want to learn more about the table of comments to the evaluation report, click here.

A good evaluation report should meet the following conditions:

  • be adequate to the terms of the contract and the needs of the recipients, be written in a language they understand,
  • contain a list of abbreviations used (and possible definitions of key terms when, for example, a report is to be presented to a wider audience that may not know them),
  • have a clear and legible structure,
  • have a concise form, and at the same time comprehensively answer evaluation questions (without “waffling”),
  • be based on credible and reliable findings that have been properly analysed,
  • present not only the obtained findings but also their interpretation, and indicate the relationships between the data and the conclusions,
  • contain justified conclusions and useful recommendations related to them,
  • contain graphic elements (tables, charts, diagrams) and quotes from respondents’ statements that make the reception of the report content more attractive.

The following table will help you in verifying the quality of the evaluation report. It contains detailed criteria for its assessment. You can choose its scale (numeric or verbal) and assess your own or a commissioned report.

Tool 10: Report Quality Assessment Table

 

6.3. How to deliver what is needed for the recipients of your evaluation

The possibility of using evaluation findings depends on its type, i.e. the moment in the project’s life cycle at which the evaluation is carried out.

The greatest scope for introducing changes is provided by ex-ante evaluation, carried out before the evaluated undertaking / project has started.

In the case of mid-term evaluation, the opportunities for using recommendations to introduce specific changes are limited as the project is in progress and individual actions are gradually implemented. Nevertheless, some of its elements may still be modified, e.g. in order to better adapt the ongoing activities to the needs of their beneficiaries, to ensure that the planned indicators are achieved at the assumed level, or to adapt them to the changed project implementation conditions.

The findings of ex-post evaluation can only help you in planning subsequent (same or similar) projects, because the evaluated project has already been completed.

When evaluation findings are related to organisational or management issues, you can use them for current work.

The dissemination of evaluation findings (most often in the form of conclusions and recommendations) among its stakeholders is a very important stage, as it contributes to a better understanding of the need for change, to strengthening the cooperation, commitment and motivation to act, as well as to obtaining support in this process.

Sharing the findings of the evaluation with other people / entities may show your ability to self-reflect on the value and quality of your activities. It is a sign of your readiness to engage in discussion on various aspects of the subject of the evaluation, as well as the ability to assess its strengths and weaknesses and the desire to develop and improve in cooperation with other stakeholders.

Tool 11: Dissemination Of Evaluation Findings Table

AFTERWORD

 


 

If you are reading these pages, you have probably read the whole toolkit and learned how to conduct evaluation of your projects and what it is for – especially if these are youth employment projects, and even more so if you are interested in assessing their real (net) impact.

Thanks to the participatory approach to evaluation, you acquire information that is vital for key decisions about the project and very important for the stakeholders, especially the donors. What is more, the beneficiaries are empowered, and the project team becomes better informed, coordinated and motivated. Finally, you are on the way towards a more relevant, effective, sustainable, efficient and simply better project!

To make it easier to prepare your evaluation – you can use templates of evaluation tools – see Attachments. And to make your understanding of the evaluation even deeper – check the online course and networking activities of the Youth Impact project – all available at the website www.youth-impact.eu.

Learn more

Interesting sources to learn more about evaluation available online:

REFERENCES

 


 

  • Babbie E. (1st edition 1975) The Practice of Social Research.
  • Babbie E. (1st edition 1999) The Basics of Social Research.
  • Bartosiewicz-Niziołek M., Marcinkowska-Bachlińska M., et al. (2014) Zaproszenie do ewaluacji, zaproszenie do rozwoju [Invitation to evaluation, invitation to development], KOWEZiU, Warszawa (pp. 69-85).
  • Bartosiewicz-Niziołek M. (2012) Ewaluacja programów i przedsięwzięć społecznych – katalog dobrych praktyk [Evaluation of social programmes and undertakings – a catalogue of good practices], ROPS, Kraków.
  • Bienias S., Gapski T., Jąkalski J. (2012) Ewaluacja. Poradnik dla pracowników administracji publicznej [Evaluation. A guide for public administration employees], Ministerstwo Rozwoju Regionalnego, Warszawa.
  • Blalock H. (1st edition 1960) Social Statistics.
  • Checkoway B., Richards-Schuster K. Participatory Evaluation with Young People, W.K. Kellogg Foundation.
  • Ferguson G. A., Takane Y. (1st edition 1971) Statistical Analysis in Psychology and Education.
  • Flick U. (1st edition 2007) Designing Qualitative Research.
  • Flick U. (1st edition 2007) Managing the Quality of Qualitative Research.
  • Gibbs G. R. (2009) Analyzing Qualitative Data.
  • Kloosterman P., Giebel K., Senyuva O. (2007) T-Kit 10: Educational Evaluation in Youth Work, Council of Europe Publishing.
  • Kvale S. (2007) Doing Interviews.
  • Lisowski G., Haman J., Jasiński M. (2008) Podstawy statystyki dla socjologów [Basics of statistics for sociologists], Warszawa.
  • Maziarz M., Piekot T., Poprawa M., et al. (2012) Jak napisać raport ewaluacyjny [How to write an evaluation report], Ministerstwo Rozwoju Regionalnego, Warszawa.
  • Maziarz M., Piekot T., Poprawa M., et al. (2012) Język raportów ewaluacyjnych [The language of evaluation reports], Ministerstwo Rozwoju Regionalnego, Warszawa.
  • Miles M. B., Huberman A. M. (1st edition 1983) Qualitative Data Analysis.
  • Nikodemska-Wołowik A. M. (1999) Jakościowe badania marketingowe [Qualitative marketing research], Polskie Wydawnictwo Ekonomiczne, Warszawa.
  • Peersman G. (2014) Overview: Data Collection and Analysis Methods in Impact Evaluation, Methodological Briefs: Impact Evaluation 10, UNICEF Office of Research, Florence.
  • Rapley T. (2007) Doing Conversation, Discourse and Document Analysis.
  • Rogers P. (2014) Overview: Strategies for Causal Attribution, Methodological Briefs: Impact Evaluation 6, UNICEF Office of Research, Florence.
  • Silverman D. (1st edition 1993) Interpreting Qualitative Data.
  • Wieczorkowska G., Kochański P., Eljaszuk M. (2005) Statystyka. Wprowadzenie do analizy danych sondażowych i eksperymentalnych [Statistics. An introduction to the analysis of survey and experimental data], Wydawnictwo Naukowe Scholar, Warszawa.
  • W.K. Kellogg Foundation (2004) Logic Model.
