YOUTH EMPLOYMENT EVALUATION TOOLKIT

Your leverage for better youth employment projects

 


 

Authors: Monika Bartosiewicz-Niziołek, Sławomir Nałęcz, Zofia Penza-Gabler, Ewa Pintera

 

INTRODUCTION

The purpose of this toolkit is to present practical tools supporting the evaluation of projects aimed at increasing the employment of young people (aged 15-24), including those who face difficulties in the transition from school education to work (NEETs).

The main recipients of this toolkit are NGOs and other entities that want to evaluate their projects in the above-mentioned area. Such evaluation may be aimed at:

  • Measurement of the project’s effectiveness in achieving project goals and results (outputs, outcomes),
  • Assessment of the usefulness of the project for its beneficiaries / participants and the sustainability of the achieved results,
  • Better adaptation of the project to the needs of its beneficiaries and the labour market,
  • Examination of the project impact on a wider group of people who did not participate directly in it (e.g. families, friends of the project beneficiaries),
  • Assessment of project efficiency in terms of resources engaged in the project and its effects.

This toolbox is supplementary material to the course “Towards better youth employment projects – learning course on evaluation”, available HERE. While the course offers knowledge and training in evaluation adjusted to your needs (basic or advanced level), the toolbox provides universal knowledge combined with practical instructions, tools and examples designed to develop evaluation skills and to support you in applying the knowledge acquired during the distance course. This is achieved, among others, by question sets, tables and tool templates that facilitate designing and planning an evaluation, gathering the necessary information, and then formulating conclusions and recommendations aimed at improving the projects carried out by your organisation.

The toolbox has been developed by the Jerzy Regulski Foundation in Support of Local Democracy in Poland, in cooperation with the Research Institute for Innovative and Preventive Job Design (FIAP e.V., Germany), Channel Crossings (Czech Republic), and PEDAL Consulting (Slovakia), within the framework of the Youth Impact project, financed by the EEA Financial Mechanism and the Norwegian Financial Mechanism. The project seeks to provide tools and services that improve the ability and capacity of implementers of Youth Employment and Entrepreneurship Support Actions to efficiently evaluate the impact of their activities. The project runs in the years 2019-2022.

The activities in this project are aimed at developing the evaluation competences of entities that support employment and entrepreneurship of young people.

GLOSSARY OF PROJECT TERMS

 

Activity (of the evaluated project) – actions aimed at a specific target group, which contribute to the achievement of the planned outputs and outcomes, and then to the achievement of the project objectives.

Example: Training 20 young mothers (who were unemployed at the start of the project and had to be supported by social welfare benefits) in dyeing fabrics in town X.

 

Generalisation – referring the findings obtained in the study of the sample to the entire population (i.e. also to units that have not participated in this research). Based on the results of the sample, we conclude – with a given level of probability – that the findings (characteristics / opinions) for the entire population are similar.
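
For illustration, generalising “with a given level of probability” is usually expressed as a confidence interval. Below is a minimal sketch (in Python, with hypothetical survey figures) of a 95% confidence interval for a sample proportion, using the normal approximation:

```python
from math import sqrt

# Hypothetical figures: in a random sample of 200 project participants,
# 120 (60%) report being employed one year after the project.
n = 200
p_hat = 120 / n                       # sample proportion (0.60)

z = 1.96                              # z-value for a 95% confidence level
se = sqrt(p_hat * (1 - p_hat) / n)    # standard error of the proportion
low, high = p_hat - z * se, p_hat + z * se

print(f"Sample estimate: {p_hat:.0%}")
print(f"95% confidence interval: {low:.1%} to {high:.1%}")
# Prints roughly 53.2% to 66.8% -- the range within which the population
# value is expected to lie with 95% probability.
```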

 

Impact – the effects of activities, outputs and outcomes of the project, contributing in the long term (alongside other possible projects / interventions and factors) to changes affecting a wider community than the direct recipients of the project.

Example: Improving the living conditions of children raised by women who found a job thanks to the professional competences acquired in the project.

 

Impact indicator – informs about the delayed effects of the project that go beyond its immediate recipients. These effects usually cover the social environment / community of the project beneficiaries and may result from the accumulation of various factors (including non-project activities).

Example: The percentage of project beneficiaries whose household did not have to be supported by social welfare benefits 18 months after the end of the project.

 

Logic matrix of the project – a table used to determine the methodology of measuring selected project elements such as output, outcome or impact. The matrix defines the indicators by which a given element will be measured, the measurement method, and assumptions / conditions of achieving the project’s effects (see chapter 2.1).

 

Logic model of change – a comprehensive tool for project planning and subsequent management of its implementation. It depicts the logic of intervention linking the individual elements of the project with cause-and-effect ties (see chapter 2.1).

 

Monitoring – the ongoing collection, analysis and documentation of information during project implementation, concerning progress in relation to the planned schedule of activities and budget.

 

NEET (not in employment, education or training) – the name of a group, mainly of young people, who remain outside employment and education, i.e. who do not study, work or train for an occupation, for various reasons (discouragement, life crisis, disability, parental or family responsibilities).

Objective (general) – expected state or effects of activities conducted within a project, planned to be achieved within a specified time.

Example: Increasing employment by 2022 among young mothers (who were unemployed in 2020 and had to be supported by social welfare benefits) in town X.

 

On the way to achieving the general objective you can have specific objectives (purposes). A specific objective is a planned state that will be achieved as a result of the implementation of certain activities. It should be consistent with the general objective and contribute to its achievement.

Example: Increasing by the end of 2021 the professional competences of young mothers (who were unemployed in 2020 and had to be supported by social welfare benefits) in town X to the level expected by employers in this town.

 

Outcome – direct and immediate effects / changes occurring among the beneficiaries as a result of the implementation of specific project activities.

Example: The growth of project beneficiaries’ competences related to dyeing fabrics.

 

Outcome indicator – informs about the degree of the achieved changes related to the project beneficiaries as a result of their participation in project activities and the use of outputs produced at a particular stage of project implementation.

Example: The number of beneficiaries who have acquired the professional skills of dyeing fabrics.

 

Output – a short-term effect of a particular activity in a material, countable form, e.g. a thing, an object, or an event (a service delivered). Outputs may be goods or services transferred to the project recipients, which are to contribute to the achievement of the planned outcomes.

Example: Training materials, certificates confirming the acquisition of professional qualifications in the field of dyeing fabrics by project beneficiaries.

 

Output indicator – informs about the implementation of activities that resulted in measurable products.

Examples: The number of issued certificates confirming the acquisition of specific professional competences, the number of people who have achieved a certain level of these competences, an increase in the level of social competences according to the selected test, the number of cover letters and CVs prepared by the training participants, the number of textbooks prepared.

 

Population – the group of individuals (e.g. specific people, organisations, companies, schools, institutions) that are the object of the researcher’s interest.

 

Project (intervention) – a set of activities aimed at producing the intended outputs and outcomes, which, when used by the project’s target group, should bring the planned objectives and impact.

 

Representative sample – a sample that accurately reflects / represents the studied population and makes it possible to estimate its features through generalisation.

 

Sample selection – selecting from the population cases that will form the sample (smaller part of the population). It is conducted in a specific way (random or non-random) based on the sampling frame, i.e. a compilation (list) of all units forming the population from which the sample is drawn.
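
As a simple illustration, drawing a simple random sample from a sampling frame can be done as in the following Python sketch (the frame and the sample size are hypothetical):

```python
import random

# Hypothetical sampling frame: a list of all 500 project beneficiaries.
sampling_frame = [f"beneficiary_{i:03d}" for i in range(1, 501)]

random.seed(42)  # fixed seed so the draw can be reproduced and audited
sample = random.sample(sampling_frame, k=50)  # simple random sample of 50 units

print(sample[:5])  # first few selected units
```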

I. THE BENEFITS OF EVALUATION

 

There are many ways to understand evaluation. According to the approach applied in the Youth Impact project, the main goal of evaluation is to assess the value of project effects in order to improve them. This assessment is based on evidence about the change caused by the project, collected using social science methodology.

Our approach largely refers to impact evaluation in its broad sense (later we use the term impact-focused evaluation to underline that we want to embrace not only experimental and quasi-experimental designs). It is an evidence-based reflection on the real (net) effects of a project. It allows you to understand the factors influencing the ongoing and delayed changes and focus on the sustainability of the achieved outcomes as well as the impact of the project that goes beyond its direct participants. This approach to evaluation allows for the formulation of recommendations supporting project management, which contribute to the effective and efficient implementation of its objectives, as well as the organisation’s mission.

Our approach is also a participatory one, taking special care about the needs of various stakeholders and engaging them in planning and other stages of the evaluation.

Such an approach to evaluation makes it possible to determine the value of a particular project and to understand the reasons for its successes and failures. It is also a good management tool for organisations focused on social mission and other “learning” institutions.

 

BENEFITS OF AN EVALUATION DONE WELL:

  • It allows you to predict difficulties before the start of your project (ex-ante evaluation) or notice problems at every stage of its implementation (ongoing or mid-term evaluation), and also allows you to plan actions minimising identified risks.
  • It gives advice on how to improve an ongoing or completed project to better meet the needs of its recipients, achieve more useful and durable outcomes, have a wider impact and fulfil the planned objectives using fewer resources.
  • It allows you to assess to what extent the expected effects of the project were really caused by the project activities*. Moreover, it makes it easier to decide whether a particular project is worth repeating, disseminating, or could be adapted to a different target group.
  • It increases the motivation of employees – involving the project team in the evaluation (especially at the design stage and when discussing the evaluation findings) increases the sense of agency and emphasises the relationship between the work performed and the planned goals, the organisation’s mission and employees’ own values.
  • It increases the competences of employees – from issues related to project management to knowledge of the mechanisms of the changes caused by this project.
  • It increases the level of trust and cooperation with project partners (also in future projects), thanks to taking into account the perspective of external stakeholders.
  • It makes it possible to demonstrate the achieved results and improves cooperation with grant-giving institutions and sponsors, encouraging them to finance subsequent projects.
Example: When applying for a grant or justifying the need for a project, you can quote the evaluation findings concerning a previous, similar project. Providing reliable data may help you convince funders that your project is worth funding.
  • It serves to promote your organisation.
Example: Evaluation findings, including case studies, can be used on social media to promote the organisation’s activities. These could be stories of young people who, thanks to your support, acquired new competences and then found a satisfying job or successfully run their own business.

 

Overall, evaluation has many benefits. Introduced into everyday work, it can be a very useful support for managing an organisation – strengthening credibility and improving its image, educating and motivating staff, raising funds by showing evidence of project impact, and above all, supporting the effective fulfilment of the organisation’s mission.


* This possibility is provided by impact evaluation, which is described in chapter 2.4.

II. PREPARING FOR THE EVALUATION

 

“You can’t do ‘good’ evaluation if you have a poorly planned program.” (Beverly Anderson Parsons, 1999)

 

2.1. What do you need to know about the project to plan its evaluation?

In the toolkit, we concentrate on impact-focused evaluation. We present practical ways of conducting such evaluation, focusing primarily on the effects of project activities in terms of the intended change. The subject of our interest is the effects of project activities (outputs, outcomes, impact) and their compliance with the project theory of change (or project theory). The project theory defines the concept of the intended change and the plan of the project, including its objectives, activities, expected outputs, outcomes and impact, the way in which they will be measured, and the resources needed to achieve these effects.

The basic element of the project theory is the logic model of change, which compiles information on what the organisation running the project needs to accumulate (inputs / resources), the work it needs to do (project activities), and the effects it intends to achieve. The logic model of change for a given project is developed according to the following scheme.

 

Diagram 1: Basic logic model of change

The methods of measuring the project outcomes and the related assumptions are sometimes specified in a separate table called the project logic matrix. The logic model and logic matrix should be part of the project documentation.

In practice, it happens that the logic matrix or even the logic model of change has not been developed, or is very selective. A lack of assumptions indicating how you define the success of the project makes it impossible to evaluate it, and thus to verify whether the planned change took place and whether it occurred as a result of the project activities.

 

What to do if there is no logic model of change in the project documentation?

In such a situation, it is necessary to recreate the logic of change behind the project, e.g. based on interviews with the management and project staff, as well as already existing documents such as strategy / project implementation plan, justification for its implementation, application for co-financing, partnership agreement, etc. The following table may help you to reconstruct the logic of the project.

Tool 1: (Re)Construction of Project Logic

The above tabulation of the project logic allows you to reflect on ways of demonstrating the level of achieved effects (outputs, outcomes and impact). This goal is served by defining the indicators by which you will measure the progress of the project. An indicator is an observable attribute (feature) that enables the phenomenon to be measured. Each indicator has a measure (quantitative or qualitative) which informs about the degree / intensity of the occurrence of this phenomenon. In order to measure the change that has occurred as a result of the project implementation, you should determine the values (level) of a given indicator at the beginning and at the end of the project, i.e. the baseline value and the final value. It is also good to know the minimum required final value of the indicator, if such a value was defined at the beginning of the project. More information on indicators can be found in the online course (Module 3).
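
As a minimal illustration (all figures hypothetical), a single indicator’s baseline value, final value and minimum required target can be recorded and compared as in the following Python sketch:

```python
# Hypothetical indicator: percentage of beneficiaries in employment,
# measured at the start (baseline) and at the end (final) of the project.
indicator = {
    "name": "share of beneficiaries in employment (%)",
    "baseline": 10.0,   # measured before project activities started
    "final": 35.0,      # measured after project completion
    "target": 30.0,     # minimum required final value, if one was defined
}

change = indicator["final"] - indicator["baseline"]
target_met = indicator["final"] >= indicator["target"]

print(f"{indicator['name']}: change of {change:+.1f} percentage points; "
      f"target {'met' if target_met else 'not met'}")
```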

Tool 2: Table Of Indicators Of Project Effects

 

More examples of setting targets for indicators in youth employment projects can be found in the Guide on Measuring Decent Jobs for Youth: Monitoring, evaluation and learning in labour market programmes, Note 3 – Establishing a monitoring system, pp. 6-9.

 

2.2. When to start developing an evaluation concept and plan?

It is worth developing the concept of evaluation before starting the project or even during its planning, because it allows you:

  • To initiate an in-depth reflection on the logic and coherence of project activities, their translation into project objectives, as well as factors facilitating and hindering their achievement;
  • To plan in advance the collection of information (data) that enables evaluation questions to be answered (e.g. without the baseline measurement of the level of knowledge and skills of the recipients of the training (before this activity), it will be impossible to reliably demonstrate the change that has been obtained, i.e. an increase in competences, which should take place as a result of this training);
  • To secure appropriate funds for conducting the evaluation and to include in the project schedule the activities that will help to collect relevant data, analyse them and report on them;
  • To plan the collection of information in the most efficient way (the cheapest, fastest, easiest) during or after the implementation of project activities.

It is worth remembering that evaluation is a multi-stage process that must be designed and planned well, and then implemented step by step.

Stages of the evaluation process

  1. Diagnosis of evaluation needs
  2. Conceptualisation and planning
  3. Information collection – research implementation
  4. Data analysis and inference
  5. Reporting
  6. Using evaluation results – implementation of recommendations

 

2.3. How to diagnose the evaluation needs of the project stakeholders

Conceptualisation and planning of the evaluation should not start without identifying who needs the information, conclusions and recommendations from the evaluation, and for what purpose. It is good to begin the diagnosis of evaluation needs with the stakeholders of the project to be evaluated.

Project stakeholders are people / entities (institutions, organisations) involved in various ways in the implementation of a particular project, e.g. its beneficiaries, project team, staff implementing project activities (e.g. trainers, psychologists, career advisors), project partners (cooperating organisations or institutions), sponsors / funders, etc.

The participation of project stakeholders in the evaluation is very important as they are potential allies of the evaluator. They can support the entire evaluation process, including the implementation of recommendations that improve the project. Thanks to the involvement of various stakeholders in the evaluation activities, it is possible not only to improve communication and cooperation with partners, beneficiaries and project staff, but also to convince funders to invest in the project currently being implemented or its next edition. If the stakeholders are interested in the project evaluation then conducting the evaluation in a participatory manner – involving the stakeholders in the entire evaluation process, starting with the diagnosis of evaluation needs – should be much easier.

The best way to diagnose evaluation needs while ensuring a high level of stakeholder participation is to conduct a workshop / group interview with representatives of all entities (organisations, institutions) and groups of people involved in a particular project.

If the recipients of the project are young people (e.g. NEETs) or another group who may have concerns about expressing their opinions in public, you should first hold a separate meeting with these beneficiaries and then invite their representatives to participate in a workshop with the other stakeholders. This type of workshop with young people or other project recipients with a relatively weak social position should be based on values that strengthen the agency of the project beneficiaries (see the example from Participatory evaluation with young people, pp. 7-8).

 

Example Of Workshop With Stakeholders

 

The information gathered during the workshop with the participation of stakeholders should be used to prepare the evaluation concept and plan (see chapter 2.4). Therefore, it is worth summarising the key findings of the diagnosis of stakeholder needs in the two tables below.

Tool 3: Summary Of The Diagnosis Of The Project Stakeholders’ Evaluation Needs

 

Information on the expectations of individual stakeholders regarding the form of presentation and the ways of using evaluation results will be useful when planning their dissemination (see chapter 6.3).

 

2.4. How to design and plan the evaluation

The information collected during the workshop with the stakeholders will be used to prepare the concept and plan of the evaluation. The concept of evaluation, i.e. an idea on how to carry it out, can be prepared in 3 steps.

Diagram 2: Evaluation Concept

The first and second steps include the following:

  • Subject of evaluation – what do you want to evaluate (e.g. which project or programme),
  • Scope of evaluation – what part of the project will be included in the evaluation, e.g. the entire project or selected elements – particular activities, effects,
  • Purpose(s) of the evaluation – what are you conducting it for, what will you use the evaluation findings for,
  • Type of evaluation – at what stage of the project implementation you will conduct the evaluation: before the commencement of project activities (ex-ante evaluation), during their implementation (mid-term or on-going evaluation), or after completing the project (ex-post evaluation),
  • Evaluation criteria – features indicating in what respect the project is being evaluated (e.g. relevance, effectiveness, efficiency, utility, impact, sustainability),
  • Evaluation questions – generally formulated questions regarding issues that are important in terms of assessing the value and quality of the evaluated project,
  • Evaluator – who will perform the evaluation, e.g. a team implementing the project (self-evaluation), an evaluation specialist employed by the organisation implementing the project (internal evaluation) or an external entity contracted by it (external evaluation)*.

*The strengths and weaknesses of different types of evaluation selected due to the location of the evaluator are discussed in the online course (Module 2).

You can present this information in a table showing your evaluation concept. An example of such a table and its application to a specific project are presented below.

Tool 4: Evaluation Concept Table

 

The third step of developing an evaluation concept requires knowledge of the various research methods and tools presented in Chapter III – Data Collection. For this reason, the part of evaluation planning related to the methodology of collecting data is presented in section 3.3 (an example of this step of evaluation design is presented in Tool 6).

Information on the availability of the necessary data, as well as the possibility of obtaining support from respective stakeholders, will be used when planning the evaluation process and estimating the resources necessary to carry it out. The evaluation plan should include such elements as: its schedule (with respective stages), resources necessary to conduct the evaluation (human, time, financial, information), as well as the planned form(s) of the evaluation report.

You can present this information in an evaluation planning table. An example of such a table together with how it is applied to a specific project is presented below.

Tool 5: Evaluation Planning Table

 

As you can see in the table above, information is one of the key assets needed to conduct an evaluation, and there are plenty of data sources which can be useful for this purpose. In the context of youth employment projects, one of the most important areas of intended progress is general and vocational competences. The default source of information on the initial and final level of such skills among project beneficiaries should be the trainers of these competences. Therefore, you should cooperate with the trainers on gathering and using data concerning the competence levels before and after the training. The measurement should use multiple perspectives on the skills of trainees (the trainer’s perspective, the trainee’s self-assessment and a psychometric test) and be coherent and relevant to the content of the training. You can find an example of such tool sets in the attachments to this Toolkit. It concerns the 8 key competences “needed for personal fulfilment and development, active citizenship, social inclusion and employment” mentioned in Recommendation 2006/962/EC of the European Parliament and of the Council on key competences for lifelong learning*.


*The Recommendation 2006/962/EC of the European Parliament and of the Council of 18 December 2006 on key competences for lifelong learning refers to the following skills: 1) communication in the mother tongue, 2) communication in foreign languages, 3) mathematical competence and basic competences in science and technology, 4) digital competence, 5) learning to learn, 6) social and civic competences, 7) sense of initiative and entrepreneurship, 8) cultural awareness and expression.

 

2.5. How to design impact evaluation

The key distinguishing feature of impact evaluation is that the assessment of project effects takes into account not only the impact of the activities carried out in the project and the outputs produced, but also the influence of external (non-project) factors. To evaluate the real (net) impact of the project, it is necessary to plan and conduct the evaluation in a way that makes it possible to determine whether the implementation of the project caused the intended change, and to what extent that change was influenced by non-project factors.

Conducting an impact evaluation allows you to collect various types of information that are very useful for project development:

1) data on the actual impact of the project on achieving the expected change is the key information for deciding whether to repeat, duplicate, improve or discontinue the project because:

a) non-project factors could have contributed to the change intended in the project, so that the (net) impact of the evaluated project may be lower than indicated by the difference between the final value of the outcome indicator and its baseline value (measurement at the beginning of the project).

b) external factors could counteract the change expected in the project, so that the (net) impact of the evaluated project may be greater than the difference between the final and the baseline value of the outcome indicator,

2) information on the diversity and mechanisms of the impact of individual elements of the project on achieving the expected change is very helpful in improving the project,

3) identifying major external factors and the mechanisms of their impact on the intended change can be used to modify project activities so that they better concur with the processes supporting the change and better cope with opposing factors.

Depending on which of these issues is a priority in the evaluation of a particular project, and also depending on the feasibility of obtaining relevant data, different models (designs) of impact evaluation are used, along with data collection methods adapted to them.

Table 1. Different design approaches for impact evaluation.

Source: Emily Woodhouse, Emiel de Lange, Eleanor J Milner-Gulland. Evaluating the impacts of conservation interventions on human wellbeing: Guidance for practitioners.

Experimental and quasi-experimental evaluation designs are used to determine what portion of the intended change in a project can be attributed to the project activities (net impact). The measure of the impact of project activities is the difference between the measurement of the indicator before the start and after the end of the project in the group of its recipients (the change in the test group participating in project activities), after adjusting it for the impact of non-project factors (a computational sketch follows the list below). The impact of non-project factors is estimated on the basis of measuring the change of the outcome indicator in a group of people who did not participate in the project and are as similar as possible to the project recipients.

  • In experimental designs (also called RCTs – Randomised Controlled Trials), people are randomly assigned either to the group of project beneficiaries (test group) or to the group not covered by the project (control group). Random selection into both groups helps to ensure that the two groups do not differ from each other*. Thus, changes in the measured indicators in the control group can be attributed only to external factors, and in the test group – to the combined influence of external factors and the project’s activities.
  • In quasi-experimental designs, there is no random selection of groups. For a test group which took part in the evaluated project, a control group is selected using non-random methods that nevertheless make it as similar to the test group as possible; it performs the same function as the control group in experimental designs.

*When the test group or control group is small, stratified random selection should be used (instead of simple random selection) to make sure that the two groups have a similar structure according to features which can affect the intended outcome of the project (e.g. the structure of educational attainment should be similar in the control and test groups, otherwise the more educated group may make better progress in achieving the skills which are to be developed in the project under evaluation).
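
In both designs, the net impact is typically estimated as a “difference in differences”: the change observed in the test group minus the change observed in the control group. A minimal Python sketch with hypothetical indicator values:

```python
# Hypothetical outcome indicator: share of group members in employment,
# measured at the baseline and at the follow-up in both groups.
test_before, test_after = 0.10, 0.40        # project beneficiaries (test group)
control_before, control_after = 0.10, 0.25  # similar non-participants (control group)

gross_change = test_after - test_before             # 0.30: raw change in test group
external_change = control_after - control_before    # 0.15: non-project factors only
net_impact = gross_change - external_change         # 0.15: attributable to the project

print(f"Gross change in the test group: {gross_change:.0%}")
print(f"Change due to external factors (control group): {external_change:.0%}")
print(f"Estimated net impact of the project: {net_impact:.0%}")
```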

 

In order to apply experimental or quasi-experimental designs, evaluation activities must be coordinated with the evaluated project activities and therefore need to be planned before they are implemented. For example, when you expect more candidates than places for project beneficiaries, or when the project will be implemented in several editions and you can organise joint recruitment, you can assign people to groups using random sampling. This way you get a randomly selected test group (to be immediately involved in the project activities) and a control group (the people not selected for the current edition of the project). Immediately after selecting the groups, the baseline measurement should be conducted in both of them (and the final measurement in both groups after the project has been completed).
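
A minimal sketch of such random assignment during joint recruitment (the numbers of candidates and places are hypothetical):

```python
import random

# Hypothetical joint recruitment: 60 eligible candidates, 30 project places.
candidates = [f"candidate_{i:02d}" for i in range(1, 61)]

random.seed(7)              # fixed seed so the assignment can be audited
random.shuffle(candidates)  # random ordering removes selection bias

test_group = candidates[:30]     # invited to the current project edition
control_group = candidates[30:]  # not selected this time; measured but not treated

# The baseline measurement should follow immediately in BOTH groups.
print(len(test_group), "in test group,", len(control_group), "in control group")
```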

If the beneficiaries of your project are chosen by an external institution (e.g. Labour Office), it is also worth checking what selection procedure is used there. If this procedure gives the opportunity to select a control group or comparison group in which the project outcome indicator can be measured, verify it and plan the measurement in this group at more or less the same time as it is carried out in the evaluated project.

An important aspect of impact evaluation is controlling for what is known as the spillover effect, i.e. the spread of the impact of project activities outside the test group, in particular to people in the control or comparison group. The risk of the spillover effect is greater the more contact the recipients of the evaluated project have with people from the control or comparison group. Another aspect influencing the scope of the spillover effect is the level of demand for the solutions provided by the evaluated project.

Planning and interpreting an impact-focused evaluation requires using the project theory to examine the consistency of the evaluation findings with the project logic (of change) and to verify the impact of alternative factors. Examining the consistency of facts with the project logic focuses on identifying evidence and data that confirm the assumed cause-and-effect relationships. In this approach, it is crucial to plan as early as possible what kind of data should be collected during the project in order to verify:

– the cause-and-effect relationship between activities, outputs, intermediate and final effects (outcomes, impacts) that make up the project logic of change,

– achievement of successive stages in the cause-effect chain of intermediate effects leading to the outcomes measured by the final indicator (the milestones).

Assessment of the impact of alternative factors is based on similarly planning for, and then verifying, factors of change other than the project activities that could have produced the results expected from the project.

If the evaluated project is a part of a larger programme carried out in different locations or by different organisations, this may provide an opportunity to obtain comparative data that will be used in the impact evaluation based on case study analyses. To use the case-based evaluation design, you should collect information not only about the outcome indicator that you measure in the evaluated project, but also about all important factors that may affect the value of this indicator. The set of such factors should be determined on the basis of the project theory, taking into account the different elements which may influence the intended change.

It is worth remembering that in this model it is possible to use information about projects implemented in the past. Regardless of where the analysed cases come from, it is important to obtain a predetermined set of information from them. The final analysis is based on a table that summarises the data from all analysed cases concerning the occurrence of the factors that may affect the intended change of the outcome indicator and, of course, the outcome indicator itself.

Table for summing up the findings from case study analysis – a practical example

In the table above you can see summarised information on 4 cases where the outcome (having a job or being in education or training 1 year after project completion) was monitored against three factors. Two of them were different project stimuli (extensive training in social competences and vocational training), while the third was external (supported employment for six months right after the end of the project). The analysis showed that it was the extensive training in social competences that caused the intended outcome.
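
A minimal sketch of how such a cross-case comparison can be checked programmatically; the pattern of factors below is hypothetical and merely mirrors the example above:

```python
# Hypothetical summary of four cases: which project stimuli / external factors
# occurred, and whether the intended outcome was achieved one year later.
cases = {
    "case_A": {"social_training": True,  "vocational_training": True,  "supported_employment": False, "outcome": True},
    "case_B": {"social_training": True,  "vocational_training": False, "supported_employment": True,  "outcome": True},
    "case_C": {"social_training": False, "vocational_training": True,  "supported_employment": True,  "outcome": False},
    "case_D": {"social_training": False, "vocational_training": False, "supported_employment": False, "outcome": False},
}

# A factor remains a candidate cause only if its presence always coincides
# with the outcome and its absence with the lack of the outcome.
for factor in ("social_training", "vocational_training", "supported_employment"):
    consistent = all(c[factor] == c["outcome"] for c in cases.values())
    print(f"{factor}: {'consistent with the outcome' if consistent else 'ruled out'}")
```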

 

Participatory design is an underrated but popular model of impact-focused evaluation. It does not guarantee as much reliability and precision as experimental or quasi-experimental designs, nor is it as convincing as a rigorous case study analysis, but it can still be useful, especially in small projects. In participatory design, you refer to the perceptions of the participants in the evaluated project and, on the basis of the data obtained from them, you evaluate the impact of the project. The methodology of collecting data is therefore of great importance, because project beneficiaries tend to adjust their opinions to what they think the researcher might want to hear, especially if data collection is conducted by someone from the project staff.

  • One of the participatory evaluation designs is called Reflexive counterfactuals. Its advantage is that it can be used after the end of the project. On the other hand, it is exposed to the previously described risks, such as influence from the researcher. As part of reflexive counterfactuals, the beneficiaries are asked to compare their current situation with their situation before they participated in the project and to describe what has changed for better and for worse. Then, they rate the relative importance of particular benefits and costs to select those considered the most important. Using different research techniques, it is also possible to ask about the causes of particular changes and find out which of them were associated with the project.
  • Another technique for participatory impact analysis is MSC (Most Significant Change). It is based on the generation and in-depth analysis of the most significant stories of change in the lives of project beneficiaries. These stories of change are observed and noted by various project stakeholders (including the beneficiaries themselves). The properties of this research technique allow it to be used after the end of the project.

Finally, the possibility of conducting an impact evaluation based on statistical methods should also be mentioned. It is based on the analysis of the correlation (coexistence) of the outcome indicator with the activities undertaken in the evaluated project*. Such analyses are performed on large data sets, which makes this type of evaluation of little use for organisations running projects for a relatively small group of recipients**.
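
For orientation only, the regression approach described in the footnote might look like the following sketch, using the pandas and statsmodels libraries and a tiny invented dataset (real analyses require far larger samples and careful model specification):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: one row per person; 'participated' marks project beneficiaries,
# 'employed' is the outcome, while 'age' and 'education_years' are confounding
# factors to be statistically controlled.
df = pd.DataFrame({
    "employed":        [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "participated":    [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "age":             [19, 22, 20, 24, 18, 21, 23, 19, 20, 22],
    "education_years": [11, 12, 10, 13, 9, 12, 11, 10, 12, 11],
})

# Linear probability model: the coefficient on 'participated' estimates the
# association between participation and employment, net of the controls.
model = smf.ols("employed ~ participated + age + education_years", data=df).fit()
print(model.params["participated"])
```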

More information on impact-focused evaluation can be found in the online course (Module 3).


* In such analyses, the basic method is regression, in which the strength of the relationship between the outcome indicator and the indicators of activities carried out within the evaluated project is examined, with statistical control for (exclusion of) the impact of confounding factors.
** The problem that hinders the use of statistical methods of impact evaluation by small and medium-sized organisations is, apart from the scale of the projects, the need to use advanced statistical software and qualified analysts.

III. DATA COLLECTION

 

3.1. What are the major types of evaluation research methods?

In order to estimate the value and quality of the project in relation to the chosen criteria and answer the evaluation questions, you should correctly collect the necessary information. Research methods and tools serve this purpose. Research methods are specific ways of collecting information – qualitative or quantitative – with the use of specially developed tools, such as interview scenarios, observation sheets or questionnaires. Let’s look at the differences between these methods and research tools.

Qualitative methods enable the collection of data in an in-depth and flexible manner, but they do not allow you to assess the scale of the studied phenomena, as these methods cover only a small number of people from the groups involved in the project (e.g. selected recipients). By contrast, quantitative methods are used for larger groups, consisting of at least several dozen people. In the case of more numerous groups (e.g. more than 400-500 people), these methods enable conclusions drawn from a survey of a representative, randomly selected sample to be generalised to the entire population, i.e. the community that is of interest to the researcher, including people who did not participate directly in the particular study. Such generalisation requires that the sample of people subjected to the study be representative, i.e. as similar as possible, in various socio-demographic characteristics, to the population from which it was selected.
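
The reference to samples of several hundred people reflects how the margin of error shrinks as the sample grows. A minimal Python sketch (assuming a 95% confidence level, the worst-case proportion of 0.5 and the normal approximation):

```python
from math import sqrt

z, p = 1.96, 0.5  # 95% confidence level; p = 0.5 gives the widest interval
for n in (50, 100, 200, 400, 1000):
    margin_of_error = z * sqrt(p * (1 - p) / n)
    print(f"n = {n:4d}: margin of error ±{margin_of_error:.1%}")
# n = 400 already gives roughly ±4.9 percentage points.
```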

Comparison Of Qualitative And Quantitative Methods Of Evaluation Research

 

Both types of methods have strengths and weaknesses, therefore you should always use both qualitative and quantitative methods in an evaluation study. This approach is in line with the triangulation principle, aimed at ensuring the high quality of the information collected. Triangulation means using various sources of information, types of collected data and analytical techniques, theories explaining the identified relationships / mechanisms, as well as people conducting the evaluation (whose competences should complement each other). By providing diversity of these elements, triangulation enables:

  • comprehensive knowledge and understanding of the studied object,
  • taking into account various points of view and aspects of the phenomenon studied,
  • supplementing and deepening the collected data,
  • verification of collected information,
  • increasing the objectivity of formulated conclusions.

 

3.2. What methods and tools are typically used in evaluation research?

To facilitate the choice of methods and tools most appropriate for a particular evaluation, below are the characteristics of the most popular of them:

  1. Qualitative methods
    • desk research,
    • individual in-depth interviews (IDI),
    • focus group interviews (FGI),
    • observation,
    • case study.
  2. Quantitative methods (surveys)
    • surveys conducted without the participation of an interviewer – self-administered paper surveys, computer-assisted web interviews / online surveys (CAWI), central location (surveying all respondents simultaneously),
    • questionnaire interviews conducted with the support of a pollster – paper and pen interviews (PAPI), computer-assisted personal interviews (CAPI) and computer-assisted telephone interviews (CATI).
  3. Active / workshop methods (mixed, i.e. qualitative and quantitative).

 

3.2.1. DESK RESEARCH

In the case of desk research existing data is used, i.e. data that was generated regardless of the actions taken by the evaluator.

The existing data includes internal data (generated for the needs of the evaluated project) and external data:

  • Internal data is information created during the preparation and implementation of project activities (e.g. project application, training scenarios, attendance lists, contracts, photos, videos and materials about the project posted on the website, posts and responses on social media). In the case of training projects for young people looking for a job, these may also be the results of measuring the competences of the beneficiaries at the beginning and at the end of participation in the training (knowledge tests, skills tests, attitudes tests, etc.)
  • External data is information that may relate to the studied phenomenon, processes or target group, but has been collected independently of the evaluated project (e.g. statistics, data repositories, reports, articles, books, videos, and other materials available on the Internet). In the case of the evaluation of employment projects, it is worth using information on similar projects, as well as data available to labour offices, social insurance institutions, national statistical offices, regarding the employment of young people living in a particular town.

Documentation analysis is the basic method of collecting information on a given project, also providing some knowledge about the needs of its recipients and the context of the evaluated project.

 

CONDITIONS OF APPLICATION:

Public institutions provide administrative data in accordance with the principle of transparency in the operation of public institutions and civic participation (open government concept). However, it is important to assess the data reliability and accuracy based on the methodological information provided in the source documentation.

 

ADVANTAGES:

  • accessibility (especially regarding information available on the internet),
  • large variety (you can use any data / materials related to the conducted evaluation),
  • no costs – most documents and data are available free of charge,
  • no evaluator’s effect on data in the case of external data.

DISADVANTAGES:

  • different levels of data credibility – you need to take into account the credibility of the source and the context of data acquisition (under what conditions, who collected and analysed the data and why),
  • restrictions on the access and use of internal information due to the protection of personal data, copyright and property rights.


3.2.2. INDIVIDUAL IN-DEPTH INTERVIEW (IDI)

An individual interview takes the form of a direct conversation between the interviewer and the respondent, usually conducted using a scenario. The interview allows you to obtain extensive, insightful and in-depth information, get to know the opinions, experiences, interpretations and motives of the interviewee’s behaviour, examine facts from the interviewee’s perspective, and gain a better understanding of their views.

IMPORTANT TIP

The language of the interview should be adapted to the respondent. In interviews (especially with young people), use simple language and avoid specialist vocabulary (e.g. project jargon) that may cause misunderstanding of the questions asked and intimidate the interviewees.

 

CONDITIONS OF APPLICATION: Individual interviews should be conducted in quiet rooms that guarantee discretion. Interview recording is a common practice, but the respondent does not always agree – in such cases the researcher should take notes during the interview and complete them immediately after the meeting. It is recommended that the interview be conducted by an external expert to avoid situations in which the interviewee feels uncomfortable expressing honest opinions.

 

ADVANTAGES:

  • the possibility to discuss complex and detailed issues,
  • better understanding of the interviewee’s point of view (“getting into his/her shoes”),
  • getting to know facts in the situational context,
  • flexibility – the possibility to adapt to the interviewee and to ask additional questions not included in the scenario.

DISADVANTAGES:

  • unwillingness of some interviewees to express honest opinions due to lack of anonymity,
  • the impact of the interviewee’s personality traits on the findings obtained, e.g. difficulty in obtaining information from people who are taciturn, shy or introverted.

RESEARCH TOOL: the interview may be supported by an interview scenario, containing a list of questions or issues to be discussed. The interviewer can change the order of questions or add some questions during an interview if it is needed to better understand the issue.

Example Of IDI Scenario For The Project Team

Individual IDI Scenario

 

3.2.3. FOCUS GROUP INTERVIEW (FGI)

A focus group is a conversation between about 6-8 people, supported by a moderator who gives the group issues for discussion and facilitates its course. FGI participants are selected according to specific criteria set by the researcher, including their knowledge of the studied issues.

IMPORTANT

In the case of young people, the discussion should be divided into shorter forms, involving all the participants, so that they do not get bored too quickly. It is worth using multimedia tools, elements of gamification or non-standard solutions, e.g. a paper cube with questions, thrown by the participants themselves. It is helpful to write down a group’s opinions on a flipchart and record the group discussion.

 

CONDITIONS OF APPLICATION: The basic condition for the success of a group interview is correctly selecting people with specific information that they are ready to share. It is important to guarantee that the participants are comfortable by organising the interview in a quiet room of the right size with comfortable seating, a large oval / square table and a flip chart.

 

ADVANTAGES:

  • learning about different points of view, taking into account different opinions,
  • mutual verification and supplementation of information about the facts discussed by different persons,
  • the opportunity to observe interactions between participants,
  • obtaining relevant information from several people in a relatively short time.

DISADVANTAGES:

  • dynamics of group processes, including pressure on group consensus / cohesion, may lead to minority opinions not being disclosed, e.g. due to the group being dominated by a natural peer group leader,
  • risk of conflicts or bad interpersonal relations being transferred into the group, reducing the effectiveness of the research and the reliability of the findings obtained,
  • organisational difficulties (the need to gather a group of people at a particular place and time and to provide a properly equipped room)*.

RESEARCH TOOL: the tool used by the moderator for this method is an FGI scenario, which includes the principles of group discussion, specific issues / questions and guidelines regarding various forms of activity in which the moderator is to involve the participants.

FGI Scenario


*Both IDIs and FGIs can be conducted by remote means using online communicators.

 

3.2.4. OBSERVATION

This method is based on careful observation of, and listening to, the studied objects and situations (phenomena, events). The observation may be participant, partially participant or non-participant, depending on the degree of involvement of the researcher, who may act as an active participant in the events he or she observes or as an external, uninvolved observer. The observation can be carried out in an overt, partially overt or covert way*, i.e. all participants of the event may know that they are being watched, only selected persons (e.g. the trainer and / or training organiser) may know, or only the observer knows.

 

CONDITIONS OF APPLICATION: if the observation is non-participant, the observer should not come into contact / relations with the people being observed as this carries the risk of affecting the course of the observed events and behaviours.

 

ADVANTAGES:

  • providing information about a particular event / process during its course,
  • reporting facts without their interpretation by the participants (examination of actual behaviour, not declarations),
  • facilitating the interpretation of investigated events,
  • the opportunity to learn about phenomena usually hidden or unnoticeable or that people are reluctant to discuss.

DISADVANTAGES:

  • possible influence of the researcher on the course of events (the respondents’ awareness that they are being observed may change their behaviour),
  • limited observation range and difficulty in accessing all events,
  • the risk of subjectivity (the researcher may fall back on stereotypes, or perceive and interpret events in favour of the observed group).

RESEARCH TOOL: The observation may be conducted using a research tool called an observation sheet. Its use focuses the observer’s attention on selected issues and enables the recording of important information (e.g. the behaviour of people participating in the observed events), which may be not only qualitative but also quantitative (a checklist).

Training Observation Sheet


* With regard to evaluation studies, we do not recommend covert observation, i.e. one that is not known to the people who are its subject.

 

3.2.5. CASE STUDY

This is an in-depth analysis of the studied issue using information from different sources and collected by various methods. Its findings can be presented in a narrative form. The analysed “case” could be a person, group of people, specific activities, a project or a group of projects.

The case study is used to:

  • get to know thoroughly and understand a particular phenomenon along with its context, causes and consequences,
  • illustrate a specific issue using a realistic example with a detailed description,
  • generate hypotheses for further research,
  • present and analyse best / worst practices to show what is worth doing and what should not be done.

CONDITIONS OF APPLICATION: This method requires time to collect and analyse various data regarding the phenomenon / object being studied, its context, processes, and mechanisms. Case studies are best used as a complementary method to other research methods.

 

ADVANTAGES:

  • is a source of comprehensive information on a given topic,
  • uses different points of view, which gives the description and analysis a wider perspective,
  • takes into account the context of the phenomena studied.

DISADVANTAGES:

  • usually requires the use of various sources of information, sometimes difficult to access,
  • it requires a lot of work and is time-consuming,
  • with incomplete data, it provides results of low credibility for the described case.

 

3.2.6. SURVEYS CONDUCTED BY INTERVIEWERS

Quantitative methods rely on standardised measurement. Standardisation enables quantitative data to be collected and counted in a unified way, and also enables their statistical analysis. Standardisation covers:

  • Research tool (interview questionnaire) – the order, content and form of questions put to respondents,
  • The manner of recording respondents’ responses by selecting one option (on the scale) or several options from the “cafeteria” (a set of ready answers),
  • Behaviour of interviewers (pollsters) who are obliged to follow the instructions contained in the questionnaire during the interview.

Respondents’ opinions are transformed into numbers and saved in the database. Then, this information is analysed using statistical methods.
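
For illustration, a minimal Python sketch of how standardised answers are coded into numbers for statistical analysis (the scale labels and answers are hypothetical):

```python
# Hypothetical coding of a 5-point scale from a training questionnaire.
scale = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

answers = ["agree", "strongly agree", "neutral", "agree", "disagree"]
coded = [scale[a] for a in answers]  # respondents' opinions as numbers

mean_score = sum(coded) / len(coded)
print(coded, f"mean = {mean_score:.1f}")  # [4, 5, 3, 4, 2] mean = 3.6
```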

Questionnaire interviews are conducted by trained pollsters who read the questions from the questionnaire to the respondents and write down the answers obtained. The following techniques are used for this type of research:

  • Paper and Pencil Interview (PAPI),
  • Computer-Assisted Personal Interview (CAPI),
  • Computer-Assisted Telephone Interview (CATI).

 

3.2.6.1. Paper And Pencil Interview (PAPI) and Computer-Assisted Personal Interview (CAPI)

Both of these techniques are field-based and are carried out in direct contact between the respondent and the pollster, using a paper questionnaire (PAPI) or an electronic questionnaire displayed on a laptop or tablet (CAPI). The pollster reads out the questions included in the questionnaire and then marks the answers given by the respondent.

 

CONDITIONS OF APPLICATION: suitable for a wide range of topics; a direct (face-to-face) meeting between the interviewer and the respondent is required. The best place for the interview is one isolated from noise and the presence of third parties (in home / work conditions, make sure that bystanders, such as family members or colleagues, do not influence the respondent’s answers).

 

ADVANTAGES:

  • personal, close contact with respondents (the possibility to observe non-verbal signals, respond to misunderstanding of the question or tiredness of the respondent),
  • greater readiness of respondents for a longer interview and more difficult questions than during CATI,
  • with CAPI data is automatically saved during the interview.

DISADVANTAGES:

  • higher costs, including time and cost of travel and arranging a personal meeting with the respondent,
  • lack of a sense of anonymity of the respondent,
  • uncontrolled influence of the pollsters on the respondent’s answers (the interviewer’s effect*)
  • with PAPI, the interviewer must manually enter the data from the questionnaire into the database after the interview, which is time-consuming, adds costs and involves the risk of mistakes.

* This is the influence that the interviewer exerts on the respondent during the survey. The respondent unconsciously interprets the interviewer’s social characteristics (e.g. gender, age), assuming what is expected of him/her. The interviewer may also unknowingly send signals to the respondent suggesting the “right” answers.

 

3.2.6.2. Computer-Assisted Telephone Interview (CATI)

This type of interview is carried out by phone. The interviewer reads the questions displayed on the computer screen, and after receiving the answers marks them in the electronic questionnaire on his/her computer.

 

CONDITIONS OF APPLICATION: suitable for studying established opinions and attitudes, using questions that do not require long reflection, due to the short duration of this type of interview (max. 10-15 minutes) and the specific channel of transmitting and receiving information (the respondent cannot re-read a question several times at their own pace).

 

ADVANTAGES:

  • shorter time and lower cost of reaching the respondent compared to face-to-face interviews (PAPI, CAPI),
  • time flexibility (the possibility to adjust the interview time to the respondent’s preferences, to stop the interview and continue it at a convenient time for the respondent),
  • easy management and control of pollsters’ work,
  • automatic saving (coding) of data during the interview.

DISADVANTAGES:

  • possible difficulty in obtaining respondents’ phone numbers (due to the lack of access and / or protection of personal data), and in the case of employers, no personalised contacts (having only the reception / headquarters phone numbers),
  • interview time limited to 10-15 minutes (due to the limited concentration and short duration of the respondents’ involvement),
  • the tendency of respondents to choose extreme answers, or the first and last points on a scale (resulting from the specific channel of information transfer, which enhances the ‘primacy effect’ and the ‘recency effect’).

 

3.2.7. SELF-ADMINISTERED SURVEYS

In self-administered surveys, the respondents read and mark the answers in the questionnaire on their own (without the pollsters’ participation).

 

CONDITIONS OF APPLICATION: these surveys can be carried out as a paper or online questionnaire (i.e. Computer-Assisted Web Interview – CAWI). In the case of the latter, respondents receive a link to the website with the questionnaire which they can complete on a computer, tablet or smartphone. After answering, the data is sent to the server where it is automatically saved in the database.

A very effective way of collecting quantitative data is the central location survey, which relies on questionnaires being filled in by people who are gathered at the same time in one room, e.g. after the completion of a training, workshop or conference. It is necessary to ensure that the respondents fill in the questionnaires themselves (without support from other people).

 

ADVANTAGES:

  • short time it takes to obtain information (especially in the case of a central location),
  • lower cost compared to questionnaire interviews conducted by pollsters,
  • sense of anonymity in people completing the survey,
  • no interviewer’s effect.

DISADVANTAGES:

  • respondents’ motivation to complete the questionnaire may decrease with no interviewer presence,
  • lack of control over the process of completing the survey*,
  • risk of consulting responses with other people**.

PRACTICAL TIP

The survey questionnaire must:

  • be short, easy, visually attractive to encourage a response,
  • have all necessary explanations, which in other methods are given by the interviewer,
  • have clear instructions (paper version) or algorithms (electronic version) leading the respondent to the relevant questions (based on previous answers, irrelevant questions are filtered out and skipped – see the sketch below).
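To make the electronic variant concrete, below is a minimal sketch of such skip logic in Python. The question IDs, wording and branching rules are hypothetical and chosen only for illustration – in practice, CAWI platforms offer this filtering as a built-in feature.

```python
# Minimal sketch of electronic skip logic: each question declares the
# condition under which it is shown, based on earlier answers.
# Question IDs and wording are hypothetical.

QUESTIONS = [
    {"id": "q1", "text": "Did you take part in the training? (yes/no)"},
    {"id": "q2", "text": "Which part of the training was most useful?",
     "show_if": lambda answers: answers.get("q1") == "yes"},
    {"id": "q3", "text": "Why were you unable to take part?",
     "show_if": lambda answers: answers.get("q1") == "no"},
]

def run_survey():
    answers = {}
    for question in QUESTIONS:
        condition = question.get("show_if")
        if condition is None or condition(answers):  # irrelevant questions are skipped
            answers[question["id"]] = input(question["text"] + " ").strip().lower()
    return answers

if __name__ == "__main__":
    print(run_survey())
```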

 

Questionnaire For Training Participants


* Instead of the right respondent, the survey may be completed by another person, which disrupts the representativeness of the sample.
** Especially in the case of a central location conducted without the researcher’s supervision.

 

3.2.8. ACTIVE / WORKSHOP METHODS OF GROUP WORK WITH YOUNG PEOPLE

 

Below we present additional active methods of collecting data (mainly qualitative), which can be particularly useful in group work with young people, because these methods are engaging, they integrate the team, facilitate cooperation and support the development of soft skills.

Active methods are workshop methods of collecting information that can complement the “classic” methods of evaluation research. They allow you to get quick feedback on a particular action, learn about the ratings, feelings and impressions of the participants as well as develop recommendations. These methods are worth using during workshops, training or conferences, in order to make the meeting more attractive, get to know the participants and better adapt the project activities to their needs.

 

ADVANTAGES:

  • speed – you receive instant feedback during the classes / meetings,
  • casual atmosphere,
  • the projective nature of tasks / questions makes it easier to formulate critical opinions and propose new solutions,
  • possibility to jointly collect qualitative and quantitative data,
  • stimulating self-reflection,
  • a positive impact on the well-being of participants (satisfying the need for expression, acceptance, integration).

DISADVANTAGES:

  • you cannot generalise the obtained opinions to a wider community (those not participating in the meeting),
  • the need for an experienced trainer / moderator to facilitate the session,
  • the lack of anonymity of the participants in the case of group reporting and discussion (a threat to the mental well-being and group relations of people who are particularly vulnerable or have a weak position in the group).

Below you can find examples of active methods implemented in the form of a workshop.

 

CLOTHESLINE

The purpose of this tool is to get to know the expectations of the project audience. It is a visual method of collecting qualitative data.

Each participant receives drawings with clothes (e.g. shirt, underwear, trousers, socks), which symbolise the type of expectations they have towards the project – they may be, for example, hopes, fears, needs, suggestions, etc. Participants are given sufficient time to reflect and complete individual drawings / garments. After writing down their ideas, each of them “hangs their clothes” on a string hung or drawn in the room. Participants can read their expectations aloud and look at others’ “laundry”.

 

TELEGRAM

This tool allows you to quickly summarise part of the meeting (workshop, training) to learn about the mood in the group.

The participants are asked to think about a particular fragment of the classes and describe their reflections with three words: positive, negative and summative (e.g. intense – tiredness – satisfaction). Each person reads their words, which allows for a joint summary of the activities (you can write them down on post-its and stick them on a flipchart, etc.).

 

HANDS

The purpose of this tool is to find out opinions on selected aspects of the project or part of it (e.g. training, internship), as well as to summarise the course and effects of the classes. People participating in the workshop receive sheets of paper on which to draw their hands. Each of the fingers is assigned one assessment category, e.g.:

  • On the thumb – what was the strongest / best side of the training / project,
  • On the index finger – what I will tell my friends about,
  • On the middle finger – what was the weakest point of the training / project,
  • On the ring finger – what I would like to change (element needing improvement),
  • On the little finger – what I have learned or found out.

Participants enter their opinions on each of the fingers in accordance with the above categories. The exercise can be used to find out about the opinions of individuals and / or for group discussion.

 

EVALUATION ROSE

This method is used to gather feedback on many aspects of a project / activity at the same time. It is a visual method that allows you to collect quantitative data – assessments of various aspects of the assessed object using a joint scale.

Participants receive cards with an “evaluation rose” drawn on them. The drawing is inspired by the “wind rose” – instead of compass directions, its axes represent various aspects of the evaluated object (e.g. the usefulness of the training, the attractiveness of the way the content was delivered, the adequacy of the time spent on training). Divide the axes into sections and assign selected values to them (e.g. a 1-5 scale, where 1 is the lowest rating and 5 the highest). Participants are asked to mark their assessment on each axis of the “evaluation rose”. Then you can connect the points and get a visually attractive picture of the opinions (the final effect resembles a radar chart).
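If the roses are collected electronically, the same picture can be reproduced as a radar chart. Below is a minimal sketch in Python with matplotlib; the aspect names and scores are invented for illustration only.

```python
# Minimal sketch: plotting "evaluation rose" ratings as a radar chart.
import numpy as np
import matplotlib.pyplot as plt

aspects = ["Usefulness", "Attractiveness", "Time allocated", "Materials", "Trainer"]
scores = [4, 5, 3, 4, 5]  # one participant's ratings on a 1-5 scale (hypothetical)

# Spread the aspects evenly around the circle and close the polygon.
angles = np.linspace(0, 2 * np.pi, len(aspects), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(aspects)
ax.set_ylim(0, 5)
plt.show()
```

Averaging the scores of all participants before plotting gives a single group-level rose.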

 

TALKING WALL

The purpose of this method is to gather opinions on the value of a particular project activity or the entire project. Thanks to its application, you can obtain qualitative data (types of opinions) and quantitative data (how many people share a particular opinion).

Hang five large sheets of paper on the wall. On each of them, put a question about the conducted activities, e.g.:

  • Sheet 1: What new things did you learn during the training?
  • Sheet 2: How will you use the knowledge acquired during the training?
  • Sheet 3: What did you like the most about the training?
  • Sheet 4: What did you like least about the training?
  • Sheet 5: What would you change in this training?

Participants write down their answers on each sheet or – if the opinion is already there – add a plus / dot next to it. At the end, the facilitator summarises the entries and encourages the group to discuss them and develop recommendations. This form of collecting opinions encourages greater openness; participants gain a sense of agency and overcome their reluctance to speak in public.

 

RUBBISH BIN AND SUITCASE

With this method, you can get a summary of a training session or another project activity. It allows you to collect information on the elements that the participants found useful, redundant, or missing.

Draw a suitcase, rubbish bin and sack on the blackboard / flipchart. Each of the figures symbolises one category of opinion about the evaluated activity:

  • Suitcase: “What do I take with me from the training?” (what will be useful to me, what will I use in the future),
  • Rubbish bin: “What was unnecessary during the training?” (what is not useful to me, what was redundant),
  • Sack: “What was missing?” (what should be added to the next training).

Then you can ask the participants to speak or write down their opinions on sticky notes or directly on the pictures on a flipchart.

 

PRACTICAL TIPS FOR CONDUCTING GROUP ACTIVITIES

It is good for the participants to sit in a circle so that everyone can see each other. To increase their involvement, you can propose that they themselves indicate the next person to talk, e.g. by throwing a ball (this solution can be used provided that no one in the group is discriminated against). Oral statements should be noted down – this can be done by the person conducting the classes while they are taking place (e.g. on the blackboard, flipchart) or by their assistant.

 

3.3. How to choose appropriate research methods

Research methods must fit well with the evaluation concept and plan. To make the right choice, consider whether the methods are relevant to:

  • The purpose, subject, scope and type of evaluation, as well as the criteria and evaluation questions – will these methods provide you with the information necessary to answer your evaluation questions?
  • The data sources from which you plan to obtain information – will they be able to provide information on the groups that will take part in the evaluation research?
  • The characteristics of the interviewees / respondents – do the methods take into account the group size, their perceptive capabilities, communication abilities, health condition, etc.?
  • The circumstances of the data collection – will all the necessary data and interviewees / respondents be available at a particular moment? Will the chosen method suit the place of data collection?
  • The resources you have access to – does the method require the availability of qualified or independent researchers and other resources (organisational, technical, financial, time)? Will you be able to use the method on your own? Do your resources allow you to use it?

Knowledge of research methods (quantitative and qualitative) and related tools will help in preparing the second part of the evaluation concept (see chapter 2.4, tool 4), which will be supplemented with methodological issues. This element enables you to gather information to answer evaluation questions.

Tool 6: Logic Matrix Of The Evaluation Research

 

3.4. How to design research tools

A common mistake is to start an evaluation by creating research tools, e.g. a questionnaire for project recipients. Remember that you will not be able to choose the right research methods or prepare the right measurement tools (e.g. scenarios, questionnaires, observation sheets) in isolation from the overall concept of the evaluation. Therefore, start constructing research tools only after determining:

    • The subject, scope and purpose of the evaluation,
    • Evaluation criteria and questions,
    • Studied groups of people and research methods.

Without referring to the above elements, you will not be able to create correct research tools, because you may include questions that are unrelated to the purpose of the research, making it impossible to answer the evaluation questions and address the evaluation criteria. “Bad” tools contain useless questions, are overloaded or incomplete, do not provide relevant information, and do not allow for the formulation of meaningful recommendations.

The questions included in the research tools are a particularisation of the evaluation questions. Remember that evaluation questions are ones the evaluators ask themselves, not the respondents! These two types of questions should not be confused, as they are formulated in languages adjusted to the needs of:

  • Evaluators / evaluation stakeholders → evaluation questions,
  • Studied groups of persons (interviewees, respondents) → questions in research tools.

If you are not sure whether a particular question should be put to the interviewees / respondents, consider whether they will be able to answer it and whether the information obtained will allow you to answer the evaluation questions and formulate useful recommendations.

 

HOW TO ASK QUESTIONS

  • The number of questions included in the tools should be appropriate to the purpose and duration of the research.
  • Research tools should have a transparent structure, with the main issues identified (e.g. “reasons for joining the project”, “assessment of different types of support”, “effects of participation in the project”). Topics should be grouped thematically (e.g. organisational issues).
  • Questions should be asked in a specific order. Put preliminary questions (relatively easy) at the beginning of your tool. They should be followed by introductory questions in the subject (not very difficult), then main questions (key for the purpose of the research). Put the most difficult questions in the middle of the tool. Finally, ask summary and closing questions.
  • Questions should be asked in a logical order that cannot surprise or confuse the research participants. Each question should follow on from the previous one or – in the case of an interview – refer to the respondent’s statements.
  • The language of an interview should be easy to understand: use sentences that are as short as possible and language familiar to the research participants – without foreign words, specialised terminology, jargon or abbreviations.
  • Questions should be formulated precisely – e.g. there should be no doubt what period of time they relate to (don’t ask “whether recently…”, but “whether in the last week / month / year…”).
  • Do not ask about several issues in one question (“what are the strengths and weaknesses of the project?”) and do not use negative questions (“shouldn’t you…”, “don’t you prefer…”). Each of these errors makes it difficult to understand the questions and interpret the answers.
  • Questions and proposed answers must not touch on issues sensitive to the research participants – they must not lead to the disclosure of traumatic experiences, or to declarations of behaviour or beliefs contrary to the law or morality. When anonymity is not guaranteed, do not ask about property status, family matters or health issues.
  • Do not ask questions suggesting an answer – do not present any of the options as being in accordance with the rule of law or morality, do not refer to the authorities or the opinion of the majority.

The differences between quantitative and qualitative research tools, the structure / construction of scenarios and questionnaires and the most common mistakes in their design are discussed in the online course.

IV. CONSIDERATIONS WHEN EVALUATING PROJECTS AIMED AT YOUNG PEOPLE AGED 15-24

 

When undertaking the evaluation of projects aimed at young people aged 15-24, you should take into account that people of that age differ from adults, mostly because of their legal situation, their living and technological conditions, and the psychological and social needs related to the intensive developmental processes on the verge of adulthood.

 

4.1. What are the standards of conducting research on young people?

The United Nations Convention on the Rights of the Child and many additional provisions in individual countries guarantee special legal protection for persons under the age of 18. According to the law, a person under the age of 18 is a child. Although in most countries one acquires certain rights at the age of 15 (for example the right to choose one’s school, the right to take up work), a minor’s participation in YEEAs projects as well as in various types of research requires the consent of their parent or legal guardian.

 

4.1.1. Consent for a minor’s participation in evaluation research

  1. Consent for participation in evaluation studies from both the minor and his/her parent or legal guardian must refer to the specific research (name of the research or evaluated project and the entity or entities conducting it).
  2. The person giving consent for a minor’s participation in the research should receive all the necessary information, such as:
    1. The purpose of the research and how the findings will be used,
    2. The scope and method of collecting information to be obtained from the research participant, including whether the research requires multiple contact with the participant, especially a long time after the first round of research,
    3. Assurance of anonymity and protection of confidentiality of data obtained about the participant in the research,
    4. Information about the right to refuse to participate in the research and to withdraw from participation at any stage.
  3. It should also be remembered that in EU countries it is necessary to obtain consent for the processing and storage of personal data.
  4. If sound or video recording devices are to be used, explicit consent for recording must also be obtained.
  5. Examples of documents used to obtain consent for a minor’s participation in research are included in the Annexes (Annexes 1 and 2).

It is worth obtaining such consent at the beginning of the evaluated project, because it can be combined with the more general consent for a minor’s participation in the project (e.g. in the same document).

 

4.1.2. Protection of minors in the ethical codes of professional researchers

The basic guidelines for conducting research among people under 18 are:

  • Obtaining informed consent (described above) from the minor and their legal guardian,
  • Providing a sense of security to the research participants (e.g. the researcher does not attempt to make first contact with minors without the presence of the adult responsible for the child (teacher, guardian, parent); the person collecting the information has documents confirming their status as a researcher; the training and experience of the people conducting the research guarantee safety and a way of carrying out the research appropriate to the specific needs of young people),
  • Ensuring that all the information provided, including the questions put to the interviewees / respondents, can be understood (it is helpful in this respect to test quantitative tools on a small scale before applying them and to discuss the tools with specialists),
  • Ensuring that the scope or method of obtaining information from young people will not directly cause any material or non-material harm, including harm related to mental well-being and social relations; this applies in particular to such issues as:
    • Sensitive issues that lower the sense of autonomy or self-esteem,
    • Relationships with their peer group and other important people.

If you have any doubts, it is worth consulting specialists.

  • Compliance with the general principles of social research, including in particular:
    • Guaranteeing the confidentiality of information obtained from the research participants at the stage of data collection (no participation of other people apart from the researchers and the respondents), during data processing (anonymisation / pseudonymisation), and in publishing the findings (collective presentation of quantitative data, pseudonymisation of qualitative data),
    • Ensuring the anonymity of the research participants,
    • Ensuring the safety and undisturbed work of the researchers.
  • Standards for conducting research on minors are included in the codes of ethics in force in the communities of professionals conducting social and market research.

4.2. How to adjust the methodology of evaluation research to a young person’s way of life?

 

4.2.1. Major activity – formal education

Studying is the dominant activity in the life of young people aged 15-24. For instance, in Poland participation in formal education is compulsory until the age of 18, although training in the form of “vocational preparation” combined with paid work is also allowed. However, the findings of the Labour Force Survey show that the vast majority of those aged 18-24 still participate in organised forms of education. Young people study full-time in schools or colleges, but often also part-time, attending courses or training. Many YEEAs activities are likewise conducted in the form of group learning. Gathering the beneficiaries of the evaluated project in one place and time allows you to carry out various evaluation-related activities, primarily collecting data through observation, central location surveys, focus group interviews, etc.

However, bear in mind that when conducting research in educational institutions, you should ensure appropriate conditions for collecting data, such as an isolated room and dedicated time (respondents should not be under time pressure).

 

4.2.2. Weak position on the labour market

A basic element of the situation of young people, and the main area of influence of YEEAs projects, is their position on the labour market. In studies devoted to this subject, it should be taken into account that:

  • In the 15-24 age group, only about every third person performs any paid work (including unpaid help in a family member’s paid work) – so you should never ask questions that assume a particular person is working or has income from work,
  • Work by young people, especially those under the age of 18, occurs in highly diversified, often atypical forms, e.g.: unpaid help in the paid work of a close family member; one-off, occasional, holiday, part-time, replacement or “trial” work; various types of internships, apprenticeships and vocational preparation, in which the proportions of study, work and earnings vary widely and may or may not be considered work; work in exchange for accommodation, food and “pocket money”; promoting products or services on social media in exchange for the goods or services received; voluntary work with various levels of reimbursement of own costs; work performed under various contracts, ranging from regular employment contracts to specific-task contracts; undeclared work such as tutoring; and income from illegal activities.

When asking young people about work, you need to precisely define what kind of activity you consider to be work and / or what features are decisive for you (legality, type and amount of remuneration, time dimension, stability, linkage with educational obligations, legal form).

 

4.2.3. Increased mobility

People aged 15-24 change their place of residence much more often than older people. They also exhibit higher than average daily mobility. As a result, traditional methods of collecting quantitative data based on a home address do not work for young people – a postal questionnaire is often sent to an address that no longer applies, or the interviewer calls when no one is there.

Therefore, in the case of young people, it is particularly important to obtain their mobile contact details, such as a phone number or the name of an individual profile on a messaging app, and then to base the data collection strategy on these contacts, using electronic tools. Studies comparing postal questionnaires with the CAWI method show that the response rate for the latter is much higher, and that its advantage grows the younger the respondent is.

 

4.2.4. Dominance of smartphones in everyday communication

Young people are more willing than older people to use electronic technologies rather than paper. They are also much more proficient with them and prefer to deal with everyday matters on a smartphone rather than a computer. Therefore, in research among young people it is worth using electronic research tools, preferably adapted to smartphones (one simple question per screen, a simple and legible layout, answer lists that are not too long). One example of an application that can be used for working with young people is Kahoot.

 

4.2.5. Busy and overstimulated life

A characteristic feature of modern youth is their openness to the many stimuli delivered via smartphones, which young people never let out of their sight. Moreover, learning, developing their own interests and, above all, social life absorb young people’s attention to the point where they forget about unusual or less important obligations, such as filling out a questionnaire. To counteract this, it is important to regularly send messages reminding participants about the dates of scheduled interviews, their promises to complete a survey, etc.

 

4.2.6. Widespread use of social media

The widespread use of social media by young people, including their presence in numerous social media groups, is increasingly being exploited for research purposes. It is possible to find groups of young people from a particular locality or school, as well as groups with specific musical or ideological interests, etc. After joining a group, possibilities open up for recruiting research participants (e.g. to a comparison group). You may consider asking individual group members a question as a researcher, or (if the group moderator agrees) publicly posting a link to the online survey or a request for contact. It is better not to open a public discussion at the group level, as this prevents the research from being confidential, exposes the participants to being judged by other group members, and the public nature of the statements lowers their credibility.

Following the example of market research agencies, you could also consider establishing a special community group (MROC method – Market Research Online Communities), in which young project beneficiaries would agree to participate. However, such activities require a precise definition of the group’s goal. If the purpose is research – then it should be a short-term group (MROC), and during this period it should be professionally moderated, similarly to Focus Group Interviews (FGI).

 

4.2.7. Difficulties in reaching NEETs

The difficulties characteristic of research among young people intensify when the evaluated project is aimed at young people who are not studying or working and are not covered by any form of education, support or institutional supervision that would group them together (NEETs). Reaching young people in such a situation is a serious challenge, especially when you need data for comparisons with the NEETs who participate in the project.

Often, the only solution to this type of problem is to compare groups participating in different projects from the same programme, or to compare the results obtained in the group covered by the project with the group of candidates who did not become its beneficiaries (taking into account the impact of the reasons for not qualifying for the project).

4.3. How to deal with the psychological and social needs of young people

4.3.1. Increased need for confidentiality of the provided information

The key psycho-social factor that should be taken into account when planning and conducting research involving young people is their particular susceptibility to influence. This results both from their still-forming personality and from a fear of judgement, or even sanctions, on the part of the peer group and of the adults on whom a young person depends mentally and financially. The latter include project staff. Taking this into account, one should:

  • Inform the research participant about the confidentiality of the information provided and about the measures taken for this purpose, both in the way data is collected and in its anonymisation at the stage of data analysis and use of the findings,
  • Back up these assurances with concrete arrangements, e.g. by conducting interviews (IDI, FGI) without the participation of third parties and by creating conditions for completing questionnaires that guarantee anonymity and confidentiality, such as having auditorium questionnaires dropped into a collection box.

 

4.3.2. Increased need for autonomy and emancipation

According to the findings of developmental psychology, people aged 15-24 are – as they shape their identity – particularly sensitive to issues related to respect for their freedom. Consequently, their right to participate or not to participate in research should be clearly communicated, and the reasons for and consequences of each available choice clearly explained. This, however, is only a necessary condition.

On the other hand, positive motivation for young people to participate in evaluation research can be created by responding to their need to move from subordinate, executive positions to the role of co-decision makers and co-creators. For young people to be really involved in evaluation research, you have to treat them as partners with various roles – including decision-making and consultative ones – in addition to the classic role of research subject. This can be achieved by involving them in the various stages of the evaluation process: from reporting information needs, through co-deciding on priorities, planning and participating in implementation, to consulting on the findings (see section 2.2).

V. DATA ANALYSIS

 

Once you finish collecting the data, you should start analysing it. This means working through all the research material (information obtained with the various methods), answering the evaluation questions, and valuing the evaluated project according to the chosen criteria. Therefore, at this stage it is worth going back to the evaluation concept, which acts as a compass leading the evaluator through the entire research process (not only information collection, but also data analysis, drawing conclusions and formulating recommendations).

 

The purpose of data analysis is:

  • Compilation and verification of collected information,
  • Description, assessment and juxtaposition of the quantitative and qualitative data obtained (checking how reliable and consistent they are),
  • Identification and explanation of various cause and effect relationships that will allow you to understand the mechanisms of the studied phenomena,
  • Interpretation of the obtained evaluation findings in relation to wider knowledge about the subject of the evaluation (evaluandum),
  • Obtaining detailed answers to evaluation questions and credible valuing of the evaluandum according to chosen criteria,
  • Drawing conclusions from the collected information and formulating useful recommendations based on it.

In the data analysis, you should bear in mind the principle of triangulation, i.e. the compilation of data obtained from various sources, using various research methods, by different researchers. Thanks to this, you have the opportunity to supplement, deepen and verify respective information in order to obtain a full picture of the evaluated project.

Although during data analysis the actions undertaken are common to both types of data (quantitative and qualitative), such as reduction, presentation and concluding, the obtained findings are in a different form for each of them. The comparison of these data is presented in the table below.

Before starting the data analysis, it is necessary to check whether all research materials have been anonymised, i.e. that they contain no personal data (names, surnames, addresses, including e-mail addresses, telephone numbers, etc., or contextual information enabling the identification of research participants). Interviewees who participated in the qualitative part of the research (IDIs, FGIs) are given pseudonyms, e.g. ones reflecting features important for the researcher. The personal information concerning research participants should be kept separate from the content data they provided.
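A minimal sketch of this separation in Python is shown below – personal details are split off into a separately stored key, and each record receives a generated pseudonym. The field names and records are hypothetical, for illustration only.

```python
# Minimal sketch of pseudonymisation: personal data is separated from
# content data and replaced with generated pseudonyms. All names and
# fields below are invented for illustration.
interviews = [
    {"name": "Anna K.", "email": "anna@example.com", "age": 19, "text": "…"},
    {"name": "Piotr Z.", "email": "piotr@example.com", "age": 22, "text": "…"},
]

key = {}         # pseudonym -> personal data; store separately and securely
anonymised = []  # safe to use in the analysis
for i, record in enumerate(interviews, start=1):
    pseudonym = f"R{i:02d}"
    key[pseudonym] = {"name": record["name"], "email": record["email"]}
    anonymised.append({"id": pseudonym, "age": record["age"], "text": record["text"]})

print(anonymised)
```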

 

The main stages of data analysis are as follows:

1. Selection and ordering of the collected research material – during this stage, the correctness and completeness of the data are checked, the reliability of every piece of information is verified (thanks to triangulation), and data that is not useful for the purpose of the evaluation is removed. You should collect all the information in a form that facilitates further analysis – recordings of the interviews can be transcribed or written down in accordance with a previously prepared scheme (which includes a summary of the respondents’ statements). In the case of a survey, you should remove uncompleted questionnaires from the analysis, etc.

2. Constructing analytical categories (selecting the type of encoding and data coding – their categorisation and classification) – this means assigning codes / “labels” to each piece of information obtained, representing specific categories of information, thus allowing for the organisation of the research material.

  • In the case of closed-ended questions, the answer codes take a numerical form (e.g. “female” = 1, “male” = 2), which allows you to analyse the obtained data using statistical programs (or spreadsheets). First, you need to create a coding instruction that contains the names of the codes and the numbers used in the questionnaire to identify the answers given by the respondents to particular questions. Paper surveys require manual coding – to do this, you need to number the answers in the questionnaire, code the answers and enter this information into the database (see the sketch after this list). Electronic surveys are coded automatically.
  • In the case of open-ended questions and other qualitative data, the codes for particular answers have a verbal form (e.g. “training organisation”, “conducting a training”). Codes for qualitative data can be planned before or after reading the entire material. The first approach is called “top-down” coding and relies on a good knowledge of the research problem and / or its grounding in a given theory. The second is open (“bottom-up”) coding, which builds categories out of those identified in the collected material (e.g. relating to the research questions). In both cases, you need to develop a coding scheme that organises the codes (establishing a code hierarchy with superior / collective codes and detailed codes), so that you can present the collected information in a consistent form.

The information corresponding to the given codes can be summarised in one table, which will make it easier to find elements that are similar or common among the research participants, as well as information that differentiates them. It also allows you to see relationships between the interviewees’ characteristics or situation and their statements.

Tool 7: Table For Summarising Information From Interviews
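As an illustration of the numeric coding described above, here is a minimal sketch in Python using pandas. The column names, coding instruction and answers are hypothetical; the same operations can equally be done in a spreadsheet or a statistical program.

```python
# Minimal sketch: applying a coding instruction to closed-ended answers
# and producing a simple frequency table. All data is invented.
import pandas as pd

raw = pd.DataFrame({
    "gender": ["female", "male", "female", "female"],
    "q1_training_useful": ["high", "medium", "high", "low"],
})

# Coding instruction: verbal answers -> numeric codes.
codes = {
    "gender": {"female": 1, "male": 2},
    "q1_training_useful": {"low": 1, "medium": 2, "high": 3},
}

coded = raw.replace(codes)  # apply the coding instruction column by column
print(coded)

# How many respondents gave each answer to q1:
print(coded["q1_training_useful"].value_counts().sort_index())
```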

3. Analysis and interpretation of the obtained findings (explanation and assessment by the researcher of a particular issue / problem)

Data analysis is an important element of evaluation because it allows you to summarise the findings and find common and divergent elements in the collected materials. It is worth choosing and describing the method of data analysis at the stage of planning the evaluation. Data obtained during evaluation can be analysed in a number of ways. The simplest distinction is division into:

  • Quantitative data analysis (numbers, answers to closed questions) – for simple analyses you can use, for example, MS Excel, and for more complex analyses statistical programs, such as SPSS or Statistica, operated by specialists, whose services can be used if necessary.

PRACTICAL TIP

For small groups, quantitative data should not be presented in the form of percentages, i.e. reporting that 20% of respondents in a group of ten hold a particular opinion. It is better to use absolute numbers and say that it is two people.
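This tip can be expressed as a simple formatting rule, sketched below in Python; the threshold of 30 respondents is an arbitrary assumption for illustration, not a fixed methodological rule.

```python
# Minimal sketch: report absolute numbers for small groups and
# percentages only when the group is large enough. The threshold is
# an assumption made for this example.
def describe_share(count: int, total: int, threshold: int = 30) -> str:
    if total < threshold:
        return f"{count} out of {total} respondents"
    return f"{count / total:.0%} of respondents (n={total})"

print(describe_share(2, 10))    # -> "2 out of 10 respondents"
print(describe_share(40, 200))  # -> "20% of respondents (n=200)"
```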

  • Qualitative data analysis (e.g. text, interview statements) – for simple analyses, it is enough to compile the data in a chart / matrix, and for more extensive research material, it is worth using programs that facilitate the analysis, e.g. QDA Miner, OpenCode, Weft QDA.

Some of them are briefly presented in the table below:

Own elaboration based on: Peersman, G. (2014). Overview: Data Collection and Analysis Methods in Impact Evaluation, Methodological Briefs: Impact Evaluation 10, UNICEF Office of Research, Florence.

 

IMPORTANT TIP

When analysing the data, it is very important to determine what changes have occurred as a result of the project and what role respective activities played in them. Therefore, it is necessary to answer the question to what extent the project activities influenced the achievement of the assumed result indicators and what was the role of project activities among other factors influencing the expected changes (see chapter 2.5).

 

When analysing data, it is worth referring to the previously described theory of change adopted as part of the description of the project logic. When planning the change at the beginning of the project, you made certain assumptions about the conditions that must be met (resources provided, implemented activities) in order to achieve the given results, i.e. you have planned the cause-and-effect chain. Evaluation verifies our theory of change – it can confirm it or show some gaps in it (e.g. missing / redundant elements) and recommend improvements for the future.

There are three general strategies for causal inference. Using a combination of these strategies can help to increase the credibility of the conclusions drawn:

Data analysis approaches for causal attribution with various options

Own elaboration based on: Rogers, P. (2014). Overview: Strategies for Causal Attribution, Methodological Briefs: Impact Evaluation 6, UNICEF Office of Research, Florence.

VI. REPORTING

 

6.1. How to make use of the results of data analysis?

After completing the qualitative and quantitative data analysis stage, you have a lot of information, which should be used properly and wisely. These data should be translated into knowledge that will allow you to make sound decisions regarding project improvement (e.g. how to adapt it better to the needs of its recipients, how to achieve similar effects using fewer resources, how to obtain greater impact and sustainability of the results).

Based on the findings of conducted analyses, you can draw conclusions that relate to phenomena or problems identified during the evaluation. These conclusions relate primarily to the issues described in the evaluation questions but may also include issues that were additionally diagnosed during the research.

In the evaluation report, you should present not only the findings of the evaluation research, but also their interpretation (i.e. reference to a broader knowledge of the studied issue), as well as the conclusions derived from the obtained data and the accompanying recommendations. The above diagram presents the relationships between these elements. To get through this process, you can use the questions that accompany the subsequent stages (in the diagram above they are marked in italics).

Below you can find an example of the process of formulating conclusions and recommendations regarding a training project directed to NEETs (the findings refer to the quantitative part of the research).

Tool 8: The relation between the evaluation’s findings, their interpretations, conclusions and recommendations

Remember to take into account the various elements of the evaluation research, e.g. the methods used (qualitative, quantitative), the sample selection methods and the degree of responsiveness (the level of return of questionnaires), which may impose some limitations when formulating conclusions.

 

RULES FOR FORMULATING THE CONCLUSIONS:

  • Treat your conclusions critically, look at them from a distance, and constantly seek alternative explanations for the phenomena found. It is always worth discussing your conclusions with another, preferably more experienced, person (“a critical friend”) who – not being involved in the evaluation – will look at them with a “fresh eye”.
  • Make sure that you correctly interpret the statements given by the research participants, e.g. by checking your conclusions with them. If you are not completely sure about a conclusion, soften it by using terms such as “probably”, “possibly”, “maybe”.
  • Do not generalise the conclusions for the whole population (i.e. people who did not participate in the research) if you used qualitative methods* or the sample you studied was not randomly chosen.
  • Learn how to avoid mistakes in drawing conclusions from our online course.

HOW TO FORMULATE THE RECOMMENDATIONS?

  • Group them thematically (e.g. project management, cooperation with partners, implemented activities, project effects).
  • Relate them to both the strengths and weaknesses of the subject of the evaluation. Don’t focus only on the negatives – also show the areas that work well and don’t need any changes. Equally, concentrating solely on the positives will undermine the credibility of the evaluation.
  • Make sure that recommendations are detailed, precise and realistic (possible to implement), so that they are also practical, accurate and useful.
  • Assign to each recommendation: a recipient (with whom it will be agreed in advance), a deadline, and a degree of importance, as this increases the chances of them being implemented.

* In this case, the conclusions relate only to the persons who participate in the research.

 

Conclusions and recommendations can be presented in a concise table as a summary of the report, or as an independent “final product” of the evaluation. The following is an example of a recommendation table regarding the evaluation of a training project:

Tool 9A: Recommendations Table

In the simplified version, the table of conclusions and recommendations may look like this:

Tool 9B: Simplified Recommendations Table

 

6.2. What are the features of a good report?

The report is the culmination of the evaluation process: it presents the evaluation concept, the course of the research and its findings, as well as the conclusions and recommendations based on them.

During the evaluation process, various types of reports may be written, e.g.:

The final report can be prepared in various forms, which – like the scope of the content presented in them – should be tailored to the needs of the individual groups of recipients (evaluation stakeholders). Examples of ways to present and promote evaluation findings include:

    • The final report in an electronic version (less often in a paper version) distributed to stakeholders and / or posted on the Internet (e.g. on the project website or the entity ordering the evaluation website),
    • Summaries of reports in the form of information folders / brochures containing key conclusions and recommendations,
    • A multimedia presentation during conferences and meetings, e.g. with stakeholders, partners,
    • An infographic posted on the project website, on social media, and sent to local media,
    • Printed posters presented at various events, e.g. conferences, picnics,
    • Films (video presentations) addressed to large audiences (including a dispersed audience), and posted on the Internet,
    • Follow-up – a presentation on the effects of implementing the recommendations.

The report in the version of the extended text document may have the following structure:

    • Title page – name of the contracting institution, name of the institution conducting the evaluation (if the evaluation was external), date of preparation, authors, title (e.g. Ex-post evaluation of project X),
    • (Executive) summary – main elements of the evaluation concept, key findings, conclusions and recommendations (necessary for extensive reports),
    • Table of contents – enabling automatic access to a given page of the report,
    • List of abbreviations (and possible definitions of specialised terms),
    • Introduction – information on the commissioning institution, type and cut-off date of the evaluation, name of the evaluated project, sources of its financing, and organisation that has implemented it,
    • Subject and scope of the evaluation – a brief description of the evaluated project and its parts which were included in the evaluation,
    • Goals of the evaluation – explanation of what the evaluation was conducted for, what was expected of it,
    • Evaluation criteria and questions – an indication of how the value of the subject of the evaluation was estimated / what was supposed to be learnt through the evaluation,
    • Methodological issues – a description of the sources of information and research methods used, the sample selection methods, the course of the research, and the levels of responsiveness (what percentage of respondents participated in the survey). It is also worth describing the problems encountered during the implementation of the research, as well as the ways and effects of dealing with them,
    • Description of evaluation findings – a description of the qualitative and quantitative findings collected during the research, along with their interpretation, according to the adopted method of presentation (e.g. in accordance with evaluation criteria / questions). Findings from different sources and obtained with different methods should be confronted (by triangulation). Every chapter can present partial summaries,
    • Conclusions and recommendations – a concise but substantive answer to evaluation questions. The conclusions must be based on the findings of the study and the recommendations should be closely related to them,
    • Attachments / annexes (optional) – e.g. research tools used, tabular summaries, case studies, etc.

It is worth remembering that regardless of what form of report you choose, both in the case of external and internal evaluation, any changes to the content of this document require the consent of the evaluator.

  • If you want to learn more about the table of comments to the evaluation report, click here.

A good evaluation report should meet the following conditions:

  • be adequate to the terms of the contract and the needs of the recipients, be written in a language they understand,
  • contain a list of abbreviations used (and possible definitions of key terms when, for example, a report is to be presented to a wider audience that may not know them),
  • have a clear and legible structure,
  • have a concise form, and at the same time comprehensively answer evaluation questions (without “waffling”),
  • be based on credible and reliable findings that have been properly analysed,
  • present not only the obtained findings but also their interpretation, and indicate the relationship between the data and the conclusions,
  • contain justified conclusions and useful recommendations related to them,
  • contain graphic elements (tables, charts, diagrams) and quotes from respondents’ statements that make the reception of the report content more attractive.

The following table will help you in verifying the quality of the evaluation report. It contains detailed criteria for its assessment. You can choose its scale (numeric or verbal) and assess your own or a commissioned report.

Tool 10: Report Quality Assessment Table

 

6.3. How to deliver what is needed for the recipients of your evaluation

The possibility of using evaluation findings depends on the type of evaluation, i.e. the moment in the project’s life cycle at which the evaluation is carried out.

The greatest scope for introducing changes is offered by ex-ante evaluation, carried out before the evaluated undertaking / project has started.

In the case of mid-term evaluation, the opportunities for using recommendations to introduce specific changes are limited as the project is in progress and individual actions are gradually implemented. Nevertheless, some of its elements may still be modified, e.g. in order to better adapt the ongoing activities to the needs of their beneficiaries, to ensure that the planned indicators are achieved at the assumed level, or to adapt them to the changed project implementation conditions.

The findings of ex-post evaluation can only help you in planning the next (same or similar) projects because the evaluated project has already been completed.

When evaluation findings are related to organisational or management issues, you can use them for current work.

The dissemination of evaluation findings (most often in the form of conclusions and recommendations) among its stakeholders is a very important stage, as it contributes to a better understanding of the need for change, to strengthening the cooperation, commitment and motivation to act, as well as to obtaining support in this process.

Sharing the findings of the evaluation with other people / entities may show your ability to self-reflect on the value and quality of your activities. It is a sign of your readiness to engage in discussion on various aspects of the subject of the evaluation, as well as the ability to assess its strengths and weaknesses and the desire to develop and improve in cooperation with other stakeholders.

Tool 11: Dissemination Of Evaluation Findings Table

AFTERWORD

 

If you are reading these pages, you have probably read the whole toolkit and learned how to conduct an evaluation of your projects and what it is for – especially if they are youth employment projects, and even more so if you are interested in assessing their real (net) impact.

Thanks to the participatory approach to evaluation, you acquire information that is vital for key decisions about the project and very important for the stakeholders, especially the donors. What is more, the beneficiaries become empowered, and the project team becomes better informed, coordinated and motivated. Finally, you are on the way towards a more relevant, effective, sustainable, efficient and simply better project!

To make preparing your evaluation easier, you can use the templates of evaluation tools (see Attachments). And to deepen your understanding of evaluation even further, check the online course and networking activities of the Youth Impact project – all available at www.youth-impact.eu.

Learn more

Interesting sources to learn more about evaluation available online:

REFERENCES

 

  • Babbie E. (1st edition in 1975) The Practice of Social Research.
  • Babbie E. (1st edition in 1999) The Basics of Social Research.
  • Bartosiewicz-Niziołek M., Marcinkowska-Bachlińska M., et al. (2014) Zaproszenie do ewaluacji, zaproszenie do rozwoju [Invitation to evaluation, invitation to development], KOWEZiU, Warszawa (pp. 69-85)
  • Bartosiewicz-Niziołek M. (2012) Ewaluacja programów i przedsięwzięć społecznych – katalog dobrych praktyk [Evaluation of social programmes and undertakings – a catalogue of good practices], ROPS, Kraków
  • Bienias S., Gapski T., Jąkalski J. (2012) Ewaluacja. Poradnik dla pracowników administracji publicznej [Evaluation. A guide for public administration employees], Ministerstwo Rozwoju Regionalnego, Warszawa
  • Blalock H. (1st edition in 1960) Social Statistics.
  • Checkoway B., Richards-Schuster K. Participatory Evaluation with Young People, W.K. Kellogg Foundation
  • Ferguson G. A., Takane Y. (1st edition in 1971) Statistical Analysis in Psychology and Education.
  • Flick U. (1st edition in 2007) Designing Qualitative Research.
  • Flick U. (1st edition in 2007) Managing the Quality of Qualitative Research.
  • Gibbs G. R. (2009) Analyzing Qualitative Data.
  • Kloosterman P., Giebel K., Senyuva O. (2007) T-Kit 10: Educational Evaluation in Youth Work, Council of Europe Publishing
  • Kvale S. (2007) Doing Interviews.
  • Lisowski G., Haman J., Jasiński M. (2008) Podstawy statystyki dla socjologów [Basics of statistics for sociologists], Warszawa
  • Maziarz M., Piekot T., Poprawa M., et al. (2012) Jak napisać raport ewaluacyjny [How to write an evaluation report], Ministerstwo Rozwoju Regionalnego, Warszawa
  • Maziarz M., Piekot T., Poprawa M., et al. (2012) Język raportów ewaluacyjnych [The language of evaluation reports], Ministerstwo Rozwoju Regionalnego, Warszawa
  • Miles M. B., Huberman A. M. (1st edition in 1983) Qualitative Data Analysis.
  • Nikodemska-Wołowik A. M. (1999) Jakościowe badania marketingowe [Qualitative marketing research], Polskie Wydawnictwo Ekonomiczne, Warszawa
  • Peersman G. (2014) Overview: Data Collection and Analysis Methods in Impact Evaluation, Methodological Briefs: Impact Evaluation 10, UNICEF Office of Research, Florence
  • Rapley T. (2007) Doing Conversation, Discourse and Document Analysis.
  • Rogers P. (2014) Overview: Strategies for Causal Attribution, Methodological Briefs: Impact Evaluation 6, UNICEF Office of Research, Florence
  • Silverman D. (1st edition in 1993) Interpreting Qualitative Data.
  • Wieczorkowska G., Kochański P., Eljaszuk M. (2005) Statystyka. Wprowadzenie do analizy danych sondażowych i eksperymentalnych [Statistics. Introduction to the analysis of survey and experimental data], Wyd. Naukowe Scholar, Warszawa
  • W.K. Kellogg Foundation (2004) Logic Model.

YOUTH EMPLOYMENT EVALUATION TOOLKIT

Your leverage to better youth employment projects

 

TOOLKIT PDF

TOOLKIT E-BOOK

 

Authors: Monika Bartosiewicz-Niziołek, Sławomir Nałęcz, Zofia Penza-Gabler, Ewa Pintera

 

INTRODUCTION

The purpose of this toolkit is to present practical tools supporting the evaluation of projects aimed at increasing the employment of young people.

The main recipients of this toolkit are NGOs and other entities which want to analyse their projects in the abovementioned area. Such evaluation may be aimed at:

  • Measurement of the project’s effectiveness in achieving project goals and results (outputs, outcomes),
  • Assessment of the usefulness of the project for its beneficiaries / participants and the sustainability of the achieved results,
  • Better adaptation of the project to the needs of its beneficiaries and the labour market,
  • Examination of the project impact on a wider group of people who did not participate directly in it (e.g. families, friends of the project beneficiaries),
  • Assessment of project efficiency in terms of resources engaged in the project and its effects.

This toolbox is a supplementary material to the course “Towards better youth employment projects – learning course on evaluation”, available HERE. While during the course you can get knowledge and training in evaluation adjusted to your needs (basic or advanced level), the toolbox provides some universal knowledge matched with practical instructions, tools and examples designed to develop evaluation skills and to support you in using the knowledge acquired during the distant course. This is achieved, among others, by question sets, tables and tool templates facilitating the design and planning of evaluation, gathering the necessary information, and then formulating conclusions and recommendations aimed at improving the projects carried out by your organisation.

The toolbox has been developed by the Jerzy Regulski Foundation in Support of Local Democracy in Poland, in cooperation with the Research Institute for Innovative and Preventive Job Design (FIAP e.V., Germany), Channel Crossings (Czech Republic), and PEDAL Consulting (Slovakia), within the framework of the Youth Impact project, financed by the EEA Financial Mechanism and the Norwegian Financial Mechanism. The project seeks to provide tools and services to improve the ability and capacity of Youth Employment and Entrepreneurship Support Actions implementers to efficiently evaluate the impact of their activities. The action will be carried out in the years 2019-2022.

The activities in this project are aimed at developing the evaluation competences of entities that support employment and entrepreneurship of young people.

GLOSSARY OF PROJECT TERMS

 

Activity (of the evaluated project) – actions aimed at a specific target group, which contribute to the achievement of the planned outputs and outcomes, and then to the achievement of the project objectives.

Example: Training 20 young mothers (who were unemployed at the start of the project and had to be supported by social welfare benefits) in dyeing fabrics in town X.

 

Generalisation – referring the findings obtained in the study of the sample to the entire population (i.e. also to units that have not participated in this research). Based on the results of the sample, we conclude – with a given level of probability – that the findings (characteristics / opinions) for the entire population are similar.

 

Impact – the effects of the project’s activities, outputs and outcomes, contributing in the long term (alongside other possible projects / interventions and factors) to changes affecting a wider community than the direct recipients of the project.

Example: Improving the living conditions of children raised by women who found a job thanks to the professional competences acquired in the project.

 

Impact indicator – informs about the delayed effects of the project that go beyond its immediate recipients. These effects usually cover the social environment / community of the project beneficiaries and may result from the accumulation of various factors (including non-project activities).

Example: The percentage of project beneficiaries whose household did not have to be supported by social welfare benefits 18 months after the end of the project.

 

Logic matrix of the project – a table used to determine the methodology of measuring selected project elements such as output, outcome or impact. The matrix defines the indicators by which a given element will be measured, the measurement method, and assumptions / conditions of achieving the project’s effects (see chapter 2.1).

 

Logic model of change – a comprehensive tool for project planning and subsequent management of its implementation. It depicts the logic of intervention linking the individual elements of the project with cause-and-effect ties (see chapter 2.1).

 

Monitoring – the ongoing collection, analysis and documentation of information during project implementation, concerning the progress of implementation in relation to the planned schedule of activities and budget.

 

NEET (not in employment, education or training) – a group, mainly of young people, who remain outside the spheres of employment and education, i.e. people who, for various reasons (discouragement, life crisis, disability, parental or family responsibilities), do not study, work or undertake vocational training.

Objective (general) – expected state or effects of activities conducted within a project, planned to be achieved within a specified time.

Example: Increasing employment by 2022 among young mothers (who were unemployed in 2020 and had to be supported by social welfare benefits) in town X.

 

On the way to achieving the general objective you can have specific objectives (purposes). A specific objective is a planned state that will be achieved as a result of the implementation of certain activities. It should be consistent with the general objective and contribute to its achievement.

Example: Increasing by the end of 2021 the professional competences of young mothers (who were unemployed in 2020 and had to be supported by social welfare benefits) in town X to the level expected by employers in this town.

 

Outcome – direct and immediate effects / changes concerning the beneficiaries, brought about by the implementation of specific project activities.

Example: The growth of project beneficiaries’ competences related to dyeing fabrics.

 

Outcome indicator – informs about the degree of the achieved changes related to the project beneficiaries as a result of their participation in project activities and the use of outputs produced at a particular stage of project implementation.

Example: The number of beneficiaries who have acquired the professional skills of dyeing fabrics.

 

Output – a short-term effect of a particular activity in a material form (of a countable nature), e.g. a thing, an object, an event (of service delivery). These may be goods or services transferred to the project recipients, which are to contribute to the achievement of the planned outcomes.

Example: Training materials, certificates confirming the acquisition of professional qualifications in the field of dyeing fabrics by project beneficiaries.

 

Output indicator – informs about the implementation of activities that resulted in measurable products.

Examples: The number of issued certificates confirming the acquisition of specific professional competences, the number of people who have achieved a certain level of these competences, an increase in the level of social competences according to the selected test, the number of cover letters and CVs prepared by the training participants, the number of textbooks prepared.

 

Population – the group of individuals (e.g. specific people, organisations, companies, schools, institutions) that are the object of the researcher’s interest.

 

Project (intervention) – a set of activities aimed at producing the intended outputs and outcomes, which, when used by the project’s target group, should bring the planned objectives and impact.

 

Representative sample – a sample that reflects / represents the studied population well and makes it possible to accurately estimate its features through generalisation.

 

Sample selection – selecting from the population cases that will form the sample (smaller part of the population). It is conducted in a specific way (random or non-random) based on the sampling frame, i.e. a compilation (list) of all units forming the population from which the sample is drawn.

I. THE BENEFITS OF EVALUATION

 

There are many ways to understand evaluation. According to the approach applied in the Youth Impact project, the main goal of evaluation is to assess the value of the project’s effects in order to improve them. This assessment is based on evidence about the change caused by the project, collected using the methodology of the social sciences.

Our approach largely refers to impact evaluation in its broad sense (later we use the term impact-focused evaluation to underline that we want to embrace not only experimental and quasi-experimental designs). It is an evidence-based reflection on the real (net) effects of a project. It allows you to understand the factors influencing the ongoing and delayed changes and focus on the sustainability of the achieved outcomes as well as the impact of the project that goes beyond its direct participants. This approach to evaluation allows for the formulation of recommendations supporting project management, which contribute to the effective and efficient implementation of its objectives, as well as the organisation’s mission.

Our approach is also a participatory one, taking special care about the needs of various stakeholders and engaging them in planning and other stages of the evaluation.

Such an approach to evaluation makes it possible to determine the value of a particular project and to understand the reasons for its successes and failures. It is also a good management tool for organisations focused on social mission and other “learning” institutions.

 

BENEFITS OF AN EVALUATION DONE WELL:

  • It allows you to predict difficulties before the start of your project (ex-ante evaluation) or notice problems at every stage of its implementation (ongoing or mid-term evaluation), and also allows you to plan actions minimizing identified risks.
  • It gives advice on how to improve an ongoing or completed project to better meet the needs of its recipients, achieve more useful and durable outcomes, have a wider impact and fulfil the planned objectives using fewer resources.
  • It allows you to assess to what extent the expected effects of the project were really caused by the project activities*. Moreover, it makes it easier to decide whether a particular project is worth repeating, disseminating, or could be adapted to a different target group.
  • It increases the motivation of employees – involving the project team in the evaluation (especially at the design stage and when discussing the evaluation findings) increases their sense of agency and emphasises the relationship between the work performed and the planned goals, the organisation’s mission and the employees’ own values.
  • It increases the competences of employees – from issues related to project management to knowledge of the mechanisms of the changes caused by this project.
  • It increases the level of confidence and cooperation with project partners (also in future projects), thanks to taking into account the perspective of external stakeholders.
  • It makes it possible to demonstrate the achieved results and improves cooperation with grant-giving institutions and sponsors, encouraging them to finance subsequent projects.
Example: When applying for a grant or justifying the need for a project, you can quote the evaluation findings concerning a previous, similar project. Providing reliable data may help you convince funders that your project is worth funding.
  • It serves to promote your organisation.
Example: Evaluation findings, including case studies, can be used on social media to promote the organisation’s activities. These could be stories of young people who, thanks to your support, acquired new competences and then found a satisfying job or successfully run their own business.

 

Overall, evaluation has many benefits. Introducing it to everyday work can be a very useful support for managing an organisation – strengthening credibility and improving its image, educating and motivating staff, raising funds by showing evidence of project impact, and above all, the effective fulfilment of the assigned mission.


* This possibility is provided by impact evaluation, which is described in chapter 2.4.

II. PREPARING FOR THE EVALUATION

 

“You can’t do ‘good’ evaluation if you have a poorly planned program.” (Beverly Anderson Parsons, 1999)

 

2.1. What do you need to know about the project to plan its evaluation?

In the toolkit, we concentrate on impact-focused evaluation. We present practical ways of conducting such an evaluation, concerned primarily with the effects of project activities in terms of the intended change. The subject of our interest is the effects of project activities (outputs, outcomes, impact) and their compliance with the project theory of change (or project theory). The project theory defines the concept of the intended change and the plan of the project, including its objectives, activities, expected outputs, outcomes and impact, the way in which they will be measured, and the resources needed to achieve these effects.

The basic element of the project theory is the logic model of change, which compiles information on what the organisation running the project needs to accumulate (inputs / resources), the work it needs to do (project activities), and the effects it intends to achieve. The logic model of change for a given project is developed according to the following scheme.

 

Diagram 1: Basic logic model of change
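
If it helps to keep the model in a structured, machine-readable form, the chain from Diagram 1 can be captured as a simple data structure. The sketch below is purely illustrative – the field names follow the diagram and the example values echo the fabric-dyeing project from the glossary; it is not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    # Illustrative structure only – the field names follow Diagram 1,
    # not any official Youth Impact schema.
    inputs: List[str] = field(default_factory=list)      # resources to accumulate
    activities: List[str] = field(default_factory=list)  # work to be done
    outputs: List[str] = field(default_factory=list)     # countable products
    outcomes: List[str] = field(default_factory=list)    # changes in beneficiaries
    impact: List[str] = field(default_factory=list)      # long-term, wider changes

# Example values echo the fabric-dyeing project used in the glossary.
model = LogicModel(
    inputs=["trainers", "training room", "dyeing materials"],
    activities=["train 20 young mothers in dyeing fabrics in town X"],
    outputs=["training materials", "certificates of qualification"],
    outcomes=["growth of beneficiaries' fabric-dyeing competences"],
    impact=["improved living conditions of the beneficiaries' children"],
)
print(model.outcomes)
```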

The methods of measuring the project outcomes and the related assumptions are sometimes specified in a separate table called the project logic matrix. The logic model and logic matrix should be part of the project documentation.

In practice, it happens that the logic matrix or even the logic model of change has not been developed or is very selective. A lack of assumptions indicating how you define the success of the project makes it impossible to evaluate the project and thus to verify whether the planned change took place and whether it occurred as a result of the project activities.

 

What to do if there is no logic model of change in the project documentation?

In such a situation, it is necessary to recreate the logic of change behind the project, e.g. based on interviews with the management and project staff, as well as already existing documents such as strategy / project implementation plan, justification for its implementation, application for co-financing, partnership agreement, etc. The following table may help you to reconstruct the logic of the project.

Tool 1: (Re)Construction of Project Logic

The above tabulation of the logic of the project allows you to reflect on the ways of demonstrating the level of the achieved effects (outputs, outcomes and impact). This goal is served by defining the indicators by which you will measure the progress of the project. An indicator is an observable attribute (feature) that enables the phenomenon to be measured. Each indicator has a measure (quantitative or qualitative) which informs about the degree / intensity of the occurrence of this phenomenon. In order to measure the change that has occurred as a result of the project implementation, you should determine the values (level) of a given indicator at the beginning and at the end of the project, i.e. the baseline value and the final value (a simple worked example is shown below Tool 2). It is also good to know the minimum required final value of the indicator, if such a value was defined at the beginning of the project. More information on indicators can be found in the online course (Module 3).

Tool 2: Table Of Indicators Of Project Effects
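
To make the arithmetic behind Tool 2 concrete: the change demonstrated by an indicator is the final value minus the baseline value, compared against the target (the minimum required final value). A minimal sketch with made-up numbers:

```python
# Illustrative numbers only – not from any real project.
baseline = 4    # e.g. beneficiaries employed at the baseline measurement
final = 16      # beneficiaries employed at the final measurement
target = 12     # minimum required final value, if one was defined

change = final - baseline
print(f"Change demonstrated by the indicator: {change}")     # 12
print(f"Minimum required value reached: {final >= target}")  # True
```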

 

More examples of setting targets for indicators in youth employment projects can be found in the Guide on Measuring Decent Jobs for Youth. Monitoring, evaluation and learning in labour market programmes, NOTE 3. ESTABLISHING A MONITORING SYSTEM, pp. 6-9.

 

2.2. When to start developing an evaluation concept and plan?

It is worth developing the concept of evaluation before starting the project or even during its planning, because it allows you:

  • To initiate an in-depth reflection on the logic and coherence of project activities, their translation into project objectives, as well as factors facilitating and hindering their achievement;
  • To plan in advance the collection of information (data) that enables the evaluation questions to be answered – e.g. without a baseline measurement of the knowledge and skills of the training recipients, taken before this activity, it will be impossible to reliably demonstrate the change obtained, i.e. the increase in competences that should take place as a result of the training;
  • To secure appropriate funds for conducting the evaluation and to include in the schedule of project activities those activities that will help to collect the relevant data, analyse them and report them;
  • To plan the collection of information in the most efficient way (the cheapest, fastest, easiest) during or after the implementation of project activities.

It is worth remembering that evaluation is a multi-stage process that must be designed and planned well, and then implemented step by step.

Stages of the evaluation process

  1. Diagnosis of evaluation needs
  2. Conceptualisation and planning
  3. Information collection – research implementation
  4. Data analysis and inference
  5. Reporting
  6. Using evaluation results – implementation of recommendations

 

2.3. How to diagnose the evaluation needs of the project stakeholders

Conceptualisation and planning of an evaluation should not start before identifying who needs the information, conclusions and recommendations from the evaluation, and for what purpose. It is good to begin the diagnosis of evaluation needs with the stakeholders of the project to be evaluated.

Project stakeholders are people / entities (institutions, organisations) involved in various ways in the implementation of a particular project, e.g. its beneficiaries, project team, staff implementing project activities (e.g. trainers, psychologists, career advisors), project partners (cooperating organisations or institutions), sponsors / funders, etc.

The participation of project stakeholders in the evaluation is very important as they are potential allies of the evaluator. They can support the entire evaluation process, including the implementation of recommendations that improve the project. Thanks to the involvement of various stakeholders in the evaluation activities, it is possible not only to improve communication and cooperation with partners, beneficiaries and project staff, but also to convince funders to invest in the project currently being implemented or its next edition. If the stakeholders are interested in the project evaluation then conducting the evaluation in a participatory manner – involving the stakeholders in the entire evaluation process, starting with the diagnosis of evaluation needs – should be much easier.

The best way to diagnose evaluation needs while ensuring a high level of stakeholder participation is to conduct a workshop / group interview with representatives of all entities (organisations, institutions) and groups of people involved in a particular project.

If the recipients of the project are young people (e.g. NEETs) or another group who may have concerns about expressing their opinions in public, you should first hold a separate meeting with these beneficiaries and then invite their representatives to participate in a workshop with the other stakeholders. This type of workshop with young people or other project recipients with a relatively weak social position should be based on values that strengthen the sense of agency of the project beneficiaries (see the example from Participatory evaluation with young people, pp. 7-8).

 

Example Of Workshop With Stakeholders

 

The information gathered during the workshop with the participation of stakeholders should be used to prepare the evaluation concept and plan (see chapter 2.4). Therefore, it is worth summarising the key findings of the diagnosis of stakeholder needs in the two tables below.

Tool 3: Summary Of The Diagnosis Of The Project Stakeholders’ Evaluation Needs

 

Information on the expectations of individual stakeholders regarding the form of presentation and the ways of using the evaluation results will be useful in the planning phase of their dissemination (see Chapter 6.3).

 

2.4. How to design and plan the evaluation

The information collected during the workshop with the stakeholders will be used to prepare the concept and plan of the evaluation. The concept of evaluation, i.e. an idea of how to carry it out, can be prepared in three steps.

Diagram 2: Evaluation Concept

The first and second steps include the following:

  • Subject of evaluation – what do you want to evaluate (e.g. which project or programme),
  • Scope of evaluation – what part of the project will be included in the evaluation, e.g. the entire project or selected elements – particular activities, effects,
  • Purpose(s) of the evaluation – what are you conducting it for, what will you use the evaluation findings for,
  • Type of evaluation – at what stage of the project implementation will you conduct the evaluation; before the commencement of project activities (ex-ante evaluation), during their implementation (mid-term or on-going evaluation), after completing the project (ex-post evaluation),
  • Evaluation criteria – features indicating in what respect the project is being evaluated (e.g. relevance, effectiveness, efficiency, utility, impact, sustainability),
  • Evaluation questions – generally formulated questions regarding issues that are important in terms of assessing the value and quality of the evaluated project,
  • Evaluator – who will perform the evaluation, e.g. a team implementing the project (self-evaluation), an evaluation specialist employed by the organisation implementing the project (internal evaluation) or an external entity contracted by it (external evaluation)*.

*The strengths and weaknesses of the different types of evaluation, distinguished by the position of the evaluator, are discussed in the online course (Module 2).

You can present this information in a table showing your evaluation concept. An example of such a table and its application to a specific project are presented below.

Tool 4: Evaluation Concept Table

 

The third step of developing an evaluation concept requires knowledge of the various research methods and tools presented in Chapter III – Data Collection. For this reason, the part of evaluation planning related to the methodology of collecting data for the evaluation is presented in Section 3.3 (an example of this stage of evaluation design is presented in Tool 6).

Information on the availability of the necessary data, as well as the possibility of obtaining support from respective stakeholders, will be used when planning the evaluation process and estimating the resources necessary to carry it out. The evaluation plan should include such elements as: its schedule (with respective stages), resources necessary to conduct the evaluation (human, time, financial, information), as well as the planned form(s) of the evaluation report.

You can present this information in an evaluation planning table. An example of such a table together with how it is applied to a specific project is presented below.

Tool 5: Evaluation Planning Table

 

As you can see in the table above, information is one of the key assets that must be provided to conduct an evaluation, and there are plenty of data sources which can be useful for this purpose. In the context of youth employment projects, one of the most important areas of intended progress is general and vocational competences. The default source of information on the initial and final level of such skills among project beneficiaries should be the trainers of these competences. Therefore, you should cooperate with the trainers on gathering and using data on competence levels before and after the training. The measurement should use multiple perspectives on the skills of trainees (the trainer’s assessment, the trainee’s self-assessment and a psychometric test) and be coherent and relevant to the content of the training – a minimal illustration of combining these perspectives follows the footnote below. You can find an example of such a tool set in the attachments to this Toolkit. It concerns the 8 key competences “needed for personal fulfilment and development, active citizenship, social inclusion and employment” mentioned in Recommendation 2006/962/EC of the European Parliament and of the Council on key competences for lifelong learning*.


*The Recommendation 2006/962/EC of the European Parliament and of the Council of 18 December 2006 on key competences for lifelong learning refers to the following skills: 1) communication in the mother tongue, 2) communication in foreign languages, 3) mathematical competence and basic competences in science and technology, 4) digital competence; 5) learning to learn, 6) social and civic competences, 7) sense of initiative and entrepreneurship, 8) cultural awareness and expression.
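
As an illustration of combining the three measurement perspectives mentioned above, here is a minimal sketch. The 0-100 scales, the equal weighting and the example scores are assumptions for illustration, not a prescribed scoring scheme:

```python
# Hypothetical pre/post competence scores for one trainee, each
# perspective expressed on a 0-100 scale. The equal weighting of the
# three perspectives is an assumption, not a prescribed scheme.
def combined_score(trainer, self_assessment, test):
    """Average the trainer's assessment, self-assessment and test result."""
    return round((trainer + self_assessment + test) / 3, 1)

before = combined_score(trainer=40, self_assessment=55, test=38)
after = combined_score(trainer=70, self_assessment=75, test=66)
print(f"Competence gain: {after - before} points")  # 26.0 points
```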

 

2.5. How to design impact evaluation

The key distinguishing feature of impact evaluation is the fact that the assessment of project effects takes into account not only the impact of the activities carried out in the project and the outputs produced, but also the influence of external (non-project) factors. To evaluate the real (net) impact of the project, it is necessary to plan and conduct the evaluation in a way that makes it possible to determine whether the implementation of the project caused the intended change, and to what extent that change was influenced by non-project factors.

Conducting an impact evaluation allows you to collect various types of information that are very useful for project development:

1) data on the actual impact of the project on achieving the expected change is key information for deciding whether to repeat, duplicate, improve or discontinue the project, because:

a) non-project factors could have contributed to the change intended in the project, so that the (net) impact of the evaluated project may be lower than indicated by the difference between the final value of the outcome indicator and its baseline value (measured at the beginning of the project),

b) external factors could counteract the change expected in the project, so that the (net) impact of the evaluated project may be greater than the difference between the final and the baseline value of the outcome indicator,

2) information on the diversity and mechanisms of the impact of individual elements of the project on achieving the expected change is very helpful in improving the project,

3) identifying major external factors and the mechanisms of their impact on the intended change can be used to modify project activities so that they better concur with the processes supporting the change and better cope with opposing factors.

Depending on which of these issues is a priority in the evaluation of a particular project, and also depending on the feasibility of obtaining the relevant data, different models (designs) of impact evaluation are used, along with data collection methods adapted to them.

Table 1. Different design approaches for impact evaluation.

Source: Emily Woodhouse, Emiel de Lange, Eleanor J Milner-Gulland. Evaluating the impacts of conservation interventions on human wellbeing: Guidance for practitioners.

Experimental and quasi-experimental evaluation designs are used to determine what portion of the intended change in a project can be attributed to the project activities (the net impact). The measure of the impact of project activities is the difference between the measurements of the indicator before and after the project in the group of its recipients (the change in the test group participating in project activities), adjusted for the impact of non-project factors. The impact of non-project factors is estimated by measuring the change of the outcome indicator in a group of people who did not participate in the project and are as similar as possible to the project recipients (see the sketch after the footnote below).

  • In experimental designs (also called RCTs – Randomised Controlled Trials), people are randomly assigned to the group of beneficiaries of the project (the test group) or to the group not covered by the project (the control group). Random assignment helps to ensure that the two groups do not differ from each other*. Thus, changes in the measured indicators in the control group can be attributed only to external factors, and in the test group – to the combined influence of external factors and the project’s activities.
  • In quasi-experimental designs, there is no random assignment to groups. For the test group which took part in the evaluated project, a control group is selected using non-random methods, but still ensuring that it is as similar to the test group as possible; it performs the same function as the control group in experimental models.

*When the test group or control group is small, stratified random selection should be used (instead of simple random selection) to make sure that the two groups have a similar structure with regard to features which can affect the intended outcome of the project (e.g. the structure of educational attainment should be similar in the control and test groups, otherwise the more educated group may make better progress in achieving the skills which are to be developed in the project under evaluation).
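
A minimal sketch of this “net impact” arithmetic (often called difference-in-differences in the evaluation literature), using made-up indicator values:

```python
# Illustrative figures only. The outcome indicator here is, say, the
# percentage of group members in employment.
test_baseline, test_final = 10, 35        # group covered by the project
control_baseline, control_final = 10, 20  # similar group, not covered

gross_change = test_final - test_baseline            # 25 pp in the test group
external_change = control_final - control_baseline   # 10 pp due to non-project factors
net_impact = gross_change - external_change          # 15 pp attributable to the project
print(f"Net impact of the project: {net_impact} percentage points")
```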

 

In order to apply experimental or quasi-experimental designs, evaluation activities must be coordinated with the evaluated project activities and therefore need to be planned before they are implemented. For example, when you expect a surplus of candidates for project beneficiaries, or when the project will be implemented in several editions and you can organise joint recruitment, you can assign people to groups using random sampling, as in the sketch below. This way you get a randomly selected test group (to be immediately involved in the project activities) and a control group (the people not selected for the current edition of the project). Just after selecting the groups, the baseline measurement should be conducted (and the final measurement in both groups after the project has been completed).
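
Assuming a joint recruitment that produces a surplus of candidates, the assignment itself can be very simple. The sketch below illustrates stratified random assignment (see the footnote above); the candidate names and the education-level strata are invented for illustration:

```python
import random

# Hypothetical surplus candidates, grouped (stratified) by educational
# attainment so that both groups end up with a similar structure.
candidates = {
    "secondary": ["Anna", "Piotr", "Ewa", "Jan"],
    "tertiary": ["Maria", "Tomasz", "Zofia", "Karol"],
}

random.seed(42)  # fixed seed only to make this example reproducible
test_group, control_group = [], []
for stratum in candidates.values():
    shuffled = random.sample(stratum, k=len(stratum))  # random order
    half = len(shuffled) // 2
    test_group += shuffled[:half]     # invited to the current edition
    control_group += shuffled[half:]  # measured, but not covered by the project

print("Test group:", test_group)
print("Control group:", control_group)
```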

If the beneficiaries of your project are chosen by an external institution (e.g. Labour Office), it is also worth checking what selection procedure is used there. If this procedure gives the opportunity to select a control group or comparison group in which the project outcome indicator can be measured, verify it and plan the measurement in this group at more or less the same time as it is carried out in the evaluated project.

An important aspect of impact evaluation is the control of what is known as the spillover effect, i.e. the spread of the impact of project activities outside the test group, in particular to people in the control or comparison group. The risk of the spillover effect is greater the more contact the recipients of the evaluated project have with people from the control or comparison group. Another aspect influencing the scope of the spillover effect is the level of demand for the solutions provided by the evaluated project.

Planning and interpreting an impact-focused evaluation requires the use of the project theory to examine the consistency of the evaluation findings with the project logic (of change) and to verify the impact of alternative factors. Examining the consistency of facts with the project logic focuses on identifying evidence that confirms the cause-and-effect relationships assumed in the project. In this approach, it is crucial to plan as early as possible what kind of data should be collected during the project in order to verify:

– the cause-and-effect relationship between activities, outputs, intermediate and final effects (outcomes, impacts) that make up the project logic of change,

– achievement of successive stages in the cause-effect chain of intermediate effects leading to the outcomes measured by the final indicator (the milestones).

Assessment of the impact of alternative factors is based, in a similar way, on planning for and verifying factors of change – other than the project activities – that could have produced the results expected of the project.

If the evaluated project is a part of a larger programme carried out in different locations or by different organisations, this may provide an opportunity to obtain comparative data that will be used in the impact evaluation based on case study analyses. To use the case-based evaluation design, you should collect information not only about the outcome indicator that you measure in the evaluated project, but also about all important factors that may affect the value of this indicator. The set of such factors should be determined on the basis of the project theory, taking into account the different elements which may influence the intended change.

It is worth remembering that in this model it is possible to use information about projects implemented in the past. Regardless of where the analysed cases come from, it is important to obtain a predetermined set of information from them. The final analysis is based on a table that summarises the data from all analysed cases concerning the occurrence of the factors that may affect the intended change of the outcome indicator and, of course, the outcome indicator itself.

Table for summing up the findings from case studies analysis – practical example

In the table above you can see summarised information on 4 cases where the outcome (having a job or being in education or training one year after project completion) was monitored against three factors. Two of them were project stimuli (extensive training in social competences, and vocational training), while the third was external (supported employment for six months right after the end of the project). The analysis showed that it was the extensive training in social competences that caused the intended outcome.
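
The sketch below reconstructs this comparison in code. The truth values for the four cases are illustrative, following the description above; the rule applied is that a candidate cause should be present in every case where the outcome occurred and absent in every case where it did not:

```python
# Each case records which factors occurred and whether the outcome
# (job / education / training one year later) was achieved.
# The truth values are reconstructed for illustration.
cases = [
    {"social_training": True,  "vocational_training": True,  "supported_employment": False, "outcome": True},
    {"social_training": True,  "vocational_training": False, "supported_employment": True,  "outcome": True},
    {"social_training": False, "vocational_training": True,  "supported_employment": True,  "outcome": False},
    {"social_training": False, "vocational_training": False, "supported_employment": False, "outcome": False},
]

for factor in ("social_training", "vocational_training", "supported_employment"):
    # A candidate cause should co-vary with the outcome in every case.
    if all(case[factor] == case["outcome"] for case in cases):
        print(f"Factor consistent with the outcome in all cases: {factor}")
# -> social_training
```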

 

Participatory design is an underrated but popular model of impact-focused evaluation. It does not guarantee as much reliability and precision as experimental or quasi-experimental designs, nor is it as convincing as a strict case study analysis, but it can still be useful, especially in small projects. In participatory design, you refer to the perceptions of the participants in the evaluated project and, on the basis of the data obtained from them, you evaluate the impact of the project. The methodology of collecting data is therefore of great importance, because project beneficiaries tend to adjust their opinions to what they think the researcher might want to hear, especially if data collection is conducted by someone from the project staff.

  • One of the participatory evaluation designs is called reflexive counterfactuals. Its advantage is that it can be used after the end of the project. On the other hand, it is exposed to the previously described risks, such as the influence of the researcher. As part of reflexive counterfactuals, the beneficiaries are asked to compare their current situation with their situation before they participated in the project and to describe what has changed for better and for worse. Then, they rate the relative importance of the particular benefits and costs, to select the ones considered most important. Using different research techniques, it is also possible to ask about the causes of particular changes and find out which of them were associated with the project.
  • Another technique for participatory impact analysis is MSC (Most Significant Changes). It is based on the generation and in-depth analysis of the most significant stories of change in the lives of project beneficiaries. These stories of change are observed and noted by various project stakeholders (including the beneficiaries themselves). The properties of this research technique allow it to be used after the end of the project.

Finally, the possibility of conducting an impact evaluation based on statistical methods should also be mentioned. The basis here is the analysis of the correlation (coexistence) of the outcome indicator and the activities undertaken in the evaluated project*. Such analyses are performed on large data sets, which makes this type of evaluation of little use for organisations running projects for a relatively small group of recipients**.

More information on impact-focused evaluation can be found in the online course (Module 3).


* In such analyses, the basic method is regression, in which the strength of the relationship between the outcome indicator and the indicators of the activities carried out within the evaluated project is examined, with statistical control (exclusion) of the impact of confounding factors (see the sketch below).
** The problem that hinders the use of statistical methods of impact evaluation by small and medium-sized organisations is, apart from the scale of the projects, the need to use advanced statistical software and qualified analysts.
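
To make the first footnote concrete, here is a minimal sketch of such a regression on a toy dataset (a linear probability model; the variable names, the invented data and the use of the statsmodels library are assumptions for illustration, not part of the toolkit):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy dataset: employment 12 months after the project, a participation
# flag, and years of education as a confounding factor to control for.
df = pd.DataFrame({
    "employed":     [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
    "participated": [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    "education":    [12, 10, 14, 12, 11, 13, 10, 15, 12, 9, 14, 11],
})

# Linear probability model: the coefficient on `participated` estimates
# the association between participation and employment, with education
# statistically controlled. Real analyses need far larger samples.
model = smf.ols("employed ~ participated + education", data=df).fit()
print(model.params["participated"])
```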

III. DATA COLLECTION

 

3.1. What are the major types of evaluation research methods?

In order to estimate the value and quality of the project in relation to the chosen criteria and to answer the evaluation questions, you need to collect the necessary information correctly. Research methods and tools serve this purpose. A research method is a specific way of collecting information – qualitative or quantitative – with the use of specially developed tools, such as interview scenarios, observation sheets or questionnaires. Let’s look at the differences between these methods and research tools.

Qualitative methods enable the collection of data in an in-depth and flexible manner, but they do not allow you to assess the scale of the studied phenomena, as these methods cover only a small number of people from the groups involved in the project (e.g. selected recipients). By contrast, quantitative methods are used in the case of large groups consisting of at least several dozen people. In the case of more numerous groups (e.g. more than 400-500 people), these methods enable conclusions drawn from the survey of a representative, randomly selected sample to be generalised to the entire population, i.e. the community that is of interest to the researcher, including people who did not participate directly in the particular study. The sampling must be carried out in a way that ensures the sample is representative, i.e. maximally similar, in its various socio-demographic characteristics, to the population from which it was selected.
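
To give a sense of the numbers involved, here is the standard worked calculation of the margin of error for a proportion estimated from a simple random sample at 95% confidence (the sample sizes are illustrative):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of size n (worst case at p = 0.5)."""
    return z * sqrt(p * (1 - p) / n)

for n in (50, 100, 400):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} percentage points")
# n = 50: +/- 13.9, n = 100: +/- 9.8, n = 400: +/- 4.9
```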

Comparison Of Qualitative And Quantitative Methods Of Evaluation Research

 

Both of these types of methods have their strengths and weaknesses, therefore you should always use both qualitative and quantitative methods in an evaluation study. This approach is in line with the triangulation principle, aimed at ensuring the high quality of the information collected. Triangulation means using various sources of information, types of collected data and analytical techniques, theories explaining the identified relationships / mechanisms, as well as people conducting the evaluation (whose competences should complement each other). By providing diversity of these elements, triangulation enables:

  • comprehensive knowledge and understanding of the studied object,
  • taking into account various points of view and aspects of the phenomenon studied,
  • supplementing and deepening the collected data,
  • verification of collected information,
  • increasing the objectivity of formulated conclusions.

 

3.2. What methods and tools are typically used in evaluation research?

To facilitate the choice of methods and tools most appropriate for a particular evaluation, below are the characteristics of the most popular of them:

  1. Qualitative methods
    • desk research,
    • individual in-depth interviews (IDI),
    • focus group interviews (FGI),
    • observation,
    • case study.
  2. Quantitative methods (surveys)
    • surveys conducted without the participation of an interviewer – self-administered paper surveys, computer-assisted web interviews / online surveys (CAWI), central location (surveying all respondents simultaneously),
    • questionnaire interviews conducted with the support of a pollster – paper and pen interviews (PAPI), computer-assisted personal interviews (CAPI) and computer-assisted telephone interviews (CATI).
  3. Active / workshop methods (mixed, i.e. qualitative and quantitative).

 

3.2.1. DESK RESEARCH

In the case of desk research, existing data is used, i.e. data that was generated regardless of the actions taken by the evaluator.

The existing data includes internal data (generated for the needs of the evaluated project) and external data:

  • Internal data is information created during the preparation and implementation of project activities (e.g. project application, training scenarios, attendance lists, contracts, photos, videos and materials about the project posted on the website, posts and responses on social media). In the case of training projects for young people looking for a job, these may also be the results of measuring the competences of the beneficiaries at the beginning and at the end of participation in the training (knowledge tests, skills tests, attitudes tests, etc.)
  • External data is information that may relate to the studied phenomenon, processes or target group, but has been collected independently of the evaluated project (e.g. statistics, data repositories, reports, articles, books, videos, and other materials available on the Internet). In the case of the evaluation of employment projects, it is worth using information on similar projects, as well as data available to labour offices, social insurance institutions, national statistical offices, regarding the employment of young people living in a particular town.

Documentation analysis is the basic method of collecting information on a given project, also providing some knowledge about the needs of its recipients and the context of the evaluated project.

 

CONDITIONS OF APPLICATION:

Public institutions provide administrative data in accordance with the principle of transparency in the operation of public institutions and civic participation (open government concept). However, it is important to assess the data reliability and accuracy based on the methodological information provided in the source documentation.

 

ADVANTAGES:

  • accessibility (especially regarding information available on the internet),
  • large variety (you can use any data / materials related to the conducted evaluation),
  • no costs – most documents and data are available free of charge,
  • no evaluator’s effect on data in the case of external data.

DISADVANTAGES:

  • different levels of data credibility – you need to take into account the credibility of the source and the context of data acquisition (under what conditions, who collected and analysed the data and why),
  • restrictions on the access and use of internal information due to the protection of personal data, copyright and property rights.


3.2.2. INDIVIDUAL IN-DEPTH INTERVIEW (IDI)

An individual interview takes the form of a direct conversation between the interviewer and the respondent, usually conducted using a scenario. The interview allows you to obtain extensive, insightful and in-depth information, get to know the opinions, experiences, interpretations and motives of the interviewee’s behaviour, examine facts from the interviewee’s perspective, and gain a better understanding of their views.

IMPORTANT TIP

The language of the interview should be adapted to the respondent. In interviews (especially with young people), use simple language and avoid specialist vocabulary (e.g. project jargon), which may cause the questions asked to be misunderstood and may intimidate the interviewees.

 

CONDITIONS OF APPLICATION: Individual interviews should be conducted in quiet rooms that guarantee discretion. Recording the interview is common practice, but the respondent will not always agree to it – in such cases the researcher should take notes during the interview and complete them immediately after the meeting. It is recommended that the interview be conducted by an external expert, to avoid situations in which the interviewee feels uncomfortable expressing honest opinions.

 

ADVANTAGES:

  • the possibility to discuss complex and detailed issues,
  • better understanding of the interviewee’s point of view (“getting into his/her shoes”),
  • getting to know facts in the situational context,
  • flexibility – the possibility to adapt to the interviewee and to ask additional questions not included in the scenario.

DISADVANTAGES:

  • unwillingness of some interviewees to express honest opinions due to lack of anonymity,
  • the impact of the interviewee’s personality traits on the findings obtained, e.g. difficulty in obtaining information from people who are taciturn, shy or introverted.

RESEARCH TOOL: the interview may be supported by an interview scenario, containing a list of questions or issues to be discussed. The interviewer can change the order of questions or add some questions during an interview if it is needed to better understand the issue.

Example Of IDI Scenario For The Project Team

Individual IDI Scenario

 

3.2.3. FOCUS GROUP INTERVIEW (FGI)

A focus group is a conversation between about 6-8 people, supported by a moderator who gives the group issues for discussion and facilitates its course. FGI participants are selected according to specific assumptions set by the researcher and according to their knowledge of the studied issues.

IMPORTANT

In the case of young people, the discussion should be divided into shorter forms, involving all the participants, so that they do not get bored too quickly. It is worth using multimedia tools, elements of gamification or non-standard solutions, e.g. a paper cube with questions, thrown by the participants themselves. It is helpful to write down a group’s opinions on a flipchart and record the group discussion.

 

CONDITIONS OF APPLICATION: The basic condition for the success of a group interview is correctly selecting people with specific information that they are ready to share. It is important to guarantee that the participants are comfortable by organising the interview in a quiet room of the right size with comfortable seating, a large oval / square table and a flip chart.

 

ADVANTAGES:

  • learning about different points of view, taking into account different opinions,
  • mutual verification and supplementation of information about the facts discussed by different persons,
  • the opportunity to observe interactions between participants,
  • obtaining relevant information from several people in a relatively short time.

DISADVANTAGES:

  • dynamics of group processes, including pressure on group consensus / cohesion, may lead to minority opinions not being disclosed, e.g. due to the group being dominated by a natural peer group leader,
  • the risk of conflicts or bad interpersonal relations being transferred into the group, reducing the effectiveness of the research and the reliability of the findings obtained,
  • organisational difficulties (the need to gather a group of people at a particular place and time and to provide a properly equipped room)*.

RESEARCH TOOL: the tool used by the moderator for this method is an FGI scenario, which includes the principles of group discussion, specific issues / questions and guidelines regarding various forms of activity in which the moderator is to involve the participants.

FGI Scenario


*Both IDIs and FGIs can be conducted by remote means using online communicators.

 

3.2.4. OBSERVATION

This method is based on careful observation of, and listening to, the studied objects and situations (phenomena, events). The observation may be participant, partially participant or non-participant, depending on the degree of involvement of the researcher, who may act as an active participant in the events he or she observes or as an external, uninvolved observer. The observation can be carried out in an overt, partially overt or covert way*, i.e. all participants of the event may know that they are being watched, or only selected persons (e.g. the trainer and / or the training organiser), or only the observer.

 

CONDITIONS OF APPLICATION: if the observation is non-participant, the observer should not come into contact / relations with the people being observed as this carries the risk of affecting the course of the observed events and behaviours.

 

ADVANTAGES:

  • providing information about a particular event / process during its course,
  • reporting facts without their interpretation by the participants (examination of actual behaviour, not declarations),
  • facilitating the interpretation of investigated events,
  • the opportunity to learn about phenomena usually hidden or unnoticeable or that people are reluctant to discuss.

DISADVANTAGES:

  • possible influence of the researcher on the course of events (the respondents’ awareness that they are being observed may change their behaviour),
  • limited scope of observation range, difficulty in accessing all events,
  • the risk of subjectivity (the researcher may fall back on stereotypes, or may perceive and interpret events in favour of the observed group).

RESEARCH TOOL: The observation may be conducted using a research tool called an observation sheet. Its use focuses the observer’s attention on selected issues and enables the recording of important information (e.g. the behaviour of people participating in the observed events), which may be not only qualitative but also quantitative (e.g. a checklist).

Training Observation Sheet


* With regard to evaluation studies, we do not recommend covert observation, i.e. one that is not known to the people who are its subject.

 

3.2.5. CASE STUDY

This is an in-depth analysis of the studied issue using information from different sources and collected by various methods. Its findings can be presented in a narrative form. The analysed “case” could be a person, group of people, specific activities, a project or a group of projects.

The case study is used to:

  • get to know thoroughly and understand a particular phenomenon along with its context, causes and consequences,
  • illustrate a specific issue using a realistic example with a detailed description,
  • generate hypotheses for further research,
  • present and analyse best / worst practices to show what is worth doing and what should not be done.

CONDITIONS OF APPLICATION: This method requires time to collect and analyse various data regarding the phenomenon / object being studied, its context, processes, and mechanisms. Case studies are best used as a complementary method to other research methods.

 

ADVANTAGES:

  • is a source of comprehensive information on a given topic,
  • uses different points of view, which gives the description and analysis a wider perspective,
  • takes into account the context of the phenomena studied.

DISADVANTAGES:

  • usually requires the use of various sources of information, sometimes difficult to access,
  • it requires a lot of work and is time-consuming,
  • incomplete data results in low credibility of the described case.

 

3.2.6. SURVEYS CONDUCTED BY INTERVIEWERS

Quantitative methods are standardised measurement methods. Standardisation enables quantitative data to be collected and counted in a unified way, and also enables their statistical analysis. Standardisation covers:

  • Research tool (interview questionnaire) – the order, content and form of questions put to respondents,
  • The manner of recording respondents’ responses by selecting one option (on the scale) or several options from the “cafeteria” (a set of ready answers),
  • Behaviour of interviewers (pollsters) who are obliged to follow the instructions contained in the questionnaire during the interview.

Respondents’ opinions are transformed into numbers and saved in the database. Then, this information is analysed using statistical methods.
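
A minimal illustration of this coding step; the answer scale and its numeric codes are assumptions for illustration:

```python
# Mapping a closed question's answer options to numeric codes.
scale = {"definitely not": 1, "rather not": 2, "hard to say": 3,
         "rather yes": 4, "definitely yes": 5}

answers = ["rather yes", "definitely yes", "hard to say", "rather yes"]
codes = [scale[a] for a in answers]
print(codes)                     # [4, 5, 3, 4]
print(sum(codes) / len(codes))   # mean rating: 4.0
```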

Questionnaire interviews are conducted by trained pollsters, who read out the questions from the questionnaire to the respondents and write down the answers obtained. The following techniques are used for this type of research:

  • Paper and Pencil Interview (PAPI),
  • Computer-Assisted Personal Interview (CAPI),
  • Computer-Assisted Telephone Interview (CATI).

 

3.2.6.1. Paper And Pencil Interview (PAPI) and Computer-Assisted Personal Interview (CAPI)

Both of these techniques are field-based and involve direct contact between the respondent and the pollster, using a paper questionnaire (PAPI) or an electronic version displayed on a laptop or tablet (CAPI). The pollster reads out the questions included in the questionnaire and then marks the answers given by the respondent.

 

CONDITIONS OF APPLICATION: suitable for a wide range of topics; requires a direct, face-to-face (F2F) meeting between the interviewer and the respondent. The best place for the interview is one isolated from noise and the presence of third parties (in home / work conditions, make sure that bystanders, such as family members or colleagues, do not influence the respondents’ answers).

 

ADVANTAGES:

  • personal, close contact with respondents (the possibility to observe non-verbal signals, respond to misunderstanding of the question or tiredness of the respondent),
  • greater readiness of respondents for a longer interview and more difficult questions than during CATI,
  • with CAPI data is automatically saved during the interview.

DISADVANTAGES:

  • higher costs, including time and cost of travel and arranging a personal meeting with the respondent,
  • lack of a sense of anonymity of the respondent,
  • uncontrolled influence of the pollster on the respondent’s answers (the interviewer effect*),
  • with PAPI, the interviewer must manually enter the data from the questionnaire into the database after the interview, which is time-consuming, adds costs and involves the risk of mistakes.

* This is the influence that the interviewer exerts on the respondent during the survey. The respondent unconsciously interprets the interviewer’s social characteristics (e.g. gender, age), assuming what is expected of him/her. The interviewer may also unknowingly send signals to the respondent suggesting the “right” answers.

 

3.2.6.2. Computer-Assisted Telephone Interview (CATI)

This type of interview is carried out by phone. The interviewer reads the questions displayed on the computer screen, and after receiving the answers marks them in the electronic questionnaire on his/her computer.

 

CONDITIONS OF APPLICATION: studying established opinions and attitudes, using questions that do not require longer reflection, due to the short duration of this type of interview (max. 10-15 minutes) and its specific channel of transmission and reception of information (the respondent cannot re-read the questions several times at their own pace).

 

ADVANTAGES:

  • shorter time and lower cost of reaching the respondent compared to face-to-face interviews (PAPI, CAPI),
  • time flexibility (the possibility to adjust the interview time to the respondent’s preferences, to stop the interview and continue it at a convenient time for the respondent),
  • easy management and control of pollsters’ work,
  • automatic saving (coding) of data during the interview.

DISADVANTAGES:

  • possible difficulty in obtaining respondents’ phone numbers (due to the lack of access and / or protection of personal data), and in the case of employers, no personalised contacts (having only the reception / headquarters phone numbers),
  • interview time limited to 10-15 minutes (due to the limited concentration and short duration of respondents’ involvement),
  • the tendency of respondents to choose extreme answers, or the first and last points on the scale (resulting from the specific channel of information transfer, which enhances the ‘primacy effect’ and the ‘recency effect’).

 

3.2.7. SELF-ADMINISTERED SURVEYS

In self-administered surveys, the respondents read and mark the answers in the questionnaire on their own (without the pollsters’ participation).

 

CONDITIONS OF APPLICATION: these surveys can be carried out as a paper or online questionnaire (i.e. Computer-Assisted Web Interview – CAWI). In the case of the latter, respondents receive a link to the website with the questionnaire which they can complete on a computer, tablet or smartphone. After answering, the data is sent to the server where it is automatically saved in the database.

A very effective way of collecting quantitative data is the central location technique, in which questionnaires are filled in by people who are in one room at the same time, e.g. after the completion of a training session, workshop or conference. It is necessary to ensure that the respondents fill in the questionnaires themselves (without support from other people).

 

ADVANTAGES:

  • short time it takes to obtain information (especially in the case of a central location),
  • lower cost compared to questionnaire interviews conducted by pollsters,
  • sense of anonymity in people completing the survey,
  • no interviewer effect.

DISADVANTAGES:

  • respondents’ motivation to complete the questionnaire may decrease with no interviewer presence,
  • lack of control over the process of completing the survey*,
  • risk of consulting responses with other people**.

PRACTICAL TIP

The survey questionnaire must:

  • be short, easy, visually attractive to encourage a response,
  • have all necessary explanations, which in other methods are given by the interviewer,
  • have clear instructions (in the paper version) or algorithms (in the electronic version) leading the respondent to the relevant questions (based on previous answers, irrelevant questions are filtered out and omitted).

 

Questionnaire For Training Participants


* Instead of the right respondent, the survey may be completed by another person, which disrupts the representativeness of the sample.
** Especially in the case of a central location conducted without the researcher’s supervision.

 

3.2.8. ACTIVE / WORKSHOP METHODS OF GROUP WORK WITH YOUNG PEOPLE

 

Below we present additional active methods of collecting data (mainly qualitative), which can be particularly useful in group work with young people: these methods are engaging, they integrate the group, facilitate cooperation and support the development of soft skills.

Active methods are workshop methods of collecting information that can complement the “classic” methods of evaluation research. They allow you to get quick feedback on a particular action, learn about the ratings, feelings and impressions of the participants as well as develop recommendations. These methods are worth using during workshops, training or conferences, in order to make the meeting more attractive, get to know the participants and better adapt the project activities to their needs.

 

ADVANTAGES:

  • speed – you receive instant feedback during the classes / meetings,
  • casual atmosphere,
  • the projective nature of tasks / questions makes it easier to formulate critical opinions and propose new solutions,
  • possibility to jointly collect qualitative and quantitative data,
  • stimulating self-reflection,
  • a positive impact on the well-being of participants (satisfying the need for expression, acceptance, integration).

DISADVANTAGES:

  • you cannot generalise the obtained opinions to a wider community (not participating in the meeting),
  • the need for an experienced trainer / moderator to moderate / facilitate,
  • the lack of anonymity of the participants in the case of group reporting and discussion (threat to mental well-being and group relations for people who are particularly vulnerable or have a weak position in the group).

Below you can find examples of active methods implemented in the form of a workshop.

 

CLOTHESLINE

The purpose of this tool is to get to know the expectations of the project audience. It is a visual method of collecting qualitative data.

Each participant receives drawings with clothes (e.g. shirt, underwear, trousers, socks), which symbolise the type of expectations they have towards the project – they may be, for example, hopes, fears, needs, suggestions, etc. Participants are given sufficient time to reflect and complete individual drawings / garments. After writing down their ideas, each of them “hangs their clothes” on a string hung or drawn in the room. Participants can read their expectations aloud and look at others’ “laundry”.

 

TELEGRAM

This tool allows you to quickly summarise part of the meeting (workshop, training) to learn about the mood in the group.

The participants are asked to think about a particular fragment of the classes and describe their reflections with three words: positive, negative and summative (e.g. intense – tiredness – satisfaction). Each person reads their words, which allows for a joint summary of the activities (you can write them down on post-its and stick them on a flipchart, etc.).

 

HANDS

The purpose of this tool is to find out opinions on selected aspects of the project or a part of it (e.g. a training, an internship), as well as to summarise the course and effects of the classes. People participating in the workshop receive sheets of paper on which they draw the outline of their hand. Each finger is assigned one assessment category, e.g.:

  • On the thumb – what was the strongest / best side of the training / project,
  • On the index finger – what I will tell my friends about,
  • On the middle finger – what was the weakest point of the training / project,
  • On the ring finger – what I would like to change (element needing improvement),
  • On the little finger – what I have learned or found out.

Participants enter their opinions on each of the fingers in accordance with the above categories. The exercise can be used to find out about the opinions of individuals and / or for group discussion.

 

EVALUATION ROSE

This method is used to gather feedback on many aspects of a project / activity at the same time. It is a visual method that allows you to collect quantitative data – assessments of various aspects of the evaluated object on a common scale.

Participants receive cards with an “evaluation rose” drawn on them. The drawing is inspired by the “wind rose” – instead of compass directions, it presents various aspects of the evaluated object (e.g. the usefulness of the training, the attractiveness of the way the content was delivered, whether an appropriate amount of time was spent on the training). Divide the axes into sections and assign selected values to them (e.g. a scale of 1-5, where 1 is the lowest score and 5 the highest). Participants are asked to mark their assessment on each axis of the “evaluation rose”. Then you can connect the points and get a visually attractive picture of the opinions (the final effect resembles a radar chart).
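If the scores are collected electronically, such a chart can be produced with a few lines of code. Below is a minimal sketch in Python using matplotlib; the aspect names and scores are invented for illustration.

```python
# A minimal sketch of turning "evaluation rose" scores into a radar chart.
# The aspects and scores below are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

aspects = ["Usefulness", "Attractiveness of delivery", "Time allocated",
           "Trainer's preparation", "Materials"]
scores = [4, 5, 3, 4, 4]          # one participant's ratings on a 1-5 scale

# Spread the axes evenly around the circle and close the polygon
# by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(aspects), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(aspects)
ax.set_ylim(0, 5)
ax.set_title("Evaluation rose (radar chart)")
plt.show()
```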

 

TALKING WALL

The purpose of this method is to gather opinions on the value of a particular project activity or the entire project. Thanks to its application, you can obtain qualitative data (types of opinions) and quantitative data (how many people share a particular opinion).

Hang five large sheets of paper on the wall. On each of them, put a question about the conducted activities, e.g.:

  • Sheet 1: What new things did you learn during the training?
  • Sheet 2: How will you use the knowledge acquired during the training?
  • Sheet 3: What did you like the most about the training?
  • Sheet 4: What did you like least about the training?
  • Sheet 5: What would you change in this training?

Participants write down their answers on each sheet or – if the opinion already appears there – add a plus / dot next to it. At the end, the facilitator summarises the entries and encourages the group to discuss them and develop recommendations. This form of collecting opinions encourages openness: participants gain a sense of agency and overcome their reluctance to speak in public.

 

RUBBISH BIN AND SUITCASE

With this method, you can get a summary of a training or another project activity. It allows you to collect information on elements that participants found useful, redundant or missing.

Draw a suitcase, rubbish bin and sack on the blackboard / flipchart. Each of the figures symbolises one category of opinion about the evaluated activity:

  • Suitcase: “What do I take with me from the training?” (what will be useful to me, what will I use in the future)
  • Rubbish bin: “What was unnecessary during the training?” (what is not useful to me, what was redundant),
  • Sack: “What was missing?” (what should be added to the next training).

Then you can ask the participants to speak or write down their opinions on sticky notes or directly on the pictures on a flipchart.

 

PRACTICAL TIPS FOR CONDUCTING GROUP ACTIVITIES

It is good for the participants to sit in a circle so that everyone can see each other. To increase their involvement, you can propose that they themselves indicate the next person to talk, e.g. by throwing a ball (this solution can be used provided that no one in the group is discriminated against). Oral statements should be noted down – this can be done by the person conducting the classes while they are taking place (e.g. on the blackboard, flipchart) or by their assistant.

 

3.3. How to choose appropriate research methods

Research methods must fit well with the evaluation concept and plan. To make the right choice, consider whether the methods are relevant to:

  • The purpose, subject, scope and type of evaluation, as well as the criteria and evaluation questions – will these methods provide you with the information necessary to answer your evaluation questions?
  • The data sources from which you plan to obtain information – will the methods be appropriate for obtaining information from the groups that will take part in the evaluation research?
  • The characteristics of the interviewees / respondents – do the methods take into account the group’s size, perceptive capabilities, communication abilities, health condition, etc.?
  • The circumstances of the data collection – will all the necessary data and interviewees / respondents be available at a particular moment? Will the chosen method suit the place of data collection?
  • The resources you have access to – does the method require qualified or independent researchers and other resources (organisational, technical, financial, time)? Will you be able to apply the method on your own with the resources at your disposal?

Knowledge of research methods (quantitative and qualitative) and related tools will help in preparing the second part of the evaluation concept (see chapter 2.4, tool 4), which will be supplemented with methodological issues. This element enables you to gather information to answer evaluation questions.

Tool 6: Logic Matrix Of The Evaluation Research

 

3.4. How to design research tools

A common mistake is to start an evaluation by creating research tools, e.g. a questionnaire for project recipients. You must remember that you will not be able to choose the right research methods or prepare the right measurement tools (e.g. scenarios, questionnaires, observation sheets) in isolation / detached from the overall concept of evaluation. Therefore, start constructing research tools after determining:

    • The subject, scope and purpose of the evaluation,
    • Evaluation criteria and questions,
    • Studied groups of people and research methods.

Without referring to the above elements, you are not able to create correct research tools, because you may include questions that are unrelated to the purpose of the research, making it impossible to answer evaluation questions and respond to evaluation criteria. “Bad” tools contain useless questions, are overloaded or incomplete, do not provide relevant information and do not allow for the formulation of meaningful recommendations.

The questions included in the research tools are a particularisation of the evaluation questions. Remember that evaluation questions are questions the evaluators ask themselves, not the respondents! These two types of questions should not be confused, as they are formulated in languages adjusted to the needs of:

  • Evaluators / evaluation stakeholders → evaluation questions,
  • Studied groups of persons (interviewees, respondents) → questions in research tools.

If you are not sure whether a particular question should be put to the interviewees / respondents, consider whether they will be able to answer it, and the information obtained will allow you to answer the evaluation questions and formulate useful recommendations.

 

HOW TO ASK QUESTIONS

  • The number of questions included in the tools should be appropriate to the purpose and duration of the research.
  • Research tools should have a transparent structure, with the main issues identified (e.g. “reasons for joining the project”, “assessment of different types of support”, “effects of participation in the project”). Topics should be grouped thematically (e.g. organisational issues).
  • Questions should be asked in a specific order. Put preliminary questions (relatively easy) at the beginning of your tool. They should be followed by introductory questions in the subject (not very difficult), then main questions (key for the purpose of the research). Put the most difficult questions in the middle of the tool. Finally, ask summary and closing questions.
  • Questions should be asked in a logical order that cannot surprise or confuse the research participants. Each question should follow on from the previous one or – in the case of an interview – refer to the respondent’s statements.
  • The language of an interview should be easy to understand: use sentences that are as short as possible and language familiar to the research participants – no foreign words, specialised terminology, jargon or abbreviations.
  • Questions should be formulated precisely – e.g. there should be no doubt about what period of time they relate to (don’t ask “whether recently…”, but “whether in the last week / month / year…”).
  • Do not ask about several issues in one question (“what are the strengths and weaknesses of the project?”) and do not use negative questions (“shouldn’t you…”, “don’t you prefer…”). Each of these errors makes it difficult to understand the questions and to interpret the answers.
  • Questions and proposed answers must not touch on issues that are sensitive for the research participants – they must not lead to the disclosure of traumatic experiences or to declarations of behaviour or beliefs contrary to the law or morality. When anonymity is not guaranteed, do not ask about property status, family matters or health issues.
  • Do not ask questions suggesting an answer – do not present any of the options as being in accordance with the rule of law or morality, do not refer to the authorities or the opinion of the majority.

The differences between quantitative and qualitative research tools, the structure / construction of scenarios and questionnaires and the most common mistakes in their design are discussed in the online course.

IV. CONSIDERATIONS WHEN EVALUATING PROJECTS AIMED AT YOUNG PEOPLE AGED 15-24

 

When undertaking the evaluation of projects aimed at young people aged 15-24, you should take into account that people of that age are different from adults, mostly because of their legal situation, living and technological conditions, and psychological and social needs related to the intensive development processes on the verge of adulthood.

 

4.1. What are the standards of conducting research on young people?

The United Nations Convention on the Rights of the Child and many additional provisions in individual countries guarantee special legal protection for persons under the age of 18. According to the law, a person under the age of 18 is a child. Although in most countries one acquires certain rights at the age of 15 (for example the right to choose one’s school, the right to take up work), a minor’s participation in YEEAs projects as well as in various types of research requires the consent of their parent or legal guardian.

 

4.1.1. Consent for a minor’s participation in evaluation research

  1. Consent for participation in evaluation studies from both the minor and his/her parent or legal guardian must refer to the specific research (name of the research or evaluated project and the entity or entities conducting it).
  2. The person giving consent for a minor’s participation in the research should receive all the necessary information, such as:
    1. The purpose of the research and how the findings will be used,
    2. The scope and method of collecting information to be obtained from the research participant, including whether the research requires multiple contact with the participant, especially a long time after the first round of research,
    3. Assurance of anonymity and protection of confidentiality of data obtained about the participant in the research,
    4. Information about the right to refuse to participate in the research and to withdraw from participation at any stage.
  3. It should also be remembered that in EU countries it is necessary to obtain consent for the processing and storage of personal data.
  4. If sound or video recording devices are to be used, explicit consent for recording must also be obtained.
  5. Examples of documents used to obtain consent for a minor’s participation in research are included in the Annexes (Annexes 1 and 2).

It is worth obtaining such consent at the beginning of the evaluated project, because it can be combined with the more general consent for a minor’s participation in the project (e.g. in the same document).

 

4.1.2. Protection of minors in the ethical codes of professional researchers

The basic guidelines for conducting research among people under 18 are:

  • Obtaining informed consent (described above) from the minor and their legal guardian,
  • Providing research participants with a sense of security (e.g. the researcher does not attempt to make first contact with minors without the presence of an adult responsible for the child – a teacher, guardian or parent; the person collecting the information carries documents confirming their status as a researcher; and the training and experience of those conducting the research guarantee both safety and a manner of carrying out the research appropriate to the specificity of young people),
  • Ensuring that all the information provided, including the questions put to the interviewees / respondents, can be understood (it is helpful in this respect to test quantitative tools on a small scale before applying them and to discuss the tools with specialists),
  • Ensuring that the scope or method of obtaining information from young people will not directly cause any material or non-material harm, including harm related to mental well-being and social relations; this applies in particular to such issues as:
    • Sensitive issues that lower the sense of autonomy or self-esteem,
    • Relationships with their peer group and other important people.

If you have any doubts, it is worth consulting specialists.

  • Compliance with the general principles of social research, including in particular:
    • Guaranteeing the confidentiality of information obtained from the research participants at the stage of data collection (no participation of other people apart from the researchers and the respondents), during data processing (anonymisation / pseudonymisation), and in publishing the findings (collective presentation of quantitative data, pseudonymisation of qualitative data),
    • Ensuring the anonymity of the research participants,
    • Ensuring the safety and undisturbed work of the researchers.
  • Standards for conducting research on minors are included in the codes of ethics in force in the communities of professionals conducting social and market research.

4.2. How to adjust the methodology of evaluation research to a young person’s way of life?

 

4.2.1. Major activity – formal education

Studying is the dominant activity in the life of young people aged 15-24. For instance, in Poland, until the age of 18, participation in formal education is compulsory, although training in the form of “vocational preparation” combined with paid work is also allowed. However, the findings of the Labour Force Survey show that the vast majority of those aged 18-24 still participate in organised forms of education. Young people study full-time in schools or colleges, but often also part-time, attending courses or training. Also, many of the YEEAs activities are conducted in the form of group learning activities. Grouping the beneficiaries of the evaluated project in one place and time allows you to carry out various types of activities related to evaluation, primarily to collect data through observation, central location, focus group interviews, etc.

However, you should bear in mind that when conducting research in educational institutions, you should ensure there are appropriate conditions for collecting data, such as: an isolated room, dedicated time (respondents should not be under time pressure).

 

4.2.2. Weak position on the labour market

One of the basic elements of the situation of young people, which is also the main area of influence of YEEAs projects, is their situation on the labour market. In studies devoted to this subject, in relation to young people it should be taken into account that:

  • In the 15-24 age group, only about every third person performs any paid work (including unpaid help in a family member’s paid work) – so you should never ask questions that assume a particular person is working or has income from work,
  • Work by young people, especially those under the age of 18, takes highly diversified, often atypical forms, e.g.:
    • unpaid help in the paid work of a close family member,
    • one-time, occasional, holiday, part-time, replacement or “trial” work,
    • various types of internships, apprenticeships and vocational preparation, in which the proportions of study, work and earnings vary widely and may or may not be considered work,
    • work in exchange for accommodation, food and “pocket money”,
    • promoting products or services on social media in exchange for the goods or services received,
    • voluntary work with various levels of covering one’s own costs,
    • work performed under various contracts, from regular employment contracts to specific-task contracts,
    • undeclared work such as tutoring, and income from illegal activities.

When asking young people about work, you need to precisely define what kind of activity you consider to be work and / or what features are decisive for you (legality, type and amount of remuneration, time dimension, stability, linkage with educational obligations, legal form).

 

4.2.3. Increased mobility

People aged 15-24 change their place of residence much more often than older people. They also exhibit higher than average daily mobility. As a result, traditional methods of collecting quantitative data based on a home address do not work well with young people – a postal questionnaire is often sent to an address that no longer applies, and the interviewer calls when no one is at home.

Therefore, in the case of young people it is particularly important to obtain mobile contact details, such as a phone number or the name of an individual profile on a messaging app, and then to base the data collection strategy on these contact details, using electronic tools. Findings from studies using both a postal questionnaire and the CAWI method show that the response rate for the latter is much higher, and its advantage grows as the respondent’s age decreases.

 

4.2.4. Dominance of smartphones in everyday communication

Young people are more willing than older people to use electronic technologies rather than paper. They are also much more efficient at this, and they are more willing to deal with everyday matters using a smartphone than a computer. Therefore, in research among young people it is worth using electronic research tools, preferably adapted to smartphones (one simple question per screen, a simple and legible form, not too long a list of answers). One example of an application that can be used for working with young people is Kahoot.

 

4.2.5. Busy and overstimulated life

A characteristic feature of modern youth is their openness to the many stimuli delivered via their smartphones, which they never let out of their sight. Moreover, learning, developing one’s own interests, and above all social life often leave young people so overstimulated that they forget about unusual or less important obligations, such as filling out a questionnaire. To counteract this, it is important to regularly send messages reminding participants about the dates of scheduled interviews, their promises to complete a survey, etc.

 

4.2.6. Widespread use of social media

The widespread use of social media by young people, including their presence in numerous social media groups, is increasingly being used for research purposes. It is possible to find groups of young people from a particular locality or school, as well as groups with specific musical or ideological interests, etc. After joining a group, possibilities open up for recruiting research participants (e.g. for a comparison group). You may consider asking individual group members a question as a researcher, or (if the group moderator agrees) publicly posting a link to the online survey or a request for contact. It is better not to open a public discussion at the group level, as this prevents the research from being confidential, exposes the participants to assessment by other group members, and the public nature of the statements lowers their credibility.

Following the example of market research agencies, you could also consider establishing a special community group (MROC method – Market Research Online Communities), in which young project beneficiaries would agree to participate. However, such activities require a precise definition of the group’s goal. If the purpose is research – then it should be a short-term group (MROC), and during this period it should be professionally moderated, similarly to Focus Group Interviews (FGI).

 

4.2.7. Difficulties in reaching NEETs

Difficulties characteristic of research among young people intensify when the evaluated project is aimed at young people who are not studying or working and who are not covered by any form of education, support or institutional supervision that groups them (NEETs). Reaching young people in such a situation is a serious challenge, especially when you need data for comparisons with NEETs who participate in the project.

Often, the only solution to this type of problem is to compare groups participating in different projects from the same programme, or to compare the results obtained in the group covered by the project with the group of candidates who did not become its beneficiaries (taking into account the impact of the reasons for not qualifying for the project).

4.3. How to deal with the psychological and social needs of young people

4.3.1. Increased need for confidentiality of the provided information

The key psycho-social factor that should be taken into account when planning and conducting research involving young people is their particular susceptibility to influence. This results both from their still-forming personality and from a fear of judgement, or even sanctions, on the part of the peer group and of the adults on whom the young person depends mentally and financially. The latter include project staff. Taking this into account, one should:

  • Inform the research participant about the confidentiality of the information provided and the measures taken for this purpose, both by means of data collection ensuring confidentiality, as well as their anonymization at the stage of data analysis and use of the findings,
  • Back up these assurances with concrete measures, such as conducting interviews (IDI, FGI) without the participation of third parties and creating conditions for completing questionnaires that guarantee anonymity and confidentiality, e.g. having auditorium questionnaires dropped into a collection box.

 

4.3.2. Increased need for autonomy and emancipation

According to the findings of developmental psychology, people aged 15-24 are – as they shape their identity – particularly sensitive to issues related to respect for their freedom. Consequently, their right to participate or not to participate in research should be clearly communicated, and the reasons for and consequences of each available choice should be clearly explained. This is a necessary condition for their willing participation.

On the other hand, positive motivation for young people to participate in evaluation research can be created by responding to their need to move from subordinate, executive positions to the role of co-decision makers and co-creators. For young people to be really involved in evaluation research, you have to treat them as partners with different roles, including decision-making and consultative roles, in addition to the classic role of research subject. This can be achieved by involving them in the various stages of the evaluation process: from reporting information needs, through co-deciding on priorities, planning and participating in implementation, to consulting the findings (see section 2.2).

V. DATA ANALYSIS

 

Once you finish collecting the data, you should start analysing it. This means using all the research material (information obtained with various methods) and answering evaluation questions as well as valuing the evaluated project according to chosen criteria. Therefore, at this stage, it is worth going back to the evaluation concept, which acts as a compass, leading the evaluator through the entire research process (not only information collection, but also data analysis, drawing conclusions and formulating recommendations).

 

The purpose of data analysis is:

  • Compilation and verification of collected information,
  • Description, assessment and juxtaposition of the quantitative and qualitative data that is obtained (checking how reliable and consistent they are),
  • Identification and explanation of various cause and effect relationships that will allow you to understand the mechanisms of the studied phenomena,
  • Interpretation of the obtained evaluation findings in relation to wider knowledge about the subject of the evaluation (evaluandum),
  • Obtaining detailed answers to evaluation questions and credible valuing of the evaluandum according to chosen criteria,
  • Drawing conclusions from the collected information and formulating useful recommendations based on it.

In the data analysis, you should bear in mind the principle of triangulation, i.e. the compilation of data obtained from various sources, using various research methods, by different researchers. Thanks to this, you have the opportunity to supplement, deepen and verify respective information in order to obtain a full picture of the evaluated project.

Although during data analysis the actions undertaken are common to both types of data (quantitative and qualitative), such as reduction, presentation and concluding, the obtained findings are in a different form for each of them. The comparison of these data is presented in the table below.

Before starting the data analysis, it is necessary to check whether all research materials have been anonymised, i.e. there are no personal data (names, surnames, addresses, including e-mail addresses, telephone numbers etc., as well as contextual information enabling the identification of research participants). Interviewees who participated in the qualitative part of the research (IDIs, FGIs) are given pseudonyms, e.g. taking into account the features important for the researcher. The personal information concerning research participants should be separated from the content data provided by them.
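As an illustration, here is a minimal sketch in Python of separating personal data from content data and assigning pseudonyms before analysis. The field names and the example record are assumptions made for the sake of the example.

```python
# A minimal sketch of pseudonymisation: identifiers go into a separately
# stored key file, while analysts only see the pseudonymised content data.
import csv

def pseudonymise(records):
    """Split records into a key file (identifiers) and an analysis file."""
    key_rows, content_rows = [], []
    for n, rec in enumerate(records, start=1):
        pseudonym = f"P{n:03d}"                      # e.g. P001, P002, ...
        key_rows.append({"pseudonym": pseudonym,
                         "name": rec["name"], "email": rec["email"]})
        content_rows.append({"pseudonym": pseudonym,
                             "age_group": rec["age_group"],
                             "statement": rec["statement"]})
    return key_rows, content_rows

records = [
    {"name": "Anna K.", "email": "anna@example.com",
     "age_group": "18-24", "statement": "The internship helped me a lot."},
]
keys, content = pseudonymise(records)

# Store the two files separately; only the content file goes to analysts.
with open("key_file.csv", "w", newline="", encoding="utf-8") as f:
    w = csv.DictWriter(f, fieldnames=["pseudonym", "name", "email"])
    w.writeheader(); w.writerows(keys)
with open("analysis_data.csv", "w", newline="", encoding="utf-8") as f:
    w = csv.DictWriter(f, fieldnames=["pseudonym", "age_group", "statement"])
    w.writeheader(); w.writerows(content)
```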

 

There are four main stages of data analysis:

1. Selection and ordering of the collected research material – during this stage, the correctness and completeness of the data are checked, the reliability of every piece of information is verified (thanks to triangulation), and data that are not useful for the purpose of the evaluation are removed. You should collate all the information in a way that facilitates further analysis – recordings of interviews can be transcribed or written down in accordance with a previously prepared scheme (which includes a summary of the respondents’ statements). In the case of a survey, you should remove incomplete questionnaires from the analysis, etc.

2. Constructing analytical categories (selecting the type of encoding and data coding – their categorisation and classification) – this means assigning codes / “labels” to each piece of information obtained, representing specific categories of information, thus allowing for the organisation of the research material.

  • In the case of closed-ended questions, the answer codes take a numerical form (e.g. “female” = 1, “male” = 2), which allows you to analyse the data using statistical programs (or spreadsheets). First, you need to create a coding instruction that lists the code names and the numbers used in the questionnaire to identify the answers respondents gave to particular questions. Paper surveys require manual coding – you need to number the answers in the questionnaire, code the answers and enter this information into the database. Electronic surveys are coded automatically. A minimal sketch of such coding follows after this list.
  • In the case of open-ended questions and other qualitative data, the codes for particular answers take a verbal form (e.g. “training organisation”, “conducting a training”). Codes for qualitative data can be planned before or after reading the entire material. The first approach is “top-down” coding, which draws on good knowledge of the research problem and / or its grounding in a given theory. The second is open (“bottom-up”) coding, which builds the categories from the collected material itself (e.g. in relation to the research questions). In both cases, you need to develop a coding scheme that organises the codes (establishing a code hierarchy with superior / collective codes and detailed codes), so that you can present the collected information in a consistent form.
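The sketch announced above shows what a coding instruction for closed-ended questions might look like in practice. It uses Python with pandas; the questions, answer options and numeric codes are illustrative assumptions.

```python
# A minimal sketch of applying a coding instruction to closed-ended answers.
# The questions, answer texts and codes below are illustrative assumptions.
import pandas as pd

# Coding instruction: question -> {answer text: numeric code}
CODING = {
    "gender": {"female": 1, "male": 2},
    "training_rating": {"very poor": 1, "poor": 2, "average": 3,
                        "good": 4, "very good": 5},
}

# Raw answers as they might arrive from a questionnaire.
raw = pd.DataFrame({
    "gender": ["female", "male", "female"],
    "training_rating": ["good", "very good", "average"],
})

coded = raw.copy()
for question, codes in CODING.items():
    coded[question] = raw[question].map(codes)   # text -> numeric code

print(coded)
#    gender  training_rating
# 0       1                4
# 1       2                5
# 2       1                3
```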

The information corresponding to the given codes can be summarised in one table, which will make it easier to find elements that are similar or common across research participants, as well as information that differentiates them. It also allows you to see the relationship between the interviewees’ characteristics or situation and their statements.

Tool 7: Table For Summarising Information From Interviews

3. Analysis and interpretation of the obtained findings (explanation and assessment by the researcher of a particular issue / problem)

Data analysis is an important element of evaluation because it allows you to summarise the findings and find common and divergent elements in the collected materials. It is worth choosing and describing the method of data analysis at the stage of planning the evaluation. Data obtained during evaluation can be analysed in a number of ways. The simplest distinction is division into:

  • Quantitative data analysis (numbers, answers to closed questions) – for simple analyses you can use, for example, MS Excel, and for more complex analyses statistical programs, such as SPSS or Statistica, operated by specialists, whose services can be used if necessary.

PRACTICAL TIP

For small groups, quantitative data should not be presented as percentages, i.e. reporting that 20% of respondents in a group of ten hold a particular opinion. It is better to use absolute numbers and say that two people hold it.
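A minimal sketch of this tip in practice, using Python with pandas; the answers are invented for illustration, and the point is simply that absolute counts are reported instead of percentages.

```python
# A minimal sketch: for a small group, report absolute counts, not percentages.
# The answers below are illustrative assumptions.
import pandas as pd

answers = pd.Series(["useful", "useful", "not useful", "useful",
                     "useful", "not useful", "useful", "useful",
                     "useful", "useful"])          # ten respondents

counts = answers.value_counts()
print(counts)   # report these absolute numbers, e.g. useful: 8, not useful: 2

# Percentages (80% / 20%) would overstate the precision of a ten-person group:
print((counts / len(answers) * 100).round())
```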

  • Qualitative data analysis (e.g. text, interview statements) – for simple analyses, it is enough to compile the data in a chart / matrix, and for more extensive research material, it is worth using programs that facilitate the analysis, e.g. QDA Miner, OpenCode, Weft QDA.

Some of them are briefly presented in the table below:

Own elaboration based on: Peersman, G. (2014). Overview: Data Collection and Analysis Methods in Impact Evaluation, Methodological Briefs: Impact Evaluation 10, UNICEF Office of Research, Florence.


 

IMPORTANT TIP

When analysing the data, it is very important to determine what changes have occurred as a result of the project and what role respective activities played in them. Therefore, it is necessary to answer the question to what extent the project activities influenced the achievement of the assumed result indicators and what was the role of project activities among other factors influencing the expected changes (see chapter 2.5).

 

When analysing data, it is worth referring to the previously described theory of change adopted as part of the description of the project logic. When planning the change at the beginning of the project, you made certain assumptions about the conditions that must be met (resources provided, activities implemented) in order to achieve the given results, i.e. you planned the cause-and-effect chain. Evaluation verifies your theory of change – it can confirm it or show gaps in it (e.g. missing / redundant elements) and recommend improvements for the future.

There are three general strategies for causal inference. Using a combination of these strategies can help to increase the credibility of the conclusions drawn:

Data analysis approaches for causal attribution with various options

Own elaboration based on: Rogers, P. (2014). Overview: Strategies for Causal Attribution, Methodological Briefs: Impact Evaluation 6, UNICEF Office of Research, Florence.

VI. REPORTING

 

6.1. How to make use of the results of data analysis?

After completing the qualitative and quantitative data analysis stage, you have a lot of information, which should be used properly and wisely. These data should be translated into knowledge that will allow you to make sound decisions about improving the project (e.g. how to adapt it better to the needs of its recipients, how to achieve similar effects using smaller resources, how to obtain greater impact and sustainability of the results).

Based on the findings of conducted analyses, you can draw conclusions that relate to phenomena or problems identified during the evaluation. These conclusions relate primarily to the issues described in the evaluation questions but may also include issues that were additionally diagnosed during the research.

In the evaluation report, you should present not only the findings of the evaluation research, but also their interpretation (i.e. reference to a broader knowledge of the studied issue), as well as the conclusions derived from the obtained data and the accompanying recommendations. The above diagram presents the relationships between these elements. To get through this process, you can use the questions that accompany the subsequent stages (in the diagram above they are marked in italics).

Below you can find an example of the process of formulating conclusions and recommendations regarding a training project directed to NEETs (the findings refer to the quantitative part of the research).

Tool 8: The relation between the evaluation’s findings, their interpretations, conclusions and recommendations

Remember to take into account various elements of the evaluation research, e.g. the methods used (qualitative, quantitative), the sample selection methods, and the response rate (the share of questionnaires returned), which may impose some limitations when formulating conclusions.

 

RULES FOR FORMULATING THE CONCLUSIONS:

  • Treat your conclusions critically, look at them from a distance, and constantly seek alternative explanations for the phenomena you found. It is always worth discussing your conclusions with another, preferably more experienced, person (a “critical friend”) who – not being involved in the evaluation – will look at them with a “fresh eye”.
  • Make sure that you correctly interpret the statements given by the research participants, e.g. by confronting the conclusions with them. If you are not completely sure about a conclusion, soften it by using the terms “probably”, “possibly”, “maybe”.
  • Do not generalise the conclusions for the whole population (i.e. people who did not participate in the research) if you used qualitative methods* or the sample you studied was not randomly chosen.
  • Learn how to avoid mistakes in drawing conclusions from our online course.

HOW TO FORMULATE THE RECOMMENDATIONS?

  • Group them thematically (e.g. project management, cooperation with partners, implemented activities, project effects).
  • Relate them to both strengths and weaknesses of the subject of evaluation. Don’t focus only on the negatives – also show those areas that work well and don’t need any changes. If you concentrate solely on positives, it will undermine the credibility of the evaluation.
  • Make sure that recommendations are detailed, precise and realistic (possible to implement), so that they are also practical, accurate and useful.
  • Assign to each recommendation: a recipient (with whom it will be agreed in advance), a deadline, and a degree of importance, as this increases the chances of them being implemented.

* In this case, the conclusions relate only to the persons who participate in the research.

 

Conclusions and recommendations can be presented in a concise table as a summary of the report, or as an independent “final product” of the evaluation. The following is an example of a recommendation table regarding the evaluation of a training project:

Tool 9A: Recommendations Table

In the simplified version, the table of conclusions and recommendations may look like this:

Tool 9B: Simplified Recommendations Table

 

6.2. What are the features of a good report?

The report is the culmination of the evaluation process: it presents the evaluation concept, the course of the research and its findings, as well as the conclusions and recommendations based on them.

During the evaluation process, various types of reports may be written, e.g.:

The final report can be prepared in various forms, which – like the scope of content presented in them – should be tailored to the needs of individual groups of recipients (evaluation stakeholders). Examples of ways to present and promote evaluation findings include:

    • The final report in an electronic version (less often a paper version), distributed to stakeholders and / or posted on the Internet (e.g. on the website of the project or of the entity commissioning the evaluation),
    • Summaries of reports in the form of information folders / brochures containing key conclusions and recommendations,
    • A multimedia presentation during conferences and meetings, e.g. with stakeholders, partners,
    • An infographic posted on the project website, on social media, and sent to local media,
    • Printed posters presented at various events, e.g. conferences, picnics,
    • Films (video presentations) addressed to large audiences (including a dispersed audience), and posted on the Internet,
    • Follow-up – a presentation on the effects of implementing the recommendations.

The report in the version of the extended text document may have the following structure:

    • Title page – name of the contracting institution, name of the institution conducting the evaluation (if the evaluation was external), date of preparation, authors, title (e.g. Ex-post evaluation of project X),
    • (Executive) summary – main elements of the evaluation concept, key findings, conclusions and recommendations (necessary for extensive reports),
    • Table of contents – with links enabling quick access to a given page of the report,
    • List of abbreviations (and possible definitions of specialised terms),
    • Introduction – information on the commissioning institution, type and cut-off date of the evaluation, name of the evaluated project, sources of its financing, and organisation that has implemented it,
    • Subject and scope of the evaluation – a brief description of the evaluated project and its parts which were included in the evaluation,
    • Goals of the evaluation – explanation of what the evaluation was conducted for, what was expected of it,
    • Evaluation criteria and questions – an indication of how the value of the subject of the evaluation was estimated / what was supposed to be learnt through the evaluation,
    • Methodological issues – description of the sources of information and research methods used, sample selection methods, the course of the research, and levels of responsiveness (what percentage of respondents participated in the survey). It is also worth describing the problems encountered during the implementation of the research, as well as the ways and effects of dealing with them,
    • Description of evaluation findings – a description of the qualitative and quantitative findings collected during the research, along with their interpretation, according to the adopted method of presentation (e.g. in accordance with evaluation criteria / questions). Findings from different sources and obtained with different methods should be confronted (by triangulation). Every chapter can present partial summaries,
    • Conclusions and recommendations – a concise but substantive answer to evaluation questions. The conclusions must be based on the findings of the study and the recommendations should be closely related to them,
    • Attachments / annexes (optional) – e.g. research tools used, tabular summaries, case studies, etc.

It is worth remembering that regardless of what form of report you choose, both in the case of external and internal evaluation, any changes to the content of this document require the consent of the evaluator.

  • If you want to learn more about the table of comments to the evaluation report, click here.

A good evaluation report should meet the following conditions:

  • be adequate to the terms of the contract and the needs of the recipients, be written in a language they understand,
  • contain a list of abbreviations used (and possible definitions of key terms when, for example, a report is to be presented to a wider audience that may not know them),
  • have a clear and legible structure,
  • have a concise form, and at the same time comprehensively answer evaluation questions (without “waffling”),
  • be based on credible and reliable findings that have been properly analysed,
  • present not only the obtained findings but also their interpretation, and indicate the relationship between the data and the conclusions,
  • contain justified conclusions and useful recommendations related to them,
  • contain graphic elements (tables, charts, diagrams) and quotes from respondents’ statements that make the reception of the report content more attractive.

The following table will help you in verifying the quality of the evaluation report. It contains detailed criteria for its assessment. You can choose its scale (numeric or verbal) and assess your own or a commissioned report.

Tool 10: Report Quality Assessment Table

 

6.3. How to deliver what is needed for the recipients of your evaluation

The possibility of using evaluation findings depends on its type, i.e. the moment in the project’s life cycle at which the evaluation is carried out.

The greatest opportunity for introducing changes is provided by ex-ante evaluation, carried out before the evaluated undertaking / project has started.

In the case of mid-term evaluation, the opportunities for using recommendations to introduce specific changes are limited as the project is in progress and individual actions are gradually implemented. Nevertheless, some of its elements may still be modified, e.g. in order to better adapt the ongoing activities to the needs of their beneficiaries, to ensure that the planned indicators are achieved at the assumed level, or to adapt them to the changed project implementation conditions.

The findings of ex-post evaluation can only help you in planning the next (same or similar) projects because the evaluated project has already been completed.

When evaluation findings are related to organisational or management issues, you can use them for current work.

The dissemination of evaluation findings (most often in the form of conclusions and recommendations) among its stakeholders is a very important stage, as it contributes to a better understanding of the need for change, to strengthening the cooperation, commitment and motivation to act, as well as to obtaining support in this process.

Sharing the findings of the evaluation with other people / entities may show your ability to self-reflect on the value and quality of your activities. It is a sign of your readiness to engage in discussion on various aspects of the subject of the evaluation, as well as the ability to assess its strengths and weaknesses and the desire to develop and improve in cooperation with other stakeholders.

Tool 11: Dissemination Of Evaluation Findings Table

AFTERWORD

 

If you are reading these pages, you have probably read the whole toolkit and learned how to evaluate your projects and what such evaluation is for – especially if they are youth employment projects, and even more so if you are interested in assessing their real (net) impact.

Thanks to the participatory approach to evaluation, you acquire information that is vital for key decisions about the project and very important for the stakeholders, especially the donors. What is more, the beneficiaries are empowered, and the project team becomes better informed, coordinated and motivated. Finally, you are on the way towards a more relevant, effective, sustainable, efficient and simply better project!

To make it easier to prepare your evaluation – you can use templates of evaluation tools – see Attachments. And to make your understanding of the evaluation even deeper – check the online course and networking activities of the Youth Impact project – all available at the website www.youth-impact.eu.

Learn more

Interesting sources to learn more about evaluation available online:

REFERENCES

 

  • Babbie, E. (1st edition in 1975) The Practice of Social Research.
  • Babbie, E. (1st edition in 1999) The Basics of Social Research.
  • Bartosiewicz-Niziołek, M., Marcinkowska-Bachlińska, M., et al. (2014) Zaproszenie do ewaluacji, zaproszenie do rozwoju [Invitation to evaluation, invitation to development], KOWEZiU, Warszawa (pp. 69-85)
  • Bartosiewicz-Niziołek, M. (2012) Ewaluacja programów i przedsięwzięć społecznych – katalog dobrych praktyk [Evaluation of social programmes and undertakings – a catalogue of good practices], ROPS, Kraków
  • Bienias, S., Gapski, T., Jąkalski, J. (2012) Ewaluacja. Poradnik dla pracowników administracji publicznej [Evaluation. A guide for public administration employees], Ministerstwo Rozwoju Regionalnego, Warszawa
  • Blalock, H. (1st edition in 1960) Social Statistics.
  • Checkoway, B., Richards-Schuster, K. Participatory Evaluation with Young People, W.K. Kellogg Foundation
  • Ferguson, G. A., Takane, Y. (1st edition in 1971) Statistical Analysis in Psychology and Education.
  • Flick, U. (1st edition in 2007) Designing Qualitative Research.
  • Flick, U. (1st edition in 2007) Managing the Quality of Qualitative Research.
  • Gibbs, G. R. (2009) Analyzing Qualitative Data.
  • Kloosterman, P., Giebel, K., Senyuva, O. (2007) T-Kit 10: Educational Evaluation in Youth Work, Council of Europe Publishing
  • Kvale, S. (2007) Doing Interviews.
  • Lisowski, G., Haman, J., Jasiński, M. (2008) Podstawy statystyki dla socjologów [Basics of statistics for sociologists], Warszawa
  • Maziarz, M., Piekot, T., Poprawa, M., et al. (2012) Jak napisać raport ewaluacyjny [How to write an evaluation report], Ministerstwo Rozwoju Regionalnego, Warszawa
  • Maziarz, M., Piekot, T., Poprawa, M., et al. (2012) Język raportów ewaluacyjnych [The language of evaluation reports], Ministerstwo Rozwoju Regionalnego, Warszawa
  • Miles, M. B., Huberman, A. M. (1st edition in 1983) Qualitative Data Analysis.
  • Nikodemska-Wołowik, A. M. (1999) Jakościowe badania marketingowe [Qualitative marketing research], Polskie Wydawnictwo Ekonomiczne, Warszawa
  • Peersman, G. (2014) Overview: Data Collection and Analysis Methods in Impact Evaluation, Methodological Briefs: Impact Evaluation 10, UNICEF Office of Research, Florence
  • Rapley, T. (2007) Doing Conversation, Discourse and Document Analysis.
  • Rogers, P. (2014) Overview: Strategies for Causal Attribution, Methodological Briefs: Impact Evaluation 6, UNICEF Office of Research, Florence
  • Silverman, D. (1st edition in 1993) Interpreting Qualitative Data.
  • Wieczorkowska, G., Kochański, P., Eljaszuk, M. (2005) Statystyka. Wprowadzenie do analizy danych sondażowych i eksperymentalnych [Statistics. Introduction to the analysis of survey and experimental data], Wyd. Naukowe Scholar, Warszawa
  • W.K. Kellogg Foundation (2004) Logic Model.

ENTREPRENEURSHIP TOOLKIT 

Your leverage to better projects supporting youth entrepreneurship 

 

TOOLKIT PDF

TOOLKIT E-BOOK

 

Authors: PEDAL Consulting s.r.o, Bratislava (2021)

 

1. Introduction

 

The purpose of this toolkit is to briefly present practical tools supporting the evaluation of youth entrepreneurship support actions. In other words, the aim of the toolkit is to support the evaluation of projects aimed at increasing the entrepreneurial skills of young people (aged 15-24) who, more than other age groups in the Visegrad region, face difficulties in the transition between school education and work.

The main recipients of this toolkit are business support agencies, consultancy firms, business incubators and accelerators, social enterprises, NGOs and other small or medium-sized entities, which want to analyse one of their projects or statutory activity in the above-mentioned area. Such analysis will contribute to:

  • Measuring the project’s effectiveness in achieving its goals and results,
  • Assessing the project’s usefulness for the participants and sustainability of the project results,
  • Assessing whether project activities should be continued or even scaled up,
  • Evaluating the project impact, with special regard to the project stakeholders and the wider community (their social environment),
  • Measuring the project’s efficiency in terms of resources engaged for its implementation and delivery of results. 

This toolbox is a supplementary material to the course ‘Towards better projects – blended learning course on evaluation of entrepreneurship support actions’, available on the e-training platform HERE. The main idea behind it is to guide course participants towards its practical application, and we have developed a number of resources to help in this effort. The toolbox is intended to equip organisations that aim to develop and / or increase entrepreneurial skills among young people with practical tools to measure and continuously improve their impact.

Lastly, this toolkit is primarily aimed at helping you get the most out of your entrepreneurship projects through evaluation.

The toolbox was developed by PEDAL Consulting (Slovakia), in cooperation with FIAP e.V. (Germany), Jerzy Regulski Foundation in Support of Local Democracy (Poland) and Channel Crossings (Czechia), as part of the Youth Impact project, financed by the EEA Financial Mechanism and the Norwegian Financial Mechanism. The Youth Impact project seeks to provide tools and services to improve the ability and capacity of implementers of Youth Employment and Entrepreneurship Support Actions to efficiently evaluate the impact of their activities and is being carried out in the years 2019-2022.

 

1.2. Characteristics of the targeted age group

 

As pointed out above, this toolkit provides organisations with practical tools supporting the evaluation of projects aimed at increasing the entrepreneurial skills of young people who face difficulties in the transition between school and work. The situation of these young people, most often referred to as NEETs (Not in Education, Employment or Training), can be described as follows:

  • Their long-term unemployment rate was at 5.9% in 2017 (EUROSTAT).
  • They are in a difficult position when striving for entrepreneurship. They have very limited access to financial resources, bank loans and credit, because their credit history is very short or non-existent. As a result, they have little or no capital on which to base their own businesses or enterprises.
  • Those who want to start their own business are at a distinct disadvantage in terms of experience. Whether just after graduating from secondary or tertiary education, with little or no experience in the field of their degree, or with only a few years of work practice, the demands of a competitive market and the complexity of running one’s own enterprise may often far exceed what they are able to invest.
  • As a result of the COVID-19 pandemic, many young people have lost jobs or, being on the cusp of entering the workforce, have been exposed to the risk of missing out on building much-needed skills and experience during the crucial early stages of their careers.

It is against this background that we set up this toolkit: to enable organisations running Youth Entrepreneurship Support Action projects to assess and evaluate their projects and remedy any shortcomings that may affect their quest to support young entrepreneurs in building scalable and innovative businesses that improve lives.

2. What is our approach to evaluation? What are the benefits? 

 

If you have an existing youth entrepreneurship support action (a project or programme aimed at supporting youth entrepreneurship), or are thinking about starting one, then the approach to evaluation presented in this toolkit can help you find out and demonstrate whether you are making the desired impact.

Even though there is no single definition of the term, the UNDP defines evaluation as an assessment of an activity, project, programme, or institutional performance. Evaluation should be conducted as systematically and impartially as possible. It analyses the level of achievement of both expected and unexpected results by examining the results chain, processes, contextual factors and causality, using appropriate criteria. An evaluation should provide credible, useful, evidence-based information that enables the timely incorporation of its findings, recommendations and lessons into the decision-making processes of organisations and stakeholders (UNDP, 2019). Evaluation differs from monitoring and audit. For more information, see the Toolkit section.

Evaluation is a great idea if you want to: 

  • Find out how well you are achieving your goals and results,
  • Learn whether the needs of the target groups were identified properly and whether your project is tackling real problems,
  • Find out what impact your project or activities are having on your participants and others,
  • Assess whether or not the project outcomes have justified the effort,
  • Improve the outcomes of your partnerships,
  • Identify opportunities to expand the scope of your operations,
  • Use results to promote the project,
  • Report progress to stakeholders or senior management,
  • Attract funding, as well as more or different partners.

Our approach, therefore, is to help you evaluate your project and show its benefits; to give you some handy tools to use in your evaluation; and to suggest simple and effective methods of using evaluation to grow your operations or activities.

And do not worry if you have an existing project and are yet to do any evaluation. It is never too late to start! Remember, though, that not every type of evaluation can be conducted at a late stage of the project.

The primary benefits of evaluation are:

  • Impact evaluation allows you to assess not only whether the changes planned in the project have actually taken place, but also whether, and to what extent, the expected results were actually caused by project activities.
  • Evaluation conclusions contribute to the optimisation of project activities. Evaluation allows you to anticipate difficulties before the project starts (ex-ante evaluation) or to notice problems at an early stage (ongoing or mid-term evaluation).
  • It provides recommendations at the end of the project (ex-post evaluation) – these recommendations can help management make informed decisions about specific activities or even future projects.
  • The scope of possible use of evaluation in management depends primarily on the selected evaluation criteria. For example, applying the efficiency criterion allows you to assess whether sufficient staff, money and time were devoted to the implementation of the project, or whether the project team lacked competences or other resources. On this basis, you can decide to strengthen your team, budget or project planning.
  • The results of the evaluation can be a tool for the promotion of your organisation and your achievements. For example, case studies illustrating the stories of successful recipients or data showing an increase in their competences can be presented on social media, on a website or during public presentations to promote the effectiveness of the organisation.
  • Evaluation results can be used in communication with sponsors, including in grant applications – for example, by citing data on the effectiveness, sustainability, relevance, and utility of implemented activities when answering standard questions in application forms for financing similar projects.
  • Evaluation can also be helpful in recruiting volunteers, especially when its results show the effects of the organisation's work and the extent to which it has improved the lives of certain people or communities.
  • Engaging participants in the evaluation of a project can also increase their enthusiasm and motivation. For example, giving them influence over the scope and criteria of the evaluation, and inviting them to comment on preliminary results, increases their sense of agency and illustrates the link between the work performed and the project's objectives, the organisation's values and their own. Participation in the evaluation process also increases participants' knowledge and empowerment in many areas affecting the effectiveness of the organisation, from management issues to substantive aspects of the project.
  • Project partners – especially those who work directly with beneficiaries – can also be involved in evaluation. For example, involving external partners (e.g., employers, institutions such as labour offices, or associations of the group influenced by the project) allows you to diagnose problems that limited the project's effectiveness, relevance, or utility. In addition, addressing the questions that matter to these partners in the evaluation report can significantly increase their trust and cooperation in future projects.
  • In projects related to vocational activation, employment and entrepreneurship of young people, the participation of beneficiaries is of key importance for the success of the evaluation – it allows you to assess how project activities influenced the expected changes in the lives of recipients. In addition, it can help in planning the next project so that it meets their needs and contributes to the achievement of its objectives.
  • Evaluation is based on the methodology of social sciences and therefore provides reliable answers to questions relating to a particular project and the organisation implementing it.
  • Project evaluation can also be used for comparison – to find out which individuals, with what characteristics, are able to start a business, and why others are less likely to do so.

In summary, evaluation has many benefits. The problem, however, is that knowledge about them is often abstract, not based on experience. Meanwhile, the systematic use of evaluation and its various techniques can support managing an organisation, strengthening its image, training, and motivating staff, obtaining funds, and above all effectively fulfilling the mission of the organisation.

3. How to use this toolkit

 

How you use this toolkit depends on where you are in the ‘life cycle’ of your youth entrepreneurship support action. 

If you want to start a project and build in regular evaluation from the beginning, you can use this toolkit in the planning stages of your project (ex-ante evaluation). 

If you already have an existing project but are not sure how to measure the impact of what you are doing, this toolkit can help you identify the information you need, show you how to gather it, and explain what to do with it once it has been collected (mid-term or ongoing evaluation).

If you have an existing project and have been evaluating what you are doing, then you might want to focus on the sections that can refine your approach or follow up the references at the end of this document (ex-post evaluation). 

4. Type of youth entrepreneurship support action 

 

There is no single ‘right’ model of youth entrepreneurship support action, or only one proper youth entrepreneurship project. 

Some are targeted at different stages of the entrepreneurship value chain, that is, the idea stage or the growth stage. Others target specific segments of the population, for example unemployed youth, women, immigrants, people with disabilities, or victims of forced migration.

What your project looks like will depend on what you want to achieve and how you want to achieve it. In turn, how you go about your evaluation will depend on the type of project you have and what the evaluation objectives are.     

Once you are clear about the kind of project you have, or want to have, then you are in a better position to know what and how you want to evaluate and what you want to get from the evaluation.

Your project is bound to change over time (based on your experiences), and evaluation can identify the best way to move forward. For example, it might show that your project is having the impact you want to achieve. Or that you need to:

  • Improve a certain element of the project
  • Redefine existing roles and responsibilities 
  • Bring in additional resources
  • Change programme structure
  • Bring in a new partner
  • Expand an existing project
  • Develop a new project
  • Plan future programmes better

5. Types of evaluation according to the stage of evaluation

 

There are different evaluation types that vary mainly depending on the stage of the project: ex-ante evaluation, mid-term or ongoing evaluation, and ex-post evaluation (Types of evaluation, 2013).

 

Table 1: Types of evaluation according to the stage of project 

Stage of Project | Purpose | Type of Evaluation
Conceptualization Phase | Helps prevent waste, adjust a project to its recipients' needs, and identify potential areas of concern while increasing the chances of success. | Ex-ante evaluation
Implementation Phase | Optimizes the project, measures its ability to meet targets, and suggests improvements to efficiency, relevance and effectiveness. | Ongoing or mid-term evaluation
Project Closure Phase | Provides insights into the project's success or failure, utility, sustainability and impact, and highlights potential improvements for subsequent projects. | Ex-post evaluation

SOURCE: adapted from Nanda, 2017 

 

Below we provide brief characteristics of selected types of evaluation. 

 

Table 2: Characteristics of selected types of evaluation

Ex-ante evaluation

This evaluation is used before project implementation. It generates data on the need for the project and develops the baseline for subsequent monitoring. It also identifies areas for improvement and can give insights into what the project's priorities should be. This helps project managers determine their areas of concern and focus, and increases awareness of your project among the target population prior to launching it.

When to use it: 

New project development

Project expansion

Helps make early improvements to the program.

Allows project managers to refine or improve the project.

Mid-term or ongoing evaluation

Process evaluation occurs once project implementation has begun, and it analyses how effective, relevant and efficient your program’s procedures are. The data it generates is useful in identifying inefficiencies and streamlining processes and portrays the project’s status to external parties.

When to use it: 

When project implementation begins

During operation of an existing project

Provides an opportunity to avoid problems by spotting them early.

Allows project administrators to determine how well the project works.

Ex-post evaluation 

This evaluation is conducted after the project's completion or at the end of a project cycle. It generates data about how well the project delivered benefits to the target group. It is useful for project administrators to justify the project, show what they have achieved, and lobby for project continuation or expansion.

When:

At the end of a project

At the end of a project cycle

Provides data to justify continuing the project.

Generates insights into the effectiveness and efficiency of the project.

SOURCE: adapted from Nanda, 2017 

6. How do I evaluate a project’s impact?

 

Just as there is no single 'right' model of an entrepreneurship project, there is also no single 'right' way to go about an evaluation. You might want to adopt a more informal or a more structured approach; either is fine if it meets your needs. In any case, a certain amount of planning is important to achieve useful and meaningful results.

As this toolkit is aimed at organisations implementing or developing entrepreneurship projects targeted at young people, you can discover whether you are making an impact, and measure and continuously improve it, through a specific type of evaluation – impact evaluation.

Impact evaluation can be understood and conducted in various ways. Keep in mind that this type of evaluation is key to analysing the changes that occurred thanks to your project. Based on the results of an impact evaluation, you will be able to adjust and/or improve the existing project as well as plan new projects better. This type of evaluation is most often used for analysing causality, that is, for determining how much of the observed change is due to the project's activities and influence.

Table 3: Characteristics of impact evaluation

Characteristics of impact evaluation
When?
  • At the end of the project
  • After a defined period of time when the project has ended
  • At pre-selected intervals in the project
What?
  • Assesses the intended change in the target population’s well-being
  • Accounts for what would have happened if there had been no project
Why?
  • To show proof of impact by comparing beneficiaries with control groups (not participating in a project)
  • Provides insights to help in making recommendations for preparations of new projects and/or to policy and funding decisions
How?
  • A macroscopic review of the project, coupled with a survey of project participants, to determine the effects achieved. Insights from project officers and suggestions from project participants are useful, and a control group of non-participants (very similar or identical to project recipients in respect of key features) is needed for comparison.
Questions to ask
  • What changes in project participants’ lives are attributable to your project?
  • What would those not participating in the project have missed out on?

Source: Nanda, 2017

7. Phases of Evaluation 

 

It may help to think of the evaluation process as a cycle involving preparation, information gathering, information analysis, information use, further preparation and so on. 

In the framework of Evaluation of Entrepreneurship Support Actions Life Cycle below, which forms the basis for this section of the toolkit, these phases have been called: 

  • Preparing – the success of your evaluation depends largely on the thinking and decisions you take in this phase,
  • Gathering information – you need to collect the information most relevant to your purpose, in a way that guarantees it is reliable,
  • Analysing information – you need to organise and interpret your information and identify your key findings, 
  • Using information – this phase is about sharing the findings and making decisions about the future based on the outcomes of the evaluation process.     

 Figure 1: Evaluation of Youth Entrepreneurship Support Actions Life Cycle

 

In the following sections we explore each of these steps a little further and provide you with some support tools to help you along the way.

8. Preparing your evaluation

Figure 2: Evaluation of Youth Entrepreneurship Support Actions Life Cycle – Phase 1

Before you start preparing an evaluation, you should think about the entrepreneurship support action you want to evaluate and define:      

  • What does this project try to achieve and why:
    • Is it meant to develop certain entrepreneurial skills? Which skills and how many people are to get them?
    • Is it focused on developing new businesses? How many businesses are to be established and supported? What level of development are they meant to achieve?
  • What are the risks and assumptions:
    • What are the skills gaps of the target group? 
    • What is their motivation and attitude to entrepreneurship? 
    • What resources are available? What obstacles for the business development are to be overcome?
  • And how can you tell whether the project is a success once it’s implemented:
    • What are the indicators of achievement of the project goals?
    • How will you know these results are due to the evaluated project (e.g., how do you know if the increased number of new businesses is a result of the project)?

Planning a good youth entrepreneurship support action can be a tricky task. Designing a good project requires understanding the situation in the region and defining a real problem that can be solved by the intervention. Other vital requirements are identifying the target groups and their needs, specifying objectives that are attainable within the given timeframe and available resources, and designing the most appropriate activities. The activities must be designed to produce outputs and outcomes which lead to the achievement of the defined objectives and a higher-level goal.

Furthermore, the project implementers need to see the bigger picture during project implementation, keeping in mind the key questions: what are they trying to achieve, why, and how can they prove it?

  • Do they plan to develop certain entrepreneurial skills?
  • To ensure that a certain number of business plans are developed?
  • That the number of new businesses will increase by a certain percentage?
  • What are the risks and assumptions (e.g., what are the skills gaps of the target group)?
  • What is their motivation and attitude to entrepreneurship?
  • What resources are available to achieve the planned outcomes, and how can you tell whether the project is a success once it is implemented?
  • And finally, how will you know that the observed effects are caused by the evaluated project?

Understanding the logic of the project is key to preparing yourself to design an evaluation, and it provides you with information on the data produced within the project, which can be used in the evaluation. Often a Logical Framework Matrix (Logframe) is prepared as part of seeking financial support for the project. Otherwise, you can reconstruct the project logic yourself by asking questions about seven key areas of the project:

  • Purpose – why has this project been initiated? What problem is to be tackled? What change is expected?
    • If the results have been achieved, then certain effects may follow in the target group. For example, an increase in knowledge can lead to a change in the participants' behaviour.
  • Outcomes – what results are expected? (e.g., an increase in knowledge/entrepreneurial skills among participants, the establishment of their own firms by project participants)
  • Outputs – what are the deliverables?
    • These are the direct results of the activities. Following the 'if-then' logic, this means that if an activity is carried out, then certain results are expected. Examples are business plans developed within the project or skills certificates issued to project participants.
  • Activities – what actions have been planned to deliver the outputs? (e.g., workshops, trainings, mentoring)
  • Indicators of achievement – how will we know if the project has been successful?
  • Means of verification – how can the reported results be verified?
  • Risks and assumptions – what assumptions underlie the structure of the project, and what are the risks to achieving its goals?

The template of the Logical Framework Matrix can be found in section 12.3.1. 
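If you reconstruct the logic yourself, it can help to write it down in a structured form so that gaps become visible. Below is a minimal sketch of such a record – our own illustration, with field names and content that are assumptions rather than the official Logframe template:

```python
# A minimal sketch (illustrative field names and values, not the official
# Logframe template): recording the reconstructed project logic so that
# missing elements stand out.

project_logic = {
    "purpose": "Increase youth entrepreneurship in region X",
    "outcomes": ["Participants' entrepreneurial skills increase",
                 "Participants establish their own firms"],
    "outputs": ["20 business plans developed", "50 skills certificates issued"],
    "activities": ["workshops", "trainings", "mentoring"],
    "indicators": ["% of participants passing the skills test",
                   "number of new firms registered within 12 months"],
    "means_of_verification": ["pre-/post-tests", "business register data"],
    "risks_and_assumptions": ["participants stay until the end of the project"],
}

# Simple consistency check: every key area should be filled in before evaluating.
for area, content in project_logic.items():
    if not content:
        print(f"Warning: '{area}' is still empty - reconstruct it first.")
```

Such a record is only a working aid; for the full structure, use the Logical Framework Matrix template mentioned above.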

8.1 Concept of evaluation 

Similarly to developing a project, good evaluation requires answering a set of key questions that will help you define the concept of evaluation. 

Answering the 5 “W” questions is a good way to start: 

WHY are you going to do the evaluation? It is necessary to define the purpose of the evaluation, as the subsequent parts of the concept depend on the answer to this question.

WHAT are you going to evaluate and what resources are needed? The subject and scope of the evaluation have to be clear, otherwise you might end up doing an evaluation for which you do not have the resources, expertise or time. The evaluation questions and evaluation criteria need to be formulated at this stage.

WHO will conduct the evaluation? Is it going to be a self-evaluation (the staff implementing the project conduct the evaluation), an internal evaluation (conducted by your own staff not participating in the project implementation) or an external evaluation (conducted by an outside evaluator)? Each type has its pros and cons.

WHEN are you going to do the evaluation? Before the launch of the project (ex-ante evaluation)? During the project implementation (mid-term or ongoing evaluation)? Or after the end of the project (ex-post evaluation)? 

HOW will you conduct the evaluation? What sources of information will you use? What methods and tools will you use to gather the necessary data?

The following sections will guide you through these questions. 

8.2 Be clear about the purpose, subject, and scope of your evaluation

What is it that you want to find out? What do you want to learn from the evaluation results? How do you plan to use the results? 

EXAMPLE

Defining the purpose of evaluation

For more than 25 years, Junior Achievement Slovakia (JA Slovakia) has been helping teachers develop entrepreneurship, economic thinking, and financial literacy of students in Slovakia. 

JA Slovakia pursues this mission among students of primary and secondary schools, primarily by means of experiential learning in which experienced professionals are involved. JA Slovakia is a member of a worldwide network of 115 JA organizations and of a network of 41 JA Europe organizations. For 100 years, this network has been providing education and skills development in job readiness, financial literacy, and entrepreneurship around the world.

The YEEAs run by the organization 

In the programs ‘Applied Economics’ and ‘Entrepreneurship in Tourism’, students have the opportunity to run their first real business in a student company. Skills for employability and creating their own idea are developed in the ‘Skills for Success’ program. The development of ethical aspects of business and the moral values of the individual is the subject of the ‘Ethics in Business’ program. Pupils’ financial literacy is increased through the ‘More than money’ and ‘Me and money’ programs, which are created in accordance with the National Financial Literacy Standard. The youngest ones can prepare for their future profession through the ‘Fundamentals of Business’ program.

The scope of evaluation in JA

JA Slovakia has long made use of the available evaluation options, taking into account its time and financial constraints. The main purpose of the evaluation is to assess the growth of participants’ knowledge by testing their skills at the beginning and at the end of the school year. The aim is to compare the results and determine progress over a period of about 10 months, to see whether the programme is addressing the needs of the target group or whether any improvements are needed.

Source: Bednárová, 2021; Junior Achievement Slovensko, n.o., n.d.

The subject and scope of the evaluation need to be defined clearly. Are you going to evaluate a specific entrepreneurship support project, or only a part of it? What exactly do you want to learn?

Do the project goals and activities match the goals and priorities of the organisation?

EXAMPLE

The evaluation will focus on the programme ‘More than money’.

Financial literacy test: Pupils involved in the ‘More than money’ program take a ‘central entrance’ test at the beginning of the school year. The aim of testing is to determine their initial level of knowledge in the field of financial literacy.

At the end of the school year, students take a central exit test. Its aim is to verify the level of knowledge achieved by students after completing the program.

Source: Bednárová, 2021; Junior Achievement Slovensko, n.o., n.d.

8.3 Define the evaluation criteria and questions 

The next step is to decide about the evaluation criteria that are linked to the purpose of evaluation. The criteria provide a perspective through which you can look at your project and formulate the evaluation questions.

The criteria describe the desired attributes or aspects of the project which you want to verify/assess: e.g. an intervention should be relevant to the beneficiaries’ needs, coherent with other interventions, effective in achieving planned objectives and outcomes, deliver results in an efficient way, and have positive impacts that last (OECD, n.d.). 

You will also need to identify some key questions (evaluation questions) that you want the evaluation to answer. For example, you may want to ask how participants have benefited from being in the project under evaluation, whether it achieved what it was expected to, or whether all parties involved in the project participated as planned.

Criterion: RELEVANCE

Evaluation question: IS THE INTERVENTION DOING THE RIGHT THINGS?  

The extent to which the intervention objectives and design respond to beneficiaries’ needs

Criterion: EFFECTIVENESS 

Evaluation question: HAS THE INTERVENTION ACHIEVED ITS OBJECTIVES?

The extent to which the intervention achieved, or is expected to achieve its outputs, outcomes and objectives.

Criterion: EFFICIENCY 

Evaluation question: HOW WELL ARE RESOURCES USED?

The extent to which the intervention delivers, or is likely to deliver, planned outputs and outcomes in an economic and timely way. 

Criterion: UTILITY

Evaluation question: HOW USEFUL ARE PROJECT OUTCOMES FOR ITS RECIPIENTS?

The extent to which project outputs and outcomes were useful for their recipients.

Criterion: IMPACT 

Evaluation question: WHAT DIFFERENCE DOES THE INTERVENTION MAKE?

The extent to which the intervention has generated significant, positive or negative, intended, or unintended, higher-level effects.

Criterion: SUSTAINABILITY 

Evaluation question: WILL THE OUTCOMES OF THE INTERVENTION (ITS BENEFITS) LAST?

The extent to which achieved outcomes will be sustainable in time, after the project completion and financing.

Criterion: COHERENCE 

Evaluation question: HOW WELL DOES THE INTERVENTION FIT? 

The extent of synergies and interlinkages between the evaluated project and other projects implemented by the same organisation, or with the wider programme within which the given project is implemented.

All your evaluation questions need to be related to the scope and objective(s) of the evaluation. Keep it simple and achievable. Below are some examples of possible evaluation questions and related evaluation criteria proposed for an existing project. 

EXAMPLE 1.

Project outcome: developing skills for success – basic skills for employability and a proactive approach to entrepreneurship, independent work, the ability to solve problems, and the ability to identify, design and develop one's own idea.

Evaluation questions: To what extent have the planned outcomes been achieved (EFFECTIVENESS)? Do the project goals, activities and outcomes match the objectives and priorities of the organization (COHERENCE)? Are the project outcomes useful for its recipients (UTILITY)? Are the project outcomes sustainable (SUSTAINABILITY)? How efficient was the use of the project resources (EFFICIENCY)?

EXAMPLE  2.

Productive post-project pathways: 

Objective: To improve young people's transition from university to entrepreneurship or employment.

Evaluation question: How, and to what extent, is the project helping the youth in successful transition to entrepreneurship or to work (IMPACT)?

EXAMPLE 3. 

Tangible results for the lower qualified participants: 

Objective: To equip participants with lower academic qualifications with the necessary skills to start enterprises in their communities by the end of the project. 

Evaluation question: How is the project able to harness the entrepreneurship skills of these young people to help turn their ideas into profitable enterprises? (UTILITY)

Source: own elaboration

In the table below you can find more examples of evaluation criteria and questions.

Table 4: Evaluation criteria and questions

QUESTION | EVALUATION CRITERIA
Was the project budget sufficient to implement the plan correctly? | Efficiency
To what extent have the planned objectives and results been achieved? | Effectiveness
How are project resources used? | Efficiency
What went wrong and why? | Effectiveness, Impact
Do the project goals and activities match the goals and priorities of the organization? | Coherence
Are the results obtained permanent? | Sustainability
What helped the implementation of the project? | Efficiency
What slowed down the implementation of the project? | Efficiency
How does the community perceive the project? | Impact
To what extent were project results useful for its recipients? | Utility
To what extent was the project relevant to the needs of its recipients? | Relevance

Source: own elaboration

Having defined the questions your evaluation should answer, you ought to proceed with setting the evaluation indicators that are key for measuring project effectiveness.

8.4 Project indicators

Project indicators measure effects against the project goal and objectives. In this context, an indicator is used as a benchmark for measuring intended project effects. Indicators can be quantitative (e.g., a number, an index, a ratio or a percentage) or qualitative (depicting the status of something in more qualitative terms, e.g., whether the local start-up ecosystem is more developed after the project than it was before the project began – according to the opinions of key informants, or judging by the quality of local regulations concerning business activities). Indicators can show whether your project has produced the expected outcomes. Why is defining indicators important in the evaluation process (Selecting project indicators, 2013)?

  1. At the initial phase of a project, indicators are important for the purposes of defining how the success of the intervention will be measured and what level of given indicator or its dynamics should be considered as satisfactory for meeting the respective objective.
  2. During project implementation, indicators help assess project progress and highlight areas for possible improvement. 
  3. At the final phase, indicators provide the basis for the overall project assessment; without them, this assessment may become dubious.

There are three types of project indicators that are widely acknowledged and can be used when conducting an evaluation based on such criteria as effectiveness, efficiency or impact:

  1. Process indicators: measure project processes or activities, for example, 'the number of training activities organised in period XY'.
  2. Outcome indicators: measure project outcomes, i.e. the results of a project, for example, 'the increase in the level of participants' entrepreneurship skills'.
  3. Impact indicators: measure the long-term impacts of a project, for example, 'the number of new start-ups established by young entrepreneurs'.

Also, other criteria (relevance, utility, coherence) and evaluation questions related to them can have their own indicators, but any appropriate indicator must have particular characteristics (Bureau of Educational and Cultural Affairs, n.d.) that are listed in table 5.

Table 5: Characteristics of indicators

Characteristic | Description
Specific | Probably the most important characteristic of an indicator is that it should be precise and well defined. In other words, indicators must not be ambiguous; otherwise, different people will interpret them differently and obtain different results.
Measurable | An indicator must be measurable. If it cannot be measured, it must not be used as an indicator.
Achievable / Attainable | The indicator is achievable if the performance target accurately specifies the amount or level of what is to be measured in order to meet the result/outcome. The indicator should be achievable both as a result of the programme and as a realistic measure, and the target attached to it should be attainable.
Relevant | Validity here means that the indicator actually measures what it is intended to measure. For example, if you intend to measure the impact of a project on the development of specific entrepreneurship skills, it must measure exactly that and nothing else.
Time-bound | The monitoring and evaluation system and related indicators allow progress to be tracked in a cost-effective manner, at the desired frequency, for a set period.

Source: adapted from Selecting project indicators, 2013; Bureau of Educational and Cultural Affairs, n.d.
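To make the notion of a quantitative outcome indicator concrete, here is a minimal sketch – our own illustrative example with assumed scores, threshold and target, not taken from the cited sources – of computing the share of participants whose test score improved by a set amount:

```python
# A minimal sketch (illustrative assumptions): computing a quantitative outcome
# indicator from pre-test and post-test scores and comparing it with a target.

pre_scores = {"P01": 45, "P02": 60, "P03": 52, "P04": 70}    # before the project
post_scores = {"P01": 65, "P02": 72, "P03": 50, "P04": 88}   # after the project

MIN_IMPROVEMENT = 10    # assumed threshold: improvement of at least 10 points
TARGET_SHARE = 0.60     # assumed target: at least 60% of participants improve

improved = [
    pid for pid in pre_scores
    if post_scores[pid] - pre_scores[pid] >= MIN_IMPROVEMENT
]
share_improved = len(improved) / len(pre_scores)

print(f"Outcome indicator: {share_improved:.0%} of participants improved by "
      f"at least {MIN_IMPROVEMENT} points (target: {TARGET_SHARE:.0%})")
print("Target met" if share_improved >= TARGET_SHARE else "Target not met")
```

The same pattern works for process indicators (counting activities carried out) and impact indicators (counting, say, newly registered start-ups).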

8.5 Identify the type of information you need 

Measuring success may require the collection of information before, during and after your project. You have to identify the types of information you will need early on and ensure that they are ready and available when you need them.

There are two broad categories of information to consider:

  • Quantitative: this is information that is concerned with counting and measuring things, like attendance, sessions, or scores.
  • Qualitative: this information is concerned with people’s feelings, thoughts, perceptions, attitudes, behaviour change and beliefs, and may include things like improved participant attitudes to specific project sessions identified through observation, interviews, and feedback forms.

Below are some ideas for how you might capture, through questions, the different categories of information.

Sample questions for evaluating a project 

Knowing what the main purpose of evaluation (see the previous section) is, you can think about the specific questions to be used in the tools to collect qualitative and quantitative information (e.g. tests, questionnaires, etc.)

Questions you can use in tools gathering quantitative data 

  • How many trainings focusing on the development of entrepreneurial skills were organized? 
  • How many staff from the organisation and business community were involved in the mentoring program? 
  • What was the increase in the number of participants that occurred after involving the local business community in the mentoring project? 
  • What was the increase in number of participants choosing sessions from the project because of the mentoring program? 
  • How many participants have gained positive outcomes from participating in the activities?     
  • How many participants were there at the beginning of the program and how many participants completed the program?

Questions you can use in tools gathering qualitative data 

  • What were your feelings about having a mentor before you joined the project? 
  • How do you feel about the experience now that you have been mentored for a year? 
  • What (if any) parts of the experience did you find most enjoyable? 
  • What (if any) parts of the experience did you find challenging? 
  • What advice would you give other participants coming into the mentoring program next year?     
  • Why did some participants terminate the program? 
  • Please think about the period after you started participating in the project. Have you observed any significant change in the way you live your life in this period?
  • (If yes) Please mention the areas in which you observed these changes. What was the main factor that caused each of these changes?
  • Please think about the period after you started participating in the project. Have you observed any significant change in the way your start-up was functioning in this period?
  • (If yes) Please mention the areas in which you observed these changes. What was the main factor that caused these changes?
  • (If the project was mentioned as main factor) Which specific workshops moved your start-up the most? 
  • Is there anything in the project you think should be done another way (concerning the organization, lecturers, times, etc.)? What would you like to be different?
  • What should be different (if anything) so that you can make more use of contact with other participants, to benefit from being part of the community? 

8.6 Be clear about your stakeholders

In any youth entrepreneurship support action there are multiple stakeholders. They can range from representatives of project partners – including business incubators, accelerators, universities, local or regional administrative authorities, employers' organisations and donors – to mentors, trainers, coaches, project staff and participants.

You will need to think about the audience for your evaluation, who your stakeholders are, if and how some of them can be involved in the evaluation process, what information they can provide you with or what evaluation criteria and questions would be important for them.

If the evaluation conclusions are to be implemented, then it is vital to identify the stakeholders' evaluation needs in the preparation phase. Otherwise, you might not be able to satisfy them, as the results of the evaluation will not provide the information the stakeholders expect.

If possible, integrate stakeholders into the evaluation. This allows for comprehensive insight and a change of perspective. Involving the target group (beneficiaries) is especially important – and practical, since you are going to be in touch with them anyway. Participative evaluation is a simple way to ensure that stakeholders are integrated into the evaluation.

8.7 Identify potential sources of help 

Before gathering your information, think about the kind of help you may need and when you may need it.

Potential sources of help 

  • An independent person, such as someone from another organisation carrying out similar projects, stakeholder, or business, could help during the preparation and planning part of your evaluation. 
  • External experts could assist with the evaluation concept, design of surveys and data analysis, as well as methodological supervision of your research tools (interview scenarios and questionnaires). 
  • Some information gathering might best be done independently by a third party (e.g., collecting and analysing some data by an external expert, statistics collected by public authorities, etc.). 

Depending on the nature of the project, its stakeholders could participate in designing the evaluation concept, support data collection, and help interpret the results and formulate recommendations, bringing to bear their experience of and expertise in the project.

8.8 Identify the sources of the information you want to gather 

To get a good perspective about the participants in the project, you could think about using internal information such as: 

  • Attendance records 
  • Retention rates 
  • Post-project tracking data (e.g., data on the development of businesses launched by the participants, data on project beneficiaries' employment status, participation in mentorships, etc.)
  • Participant portfolios on the development of businesses launched during the project (e.g., technology, social enterprise, business, novel ideas)
  • Participant behaviour records (e.g., time-out, commitment, retention) 
  • Participant’s, parent’s, professional assistant’s or instructor’s opinions on particular aspects of the project, including their needs, satisfaction 
  • Participant achievement data (e.g., pre-tests and post-test results).

From the perspective of a youth entrepreneurship support action implementer, you could think about:

  • Interviewing corporate volunteers 
  • Records of financial and in-kind support 
  • Level of media coverage or reach of the communication and dissemination efforts 
  • Asking participants about their satisfaction (e.g., using a survey and interviews)
  • Numbers of participants directly and indirectly impacted by the activities or projects 
  • Sales figures or other evidence of marketing success 
  • Interviewing the project staff.

To ensure objectivity of the evaluation results, you should also consider information that could be available from external sources. These can include statistics, surveys, or analysis developed by: 

  • labour offices that might keep statistics of the number of graduates registered as unemployed persons in the monitored period;
  • municipalities that can monitor the entrepreneurship activity in the respective region;
  • secondary, tertiary and other educational institutions, which may provide statistics collected on their graduates / alumni clubs;
  • government level / ministry of labour, social affairs and similar, that may systematically approach the issue of NEET employment and entrepreneurship; 
  • non-governmental organisations that deal with youth (e.g., NGOs collaborating with universities, such as AIESEC, IASTE, ELSA),
  • or networking spaces, such as community centres, leisure centres, co-working spaces, etc. 

Make the most of existing information:

The history of the project could be traced through such documents as: 

  • Planning documents, especially Project Logic or Logframe, grant application documents etc.
  • Communications – emails, records of phone conversations between partners 
  • Original timelines and budgets 
  • Business or strategic plans 
  • Minutes of meetings 
  • Evidence of community consultation 
  • Memos, and 
  • Financial records.

Other information could be gathered by interviewing people who have been involved since the early days of the project. 

You could find out what people remember about the beginning of the project: 

  • their initial expectations and motivations to join the project, changes in expectations in the course of the project and to what extent the expectations were met 
  • early roles and responsibilities 
  • expected and actual challenges and 
  • proposed ways of addressing these.

In this way you can build up a picture of how the project has evolved and if it is still serving its originally intended purpose. 

There are many different methods of gathering data from different sources of information, each with its own advantages and disadvantages, such as desk research, interviews, case studies, observations, and surveys. See the section 9, ‘Gathering Information’, for more ideas on how to collect information from a variety of sources.

8.9 Design of impact evaluation 

Impact evaluation is the type of evaluation that focuses on the factors which caused the observed change in the target group of the evaluated project. Using a combination of the following strategies can support the conclusions drawn (Peersman, 2015):

  • estimating what would have happened in the absence of the evaluated project, compared to the observed situation,
  • checking the consistency of evidence for the causal relationships described in the project logic framework,
  • ruling out alternative explanations, through a logical, evidence-based process.

There are three designs that allow for implementing these strategies. Experimental and quasi-experimental designs are based on the principle of comparing the situation before and after an intervention in two groups: a treatment group, consisting of participants benefiting directly from the evaluated project, and another group, which includes individuals with similar characteristics who were not supported by this intervention.

  1. Experimental designs – in which 'the other group' is called a control group, and assignment to this group, as well as to the treatment group, is based on a random mechanism. Due to these features, this design is often called a randomized controlled trial (RCT).

The main precondition for an RCT is that the number of individuals interested in your project is greater than the number of participants you can support. The treatment group and the control group should be similar in terms of features such as age, education level, employment status, etc. Randomized selection can be conducted in various ways, e.g., computer-generated assignment or a lottery; a minimal sketch of such an assignment follows after this list. The main principle is that all individuals have an equal chance of being assigned to either group.

  2. Quasi-experimental designs – in which 'the other group' is called a comparison group and is constructed using various techniques to secure optimal similarity, or a controlled difference, to the treatment group. Selection to both groups is based on a non-random mechanism (e.g., the compared groups contain only people who were close to the project admission threshold, selected from among project recipients and from candidates who were not included in the project).
  3. Non-experimental designs – which look systematically at whether the evidence is consistent with what would be expected if the intervention were producing the planned impacts (e.g., the sequence and timing of project activities and effects proceed as assumed by the project's logic), and also at whether non-project factors could provide an alternative explanation of the observed effects.
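As referred to above, here is a minimal sketch of computer-generated random assignment for an RCT – our own illustration with made-up applicant IDs and group sizes, not a prescribed procedure:

```python
# A minimal sketch (illustrative assumptions): random assignment of applicants
# to treatment and control groups, as used in randomized controlled trials.

import random

applicants = ["A01", "A02", "A03", "A04", "A05", "A06", "A07", "A08"]
N_PLACES = 4                  # number of participants the project can support

rng = random.Random(2022)     # fixed seed so the assignment can be reproduced
shuffled = applicants[:]
rng.shuffle(shuffled)         # every applicant has an equal chance

treatment_group = sorted(shuffled[:N_PLACES])   # admitted to the project
control_group = sorted(shuffled[N_PLACES:])     # not admitted; used for comparison

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```

Fixing the seed makes the assignment reproducible, so the selection can be documented and audited later.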

 

9. Gathering Information

Figure 3: Evaluation of Youth Entrepreneurship Support Actions Life Cycle – Phase 2

This section of the toolkit looks at: 

  • Different ways of gathering information 
  • How you might benefit from a particular source of information by using respective evaluation methods. 

An evaluation can use quantitative or qualitative methods, and most often includes both as they can complement each other and mutually balance their weaknesses (CTSA, 2011). 

9.1 Quantitative methods

These methods provide quantitative information, which measures the scale (how much), the intensity (to what extent) and the frequency (how often) of the examined phenomena – for example, the effects of project implementation (e.g., the number of people who used the knowledge and skills acquired in the project to develop and implement their business plans).

Quantitative data can be collected through surveys (using, e.g., questionnaires, pre-tests and post-tests), observation, a review of existing documents and databases, or the gathering of clinical data, using various communication channels. The COVID-19 pandemic in particular resulted in the wider use of online communication channels, methods (e.g., computer-assisted web interviewing – CAWI) and tools (e.g., Mentimeter).

Surveys 

Surveys based on self-administered questionnaires (paper or online) are a quick and inexpensive way of finding out what people think and do, and what their situation is. Survey questions can also be posed by pollsters in telephone or face-to-face interviews, but such methods are much more expensive.

For example, you can survey project recipients to see what they have gained from the project and from its exposure to business or industry experiences. You should survey project participants to find out what they have learned from participating in the respective activities, including those they developed themselves within the project, and whether they have any suggestions for improving either outcomes or processes. You may also survey the parents or guardians of the youth participating in a project to see what impact the project has had on their children. If you want to show the change caused by the evaluated intervention, the information should be collected before and after the implementation of the project (e.g., the level of knowledge or of certain skills that were developed in the project).

Remember that the questions in your evaluation tools (whether self-administered questionnaires or questionnaires administered by pollsters) must first of all be strictly connected with the chosen evaluation criteria and evaluation questions. Once your evaluation tools are ready, you can conduct your own online survey using free online software.

Quantitative data are easier to analyse than qualitative data and can be generalized. Generalization means that the findings can be applied to a wider group (the population) than those participating in the survey (the sample), provided that the sample is big enough and accurately represents the population. If collected properly, the data are reliable, and their precision can be estimated. However, collecting quantitative data can be challenging due to difficulties in obtaining sampling frame data and the contact details of sampled people, difficulties in reaching the sampled respondents, respondents' lack of time and motivation, and the serious consequences of imperfections in questionnaire design. There are also limits to the type of information you can get from quantitative data: it does not provide insights into, or explanations of, the context and more complex problems, such as the causes and consequences of the studied issues.
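To illustrate how the precision of a sample-based finding can be estimated, here is a minimal sketch with made-up figures – our own example, assuming a simple random sample and a 95% confidence level:

```python
# A minimal sketch (illustrative figures): estimating the precision (margin of
# error) of a survey finding from a simple random sample.

import math

n = 120          # sample size (survey respondents)
successes = 78   # e.g., respondents who declare improved entrepreneurial skills

p = successes / n                         # sample proportion
z = 1.96                                  # z-score for a 95% confidence level
margin = z * math.sqrt(p * (1 - p) / n)   # margin of error

print(f"Sample estimate: {p:.1%} +/- {margin:.1%} (95% confidence)")
# With n = 120 and p = 65%, the margin is about +/- 8.5 percentage points,
# so the population value likely lies between roughly 56% and 74%.
```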

To show the change caused by the evaluated intervention, the data should be collected before (pre-test) and after (post-test) the implementation of the project – for instance, the level of knowledge or skills developed in the project, or the number of businesses registered. Various tests and databases can be used for that purpose.
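A minimal sketch of such a pre-test/post-test comparison is shown below; the scores are made up, and the related-samples t-test from the SciPy library is one common option among several (assuming interval-scale scores for the same participants in both tests):

```python
# A minimal sketch (illustrative scores): comparing pre-test and post-test
# results of the same participants to check whether the average change is
# statistically significant.

from scipy import stats

pre = [45, 60, 52, 70, 48, 55, 63, 41]    # knowledge test before the project
post = [65, 72, 50, 88, 61, 70, 75, 52]   # the same participants afterwards

mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)   # paired (related-samples) t-test

print(f"Average change: {mean_change:+.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The improvement is unlikely to be due to chance alone.")
```

Note that without a control or comparison group (see section 8.9), even a statistically significant change cannot be fully attributed to the project.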

9.2 Qualitative methods 

These methods serve a better knowledge, deeper understanding and explanation of the project process and its effects. Qualitative data can provide you with in-depth information, answering questions about what happened and why – e.g., what went well or did not go so well, why this was so, and what factors contributed to the achieved results.

To collect qualitative data, you can use in-depth interviews (individual interviews, or group interviews, so-called focus groups), observation and case studies, as well as desk research, that is, the analysis of documents, including testimonials, diaries, staff journals, logs, etc. Let's review them briefly.

 

Individual interview 

An interview is basically a conversation and can be more or less structured: structured (based on a scenario with a prepared set of questions), semi-structured (with prepared questions that you might add to, adapt or omit as the interview progresses) or unstructured (where interviewees talk freely, with only occasional prompt questions from the interviewer).

Interviews can be adapted to fit in with the availability of your interviewees – during project hours, face-to-face or by phone or internet communicator, onsite or offsite. 

How you conduct your interviews will depend on the project you are evaluating. For example, participants in a mentorship or apprenticeship project might have less formal, unstructured conversations about how the activities, project or sessions are going. When interviewing participants in the idea phase of a project, you may want to use semi-structured interviews by phone or in person.

Interviewees can be given some guiding questions (but not the actual scenario – set of interview questions) beforehand as this allows them time to think about their responses.

 

Focus group 

A focus group is a kind of interview conducted in a small group (5-8 persons), led by a moderator. Focus groups usually bring together a range of people to discuss a common topic. They can be useful if you want to bring together, say, a few participants, parents, staff, trainers, mentors, project stakeholders or partners to discuss their different perspectives on a project, its impact, and how they think it may be improved.

It can help to have one person facilitating the discussion (a moderator) while a second person takes notes. The most convenient approach (as with individual interviews) is to ask interviewees for consent to record the session. You can then summarise the findings from the discussion in writing or verbally.

 

Observation 

Observation is a data collection method based on the careful and systematic experiencing (by seeing and hearing) of various events and phenomena, i.e. observing participants' behaviour in a natural situation (Bartosiewicz-Niziołek, M., Nałęcz, S., 2021). It can be conducted without any tool (free-description observation) or be supported by an observation sheet or checklist that focuses the observer's attention on certain issues, such as the activity level and interactions between training participants, the trainer's engagement, and the way of presenting issues, involving trainees and conducting the training.

You can also use observation checklists or make notes to record the activities of participants or staff, as well as their responses. An observation sheet enables you to record the dynamics of the interaction between participants and staff, or among participants. To measure participant engagement, you might use a simple yes/no or tick/cross system to record participants' body language or verbal comments, the kind of questions they are asking, the amount of confidence being shown, or the extent to which they are following staff or mentor suggestions. You can also use a simple tally to record, say, the number of participants engaged in activities at various points during the course of the project.
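A minimal sketch of such a tally – with made-up session names and observation marks – could look as follows:

```python
# A minimal sketch (illustrative data): tallying engagement marks from an
# observation sheet. Each session entry records, per participant, whether they
# were actively engaged (True) or not (False) at a given observation point.

observations = {
    "session_1": [True, True, False, True, True],
    "session_2": [True, False, False, True, True],
    "session_3": [True, True, True, True, False],
}

for session, marks in observations.items():
    engaged = sum(marks)              # count of 'yes' ticks
    share = engaged / len(marks)
    print(f"{session}: {engaged}/{len(marks)} participants engaged ({share:.0%})")
```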

An observation checklist/observation sheet should be focused on the information needed to answer the evaluation question and ought to be prepared some time before coming to the observation site.

It is a good idea to gather other sources of information besides observed information for a more complete picture of participant outcomes.

 

Case study 

A case study is an in-depth analysis of a particular story or experience. It can be a powerful illustration of how an activity or project has had an impact on an individual, or within a chosen project or activity. Case studies capture the story behind the statistics or other information and are particularly effective when combined with various sources of information.

For example, a case study might concern a particular participant, highlighting the skills and knowledge gained and the benefits of participation identified by this person. By bringing together the participant's voice with other evidence, it is possible to build up a broad picture of the impact the project causes and to understand the processes which contributed to the results. This other evidence could include comments you have gathered from the business coordinator of the project, other business staff who contribute their skills, mentors, and several parents or guardians of the participants involved.

 

Desk research (analysis of documentation)

Documents can be a useful source of information about the beginnings of your project, as well as providing information on the activities carried out, project beneficiaries and their progress, etc. Keep in mind the purpose of the evaluation when selecting materials for review. In particular, if the logical framework is missing, such documents can help you develop it at later stages of the project.

Relevant materials might include early planning documents such as business or strategic plans, communications between partners and advisory services, financial records, minutes of meetings, letters of agreement or memoranda of understanding and the like. 

If your project is already well under way (or has been active for a number of consecutive years), it is likely you will have access to some of these documents, and they can be included in your evaluation (e.g., reports, needs assessments, knowledge or skills tests, trainers' and trainees'/interns' write-ups, checklists). You may also gather information from individuals who have been involved in the project from the beginning. You could find out what they remember about the reasons for starting a particular project, session, or other activity, what they hoped to achieve, and how well they think the early objectives and expectations have been fulfilled.

One of the simplest documents you can use is a checklist that is a quick and easy way of recording basic information. It is used when there are specific items or actions in a project to be recorded.

You can use a checklist to: 

  • Summarise actions or activities that participants have engaged in,
  • Track participants' progress over time,
  • Record observations (more on these below),
  • Show the tasks a participant has completed, or which aspects of a task have been done.

Checklists may be maintained by project or support action staff, by participants or by all parties. An example of a checklist can be found in the later pages of the toolkit.

If you are just starting out in a youth entrepreneurship support action, keep your early records that provide relevant information about the project. They can be a useful source of information to track progress of your activities or interventions.

When analysing documentation, you can also examine such documents as diaries, journals and logs containing personal reflections, which can give you a rich picture of people's experiences. Entries can show how a recipient's thinking or understanding has changed as a result of participating in a project. They might show what the participants were expecting at the beginning and whether or not these expectations have been met. They can show if a trainee/intern has become more confident or gained particular skills or knowledge. Staff journals can also track student development or the staff's own growth in professional understanding. In turn, partners may choose to keep a log of meetings and activities to track progress or document what has happened. These reflections can be more or less formal. This kind of information may be especially useful for evaluations focused on change or growth over time. It is worth remembering to obtain consent from the owners of these diaries, journals, and logs to ensure compliance with the General Data Protection Regulation.

You can also make use of a testimonial or reference that enables people outside the project to show their support for it. Testimonials offer evidence about the strengths and weaknesses of a project. They are typically, but not always, a response to your request. Like case studies, testimonials might provide a more complete picture of what has been achieved from the perspective of a project participant or even someone outside the project. A letter from the local council, appreciating the use of an innovative business solution to solve a community problem, for example, might be used to demonstrate the impact of the project in a particular area. Another example could be if a local chamber of commerce passes on their feedback from businesses regarding the conduct and performance of participants during mentorship or apprenticeship activities delivered as part of a collaborative project.

Moreover, visual records can also be analysed as part of desk research. These records may serve as evidence of engagement in an activity or show the impact of a project on participants. Photos and videos can set the scene for an evaluation by showing the environment in which the project is taking place and the people involved. Depending on the nature of the activities that participants are involved in, photos can show progress towards an end product, as well as the end product itself. Consent (where necessary) should be obtained from participants (and from mentors and other stakeholders) prior to taking these images. Where applicable, record who took the images and for what reason, to give the images context.

 

Skills assessments

In order to show participants’ progress, some youth entrepreneurship support action implementers may also routinely conduct tests and other assessments that can be used within desk research. There are several tests that could be used in these circumstances to show participants’ performance levels before and after engagement in a session or series of activities. However, it may be difficult to show that there has been a change in performance as a result of your project activities alone. Keep in mind the external factors that might have contributed to, or counteracted, the achievement of the observed results. You can find more on skill testing in section 12.4.

9.3 Advantages and limitations of some common information collection methods 

You may not be entirely sure about which information collection methods you should use for your evaluation. Some will be more suited to your purposes than others. 

To help you decide what information would be best for your evaluation, look at the following table, which shows the main advantage and one key limitation of each information collection method. However, it is very important to remember that in evaluation research you should always use both qualitative and quantitative methods in order to obtain complementary information and thus stronger evidence and a fuller picture of the evaluated project and its effects.

Table 6: Advantages and limitations of collection methods

Information Collection Method | Main advantages | Main limitations
Individual interviews | Allow you to capture a range of in-depth participant perspectives | Can be time-consuming to conduct and to analyse; require a skilled interviewer
Focus groups | Interactions among participants provide a wider perspective and enable the confrontation of various opinions | Can be difficult to arrange (face-to-face version) and time-consuming (as above); require a skilled moderator
Case studies | Serve as examples and can provide an in-depth, holistic picture of the impact on participants | Concern particular instances and are not generalisable
Observation | Allows you to analyse beneficiaries’ reactions in their natural context and to see participant involvement first-hand | Shows only observable behaviour, not what participants are thinking
Desk research (analysis of documents such as diaries, journals and logs, testimonials, photographs and videos, tests and other assessments) | Can give context to the project, including early expectations and objectives | Early planning documents can sometimes be hard to find
Surveys | A reliable, comparable, and generalisable, as well as quick and low-cost, way of capturing views, behaviour, and facts | Do not allow for more in-depth perspectives

Source: own elaboration 

Remember that in evaluation research, different kinds of methods should always be used together, as they complement each other. Using both qualitative and quantitative methods enables you to overcome their respective limitations.

9.4 EVALUATION TOOLS 

Evaluation tools are assigned to respective methods of collecting information from given sources. See the table below to find out what tool corresponds to a given method.

Table 7: Overview of evaluation methods and tools

EVALUATION METHODS | EVALUATION TOOLS
Quantitative method – survey | Self-administered questionnaire (paper, online); questionnaire administered by a pollster
Qualitative method – interview (individual or group) | Interview scenarios
Qualitative method – observation | Observation sheet or checklist
Qualitative method – desk research | Instructions for the analysis of documents
Source: own elaboration

Most often, evaluation tools are prepared from scratch according to the evaluation concept, as the questions included in the tools must correspond to the evaluation’s purpose, criteria, and questions. Some example questions that you can use to develop your own tools were presented in section 8.

10. Analysing Information

 

Figure 4: Evaluation of Youth Entrepreneurship Support Actions Life Cycle – Phase 3

This phase is about analysing the information you have gathered in order to make sense of it. By the end of this phase you should know the answers to your evaluation questions. This is a critical step toward identifying whether the evaluated project has made a difference and achieved the desired outcomes and impact, met recipients’ needs, and was useful and effective.

You will need to analyse information that is both quantitative and qualitative in nature. Quantitative information is essentially numbers: first of all, data collected using surveys, but also test scores, attendance and/or absenteeism figures, or the number of participants who meet minimum benchmark standards. Qualitative information, on the other hand, captures people’s views, observed behaviours, experiences, and perspectives.

By analysing a variety of information and using different methods you can get a reliable picture of your project and highlight any measurable improvements it has brought about, as well as capture how stakeholders feel they have benefited.

Analysing your data by looking for patterns or trends can tell you which elements of your project have been a success and which you could change to improve your outcomes. For example, if you and your partners run a project to enhance participant engagement, you could track participants’ performance over time by analysing changes in attendance or in final outputs and outcomes.

To fully understand the impact of your project, you may also want to analyse feedback from the project staff or professional advisors and mentors on participant attitudes. If you are an entrepreneurship academy working with NGOs to provide mentoring support to participants in an entrepreneurship course, you might opt to measure success both in terms of better participant engagement and in terms of staff attitudes towards their employer and tasks.

10.1 How to analyse the collected information 

There are four steps in the data analysis that need to be completed before you can draw findings and interpret the results of the evaluation; a short script sketch illustrating the first two steps follows the list.

  • Anonymization – make sure that any personal or sensitive data are removed from the data sets (e.g., names, contact details, dates of birth, etc.),
  • Check the quality of the information – check the completeness and consistency of the data, correct obvious mistakes, remove information whose accuracy cannot be verified, and select the data relevant to the evaluation,
  • Coding and categorizing the data using a set of codes (e.g., symbols or names of the categories of data),
  • Analysis and interpretation. 
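
Below is a minimal sketch, in Python with the pandas library, of what the anonymization and quality-check steps can look like in practice. The file name and column names (survey_responses.csv, name, email, date_of_birth, skills_rating) are hypothetical placeholders, not part of the toolkit’s own materials.

    # Minimal sketch of anonymization and quality checking (hypothetical names).
    import pandas as pd

    df = pd.read_csv("survey_responses.csv")

    # Step 1: anonymization - drop columns holding personal or sensitive data
    df = df.drop(columns=["name", "email", "date_of_birth"])

    # Step 2: quality check - remove duplicates and fully blank responses
    df = df.drop_duplicates()
    df = df.dropna(how="all")

    # Keep only records with a plausible answer on a key 1-5 scale question
    df = df[df["skills_rating"].between(1, 5)]

    df.to_csv("survey_clean.csv", index=False)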

 

What are the steps for analysing qualitative data?

Analysis of qualitative data involves examining, comparing, contrasting, and interpreting patterns, and consists of the following steps: identifying themes, coding, clustering similar data, and reducing the data to meaningful and important points, as in grounded theory-building or other approaches to qualitative analysis (NCVO a), n.d.). Here are the main steps of this analysis (a short coding-and-clustering sketch follows the list):

  • Carefully read through and get to know your data.
  • Identify themes or categories that are relevant for your analysis. Name each of the categories (the names represent the “codes“). The codes can be defined prior to the analysis or when you start working with the data. 
  • Go through the transcripts of interviews, focus groups, or open-ended survey responses and highlight key quotes. Assign the corresponding code to each highlighted quote. If more than one person is coding, code and compare a few sections together at the beginning. This way you can ensure the data is coded in a consistent way.
  • Create groups of quotes (cluster them) with the same code. 
  • Identify specific patterns that become apparent after clustering the data. You may also realise that additional data needs to be collected, or that the validity of some data needs to be confirmed using other sources of information. For example, if there are contradictory quotes in the data you collected, you might decide to contact the facilitator of the training to obtain additional data. You might also need to do additional analysis; for example, some categories may need to be divided into sub-categories. 
  • Use relevant quotes to describe findings and interpret them.
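
To make the clustering step more concrete, here is a minimal Python sketch that groups already-coded quotes by their code and counts how often each theme occurs. The quotes and code names are invented examples, not data from any real project.

    # Minimal sketch of clustering coded quotes by theme (invented examples).
    from collections import defaultdict

    coded_quotes = [
        ("The mentor helped me write my first business plan", "mentoring"),
        ("I still don't know how to register a company", "admin_barriers"),
        ("Meeting other founders kept me motivated", "peer_support"),
        ("My mentor's feedback changed how I price my product", "mentoring"),
    ]

    # Cluster the quotes by their code
    clusters = defaultdict(list)
    for quote, code in coded_quotes:
        clusters[code].append(quote)

    # List themes from most to least frequent, with their quotes
    for code, quotes in sorted(clusters.items(), key=lambda kv: -len(kv[1])):
        print(f"{code}: {len(quotes)} quote(s)")
        for q in quotes:
            print(f"  - {q}")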

 

What are the steps for analysing quantitative data?

Analysis of quantitative data is based on summarising the data collected from questionnaires and tests. Before you can do so, you need to make sure your data is in a format you can use. This will depend on the software you plan to use (e.g., MS Excel or Google Sheets). If you have, for example, collected information using paper questionnaires, you will need to enter these into a spreadsheet or database. Then remove any mistakes, e.g., blank responses, duplicates, and any obvious errors, and make sure that each of your variables is in the right format, e.g., dates are formatted as dates, numbers as numbers, amounts of money as currency (NCVO b), n.d.).

Please see the tables in sections 12.5.3 and 12.5.4, which provide descriptions of common techniques and calculations you could use. For these simple calculations you can use a spreadsheet program like MS Excel to organise and analyse your data. After completing the initial calculations, you may identify areas where more detailed analysis could provide additional insight. A short sketch of such calculations follows.
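
As an illustration, here is a minimal Python sketch of such simple summary calculations, assuming the cleaned file and the hypothetical column names used in the earlier sketch; the same counts and percentages can of course be produced directly in MS Excel or Google Sheets.

    # Minimal sketch of frequencies, percentages, and averages (hypothetical names).
    import pandas as pd

    df = pd.read_csv("survey_clean.csv")

    # Frequencies (counts) and percentages for a categorical variable
    counts = df["employment_status"].value_counts()
    percentages = (counts / len(df) * 100).round(1)
    print(pd.DataFrame({"count": counts, "percent": percentages}))

    # Mean and median of a numeric variable, e.g., a 1-5 skills rating
    print("Mean rating:", round(df["skills_rating"].mean(), 2))
    print("Median rating:", df["skills_rating"].median())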

In the next step you should think about the form in which to present the data. Keep in mind that the presented data should be easy for your audience to understand. In general, presenting your data in a table or a chart is recommended, especially for the data that is most important to the people using your evaluation.

Now you should be able to draw key findings. There are some things you should consider (Cottage Health, n.d.):

  • Is 80% (for example) good or bad? How do you know? You may be able to decide on this by comparing your data to the previous year’s data, or to other similar interventions.
  • Are there any other patterns, themes, or trends? For example, does one group consistently achieve more, or less, than other groups?
  • Can you explain some of the less common responses? You may need some qualitative analysis to help you here.
  • Is there anything in the data that has surprised you?
  • Do you know anything about why some of the results are as they are? For example, can you link your percentages to qualitative data that explains why some people achieved an outcome while others did not?

 

10.2 What to keep in mind when analysing collected information

Tips for analysing information 

  • You can convert your information from words into numbers to make analysis easier. This might be done by categorizing responses under a set of headings or themes that capture the relevant aspects of the project.
  • This approach will allow you to summarise large amounts of information, for example as frequencies (counts), percentages, or rankings. It also lets you look at more than one characteristic (or ‘variable’) at a time if you want to know how they might be related. For example, you might be interested in looking at how attendance in a project, or the outcomes achieved, differ for girls and for boys (see the cross-tabulation sketch after this list). 
  • Depending on the quantity and kind of information collected, you can analyse it yourself, seek help from a colleague or partner or from an external expert/organisation specialised in data analysis, or use a statistical software package. There are also dedicated computer programs for the analysis of quantitative and qualitative data – both of which can be expensive and require some advanced skill to operate.
  • In some cases, you might be able to compare your results with national or regional results. However, if you do take this approach, remember that any comparison needs to take into account the context in which the data were collected.
  • Patterns, trends, and themes can help you answer your evaluation questions, identify any unanticipated outcomes, and reveal possible gaps in your information. 
  • Sometimes your analysis will involve comparing what people say or write at the beginning of an activity or project with what they say and write afterwards. (Such pre- and post-testing information collection can use quantitative or qualitative data.) 
  • Your project might even lend itself to more regular collection so consider ways that data can be collected at multiple points during your project and then analysed to best identify your achievements. 
  • Your analysis should be ‘fit for purpose’; that is, it need only be suitable to provide the evidence that your project has or has not achieved the desired outcomes. For a one-off project this might be fairly basic pre- and post-comparisons, showing if and how things have changed. For longer-term projects, or where further funding is being sought, you might need more depth to your analysis to identify whether there are sufficient measures of success to justify continuing your project.
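
The following minimal Python sketch illustrates two of the tips above: looking at two variables at a time (a cross-tabulation of outcomes by gender) and a simple pre/post comparison. All column names (gender, achieved_outcome, pre_score, post_score) are hypothetical.

    # Minimal sketch of a cross-tabulation and a pre/post comparison.
    import pandas as pd

    df = pd.read_csv("survey_clean.csv")

    # How does achieving the outcome differ for girls and boys? (row percentages)
    table = pd.crosstab(df["gender"], df["achieved_outcome"], normalize="index")
    print((table * 100).round(1))

    # Compare what participants scored before and after the activity
    print("Mean pre-score:", round(df["pre_score"].mean(), 2))
    print("Mean post-score:", round(df["post_score"].mean(), 2))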

 

How to ensure Ethical Data Management

Now that you have collected all this data and are looking forward to using it, it is important that you follow the legal provisions that pertain to ethical data management. Below is guidance on how you can ethically manage the data obtained from your evaluation activities.

Following Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), a full-fledged process of mapping the GDPR-based requirements has been implemented, with the purpose of aligning the project’s interventions with the applicable legislation. 

The ethics appraisal process is intended to fully transpose Article 19 of Regulation (EU) No 1291/2013, which states that all research and innovation activities carried out shall comply with ethical principles and relevant national, Union, and international legislation, including the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights and its Supplementary Protocols. 

Considering the large-scale data management involved in evaluation-related activities, the implementation phase must ensure that the data collection process is aligned with the acquis communautaire, with due regard for the techniques used to process data falling under the GDPR. For this purpose, you are strongly advised to appoint an in-house Data Protection Officer in charge of safeguarding fully compliant data processing.

 

10.3 Drawing conclusions and recommendations

 

What is the difference between findings, conclusions, and recommendations? Follow the steps described below to draw correct, evidence-based conclusions and recommendations (Kuhn, 2020). You can reuse the table you used when analysing the data and simply add new columns for findings, conclusions, and recommendations:

Findings are validated data without any interpretation. Using expressions based on the data, e.g., “x number of females out of y number of participants attended the training” keeps your findings as clear and undiluted as possible. In the case of qualitative analysis, it can be hard not to jump directly to conclusions. However, anything else at this stage could be just an unfounded assumption or hasty conclusion that can harm the validity of the analysis. 

Once the findings are complete, the person analysing the data can move to the next step – drawing conclusions. Conclusions answer the question “so what?” that can arise when looking at the findings. Conclusions should be accompanied by interpretations or explanations, created by looking at the findings in the bigger picture. The more evidence-based explanations you can provide, the easier it will be to translate them into data-driven reports.

The next step in the analysis chain answers the question “now what?”: what would you recommend doing to build on what works in a project, or to improve what does not? Recommendations need to be specific and actionable.

 

Example 1 – a recommendation that is too broad: Invest more resources in staff

 

Example 2 – a good recommendation: Invest more resources in improving operational enabling environment factors, specifically on capacity-building for staff around project management and business management best practices. If capacity building efforts are paired with establishing and training on standard operating procedures for core communications and operations processes, this can greatly improve efficiencies.

11. Using Information

Figure 5: Evaluation of Youth Entrepreneurship Support Actions Life Cycle – Phase 4

 

This phase in the cycle is about sharing your evaluation results and making some informed decisions about your project and its future directions. 

11.1 Sharing your findings 

  • Sharing may be for compliance purposes, but it can also be a valuable opportunity for others to have their say about the evaluation findings. 
  • Sharing may help stakeholders engage with the project, influence the next steps of the project, or influence the partners to go back and have another look at their analysis. 
  • Who you decide to share your findings with, and how you choose to do so, will depend on the expectations of each party in the project and on your reason for evaluating in the first place.
  • If funding is involved and you need to satisfy a group or organisation, you may want to write up your evaluation findings so that they answer the questions of interest to that audience. 
  • If you want to show the world, or just the local community, what a great project you have, the findings can be made public. For example, you could put a summary in your newsletter or annual report, or on your website.
  • Visualisation of evaluation results can help deliver the message to the recipient. Well-designed infographics can illustrate the main features and relationships in a simple and clear way. Of course, the type of visual elements should be selected according to the type of recipients (a minimal charting sketch follows this list).
  • Share data with trainers, mentors, and lecturers. They can use it to improve and adapt their services.
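
As a minimal illustration of visualising results, the Python sketch below draws a simple bar chart with the matplotlib library; the outcome categories and numbers are invented for the example.

    # Minimal sketch of a bar chart of participant outcomes (invented numbers).
    import matplotlib.pyplot as plt

    outcomes = {"Found a job": 14, "Started a business": 6, "In further training": 9}

    plt.bar(list(outcomes.keys()), list(outcomes.values()))
    plt.ylabel("Number of participants")
    plt.title("Participant outcomes six months after the project")
    plt.tight_layout()
    plt.savefig("outcomes_chart.png")  # share the image in a report or newsletter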

11.2 Making informed decisions 

You have conducted your evaluation, shared your findings and are now in a position to make informed decisions about whether to go on as you are, or to make changes to your project, or both. 

For example, an evaluation might show that your evaluated youth entrepreneurship support action should continue to work collaboratively with the same partner but on a new challenge or opportunity. Or an evaluation might show that a similar activity could be carried out but with a different partner or set of mentors. 

This is the point where you and your partners need to talk openly about the future of the evaluated youth entrepreneurship project.

Below are some examples of questions that could guide this stage in your evaluation:

  • Will you continue to do things in the same way? 
  • Do you need to evolve or enrich the project? 
  • Could your model be replicated elsewhere? 
  • Has the project run its course? If so, what is the best way to finalise the project? 
  • How will you convey your decision to stakeholders? 
  • Which part of the project was the most beneficial for recipients and which the least?

12. Toolkit

This Toolkit contains templates, tables, and checklists that you can use to help you at different stages in your evaluation (the list is not exhaustive).

  • Evaluation and Development Cycle Checklist
  • Tools to use in the Preparing phase of the evaluation
  • Tools to use for Gathering Information
  • Tools to use for Analysing Information
  • Tools to use for Using Information
  • Soft skills required in successfully conducting a youth entrepreneurship project

 

12.1 Difference between evaluation, monitoring and audit

Table 8: Monitoring, audit, and evaluation

 | Monitoring | Evaluation | Audit
Definition | Ongoing analysis of project progress towards achieving planned results, with the purpose of improving management decision-making | Assessment of the efficiency, impact, relevance, utility, and sustainability of the project’s actions | Assessment of: the legality and regularity of project expenditure and income; compliance with laws and regulations; the efficient, effective, and economical use of project funds
Who? | Internal management responsibility | External firms (research/consulting), internal specialists, or self-evaluation | Usually external agencies
When? | Ongoing | Usually at completion (ex-post), but also at the beginning (ex-ante), mid-term, and ongoing | Ex-post, at completion
Why? | Check progress, take remedial action, and update plans | Learn broad lessons applicable to other projects; provide accountability and development | Provide assurance and accountability to stakeholders

Source: European Commission – Europe Aid Cooperation Office, 2004

12.2 Evaluation and Development Cycle Checklist

Table 9: Evaluation and Development Cycle Checklist

Mark each item as addressed: yes (✓), no (✗), or unsure.

PREPARATION

  • What do you need to find out? What information will help you?
  1. Do you have clear and shared objectives for the evaluation?
  2. Is there a key evaluation question (or questions) to guide your evaluation? 
  3. Have you identified the information that you need to gather? 
  4. Do you have the skills and knowledge to gather it? Who will conduct the specific evaluation tasks? Is any external support needed? 
  5. Do you know your stakeholders? Do you know if, how and under what conditions they can be involved in the evaluation? Do you know their expectations of the evaluation? 
  6. Do you know what relevant information is already available?
  7. Do you know who can help you to find the information you need?
  8. Will each ‘partner’ have an opportunity to contribute to the evaluation?

GATHERING INFORMATION

  • How will you gather your information?
  1. Is your focus on gathering relevant information rather than a lot of information? 
  2. Do you know with what methods the information is to be gathered (quantitative, qualitative)?
  3. Have you established a process for gathering information (existing and/or additional)?
  4. Who will collect it, and how long will it take?

ANALYSING INFORMATION

  • How will you analyse the data? 
  • What quantitative and qualitative data do you need to analyse? 
  • What analytical methods will you use? 
  • Do you have internal capacities to analyse the data? 
  • What does the information you have gathered tell you?
  • Have you identified the main themes, patterns and trends (over time)? Are you clear about the main outcomes from your project?
  • Are there any additional (i.e. unanticipated) outcomes from the project?
  • Have you identified ways in which your Youth Entrepreneurship Support Action might be improved?

USING INFORMATION

  • What can you learn from the evaluation results? 
  1. Does your project respond to the needs of its recipients/local communities?
  2. To what extent were the objectives and expected outcomes achieved?  
  3. What needs to be improved in our existing or future projects? 
  4. What internal processes need to be improved? 
  • Do you have the communication channels and tools to share and promote what you have learned?
  • Is there scope to expand or build on the project?
  1. Have you provided relevant feedback to your key stakeholders (answer to the questions of their interest in a written or verbal form)? Have participants in the relationship been invited to discuss the findings (where necessary)?
  2. Is the status of the project – complete, ongoing etc – understood? Have the stated objectives for the evaluation been achieved?
  3. Do you need to make changes to the project?
  4. Have you agreed how you will proceed next with the project?

Source: own elaboration 

 

12.3 Tools to use in the phase of preparing the evaluation

12.3.1 Logical framework matrix 

 This is a tool with which you can reconstruct the logic behind the project you want to evaluate. 

Figure 6: Logical framework matrix


Source: Department for International Development, 2007

The table below provides additional information on the elements of the Logical framework matrix.   

Table 10: Difference between inputs, activities, outputs, outcomes, and impact

Inputs/ resources Inputs are those things that we use in the project to implement it, including human resources, finances, or equipment. Inputs ensure that it is possible to deliver the intended results of a project.
Activities Activities are actions associated with delivering project goals. In other words, they are what the personnel/employees do in order to achieve the aims of the project.
Outputs These are the first level of results associated with a project. Often confused with “activities”, outputs are the direct, immediate-term results associated with a project. In other words, they are usually what the project has achieved in the short term. An easy way to think about outputs is to quantify the project activities that have a direct link to the project goal. For example, project outputs could be: the number of community entrepreneurship trainings carried out, the number of meetings with successful entrepreneurs, or the number of educational publications published.
Outcomes This is the second level of results associated with a project and refers to the medium-term consequences of the project. Outcomes usually relate to the project objectives.  For example, it could be the improved level of knowledge about business-related topics or establishment of a start-up.  Nevertheless, an important point to note is that outcomes should clearly link to project goals.

Impact It is the third level of project results: the long-term consequences of a project, reaching wider than the project participants. Most often it is very difficult to ascertain the exclusive impact of a project, since several other interventions can lead to the same goal. Examples would be an increased number of new businesses in the region, or a reduced unemployment or NEET rate in a given city.

Source: Difference between inputs, activities, outputs, outcomes and impact, 2013

12.3.2 Candidate Outcome Indicators

Indicators can be developed for various stages of your project. Figure 7 provides examples of outcome indicators for an Employment Training / Workforce Development Program (The Urban Institute, What Works, n.d.). 

Depending on the complexity and length of your programme, you can divide your programme into several stages and develop outcome indicators for individual stages of your programme. 

Figure 7: Example of candidate outcome indicators


Source: The Urban Institute, What Works, n.d.

12.3.3 Identifying your evaluation stakeholders

In any Youth Entrepreneurship Support Action there can be multiple stakeholders. Knowing your stakeholders can help you identify what kind of questions they would like to answer with the findings of the evaluation, in what form and when they may need it. 

This stakeholder inventory is intended to help you identify who may be interested in your evaluation. Think about who needs to participate in the evaluation and who you will need to share the findings with so that the recommendations can be implemented.

Table 11: Stakeholder inventory

Potential stakeholders of the evaluation Yes / No
Project recipients (who currently participate in the project)
Graduates (recipients who already finished participation in the project)
Trainers
Mentors
Innovation Hubs (representative)
Business Accelerators (representative)
Private Equity and Venture Capital Firms
Project staff
Coaches
Angel Investors
Researchers
Local Administrative Authorities (representative)
Project partners
Volunteers
Other institutions executing similar projects (representative)
Local employers 
Who else? (Add those that specifically apply to you)    

Source: own elaboration 

Having identified your stakeholders, you should also think about the added value they could bring to your evaluation and their motivation to get engaged in the process. Based on this information you can decide about the type of information, key messages, as well as communication channels to be used. 

Stakeholders can be categorized into 4 groups in terms of their influence, interest, and levels of participation in your project (Product plan, n.d.).

Figure 8: Power-Interest Grid


Source: Product plan, n.d.

  1. High power, high interest: the most important stakeholders. You should prioritise keeping them happy with your project’s evaluation – keep them informed, invite them to important meetings, ask for their feedback.
  2. High power, low interest: Because of their influence, you should work to keep these people satisfied. But because they haven’t shown a deep interest in your project, you could turn them off if you over-communicate with them. You should consider the frequency of contacting them and the amount of information provided.
  3. Low power, high interest: You will want to keep these people informed and check in with them regularly.
  4. Low power, low interest: Just keep these people informed periodically, but don’t overdo it.

12.4 Methods and tools to use in the Gathering Information phase of evaluation

Below you will find a number of methods and tools that you can use in this phase of evaluation.

12.4.1 Identifying the appropriate methods to collect the needed information

Keeping in mind the purpose of your evaluation, identify the information collection methods and tools that will best suit this purpose. Summaries of each method and tool can be found in section 9, “Gathering Information”, above.

Table 12: Checklist to identify which methods can be used to collect needed information 

Method of data collection | Which information is most relevant to your purpose? | Which information will be easiest to collect? | Which information will you need some help to gather? | What kind of help will you need? (e.g., locating documents, identifying participants, analysing information, dissemination activities) | Who can give you this help? (e.g., a colleague, independent experts, a partner’s colleagues, project participants, an external evaluator)

Desk research (document analysis/review, e.g., reports, checklists, diaries, journals and logs, testimonials, tests and other assessments)

Observation
Individual interviews
Focus Groups
Case Studies
Skill assessments (e.g., tests, self-assessments)
Survey

Source: own elaboration

12.4.2 Personal potentials and objectives – self assessment tool 

This tool is important for assessing a participant’s potential and objectives, to find the right fit for them in the project and to help them harness their potential. It is a guided enquiry: some participants (especially refugees and immigrants who have experienced forced migration, or people with disabilities) may not be able to understand all the questions and may require guidance from the instructors to provide the required information.

This and the following personality assessments were inspired by tools available on the platforms mfa-jobnet.de and Psychometrics (visit www.mfa-jobnet.de and https://www.psychometrics.com), with the modifications necessary to fit this evaluation module; the original sources contain more tests that can help you assess the employability of a candidate. 

 

Table 13: Self-assessment tool to assess personal potentials and objectives

Education/Skills
In which areas have I been active in my life so far? What have I learned there?
What skills have I gained from this? What skills have I developed in my everyday activities? What general competencies have I gained through my professional and extra-professional experiences?
What professional or other training do I have? 
What diplomas, certificates etc. do I have?
Which of my degrees are recognized in the country?
Which are not?

Where have I worked before? 

a) In the home country

b) In another EU state     

c) In other countries

What other work experience do I have?

What in the training/work has particularly appealed to me, what have I been interested in, what have I been enthusiastic about?
What other qualifications do I have (e.g. language courses, computer training, driver’s license etc.)?
What skills and competences have I acquired in different places and countries? (Not only professionally, also in everyday life)
Where do I use them today?
What do I want to learn?
Which of my skills can I put to good use in the country I live in?
Are there areas in which I have completely different skills than current colleagues and friends?
Is there a profession I would like to take up now?
If not: what do I want to do for a living instead?
What expectations and ideas did I have when I came to the country?
What is different than I expected?

Is that 

a) a disappointment?

b) a challenge for me?

How can the contacts I have be useful to me in implementing my ideas and plans?
Self-assessment
What have I successfully achieved in my life?
What have I always been particularly good at?
What has always been fun for me?
Which of my competences/skills relevant for my future professional life do I want to develop further? What do I want to become even better at?
What do I not like doing at all?
Which of my skills were particularly helpful/important/successful during my stay in this country?
External assessment
What abilities/strengths do others see in me?
What have I been able to inspire / impress others with?
For what have I received a lot of praise / recognition?
Work
Which 5 characteristics should a job have so that it fits me?

How do I concretely imagine my future work? 

Where? With whom? What exactly is my work?

What is my goal?
What can I do to achieve my goal?
What do I need help/support with? What exactly do I need?
Which difficulties/obstacles can appear on my way?

Age:

Gender: 

Source: adapted from Stellenbörse für MFA und Arbeitgeber (n.d.) and Psychometrics Canada Ltd. (n.d.).

 

12.4.3 Self-assessment sheet:  Social skills and personal values

This questionnaire is suitable for pre- and post-measurement of participants’ social competences and attitudes, to assess their progress in the project. It should, however, be noted that self-assessment questionnaires are not the only tools for measuring progress, as they may be subjective; sometimes the change measured with this kind of tool shows a decrease only because the training made the beneficiary aware of his/her weaknesses in a particular area. Self-assessment should therefore be complemented by other forms of external assessment done by project staff (for instance trainers, mentors, psychologists). 

Table 14: Self-assessment sheet to assess social skills and personal values

Name, first name:

Date:

++ + 0
Social skills
Communication skills
I pay close attention to what and how others say something.
Others tell me that I understand them well.
Even in larger groups I can express my opinion in a way that is understandable for everyone.
Team spirit
It is important to me that a team works well: that is why I share important experiences and knowledge with my team colleagues.
But I also like it when I can learn from others.
If it is important for the group, I can put my personal interests last.
I actively participate in group work, e.g. by considering how best to divide the work.
Ability to deal with conflicts
Before I get too upset about something, I prefer to talk about things in a quiet minute and in a calm mood.
I have no difficulty with it when other people have a different opinion than I do.
In conversations I can easily tell whether it is a factual problem or whether two people personally do not get along well with each other.
When conflicts arise, I mediate or work towards a solution that all parties can live with (I don’t just want to win myself).
Ability to take criticism
If someone criticizes my performance or even individual behaviour, I think about whether they could be right.
If I have something to criticize about someone else, I explain it very specifically in a friendly tone.
I understand that other people sometimes make mistakes.
Dealing with people
I like to approach other people.
I usually remain calm and objective even when other people get on my nerves.
If I notice that someone else is getting upset, I can calm them down.
Personal values
Reliability
I always arrive punctually for appointments.
If I cannot keep an appointment, I apologize in time.
I always deliver work orders on time.
I don’t need to be constantly monitored: if I have a task, I think about fulfilling it myself.
Sense of responsibility
Of course, I take responsibility for what I do.
I take care of my health.
I am very careful not to put anyone in danger.
I handle the equipment or materials entrusted to me with great care.

Source: adapted from Stellenbörse für MFA und Arbeitgeber (n.d.) and Psychometrics Canada Ltd. (n.d.).

 

12.4.4 Questionnaire for self – assessment of own competences

This questionnaire is suitable for pre- and post-measurement of a participant’s progress in the project. It should, however, be noted that self-assessment questionnaires are not the only tools for measuring progress, as they may be subjective; sometimes the change measured in this way shows a decrease only because the training made the beneficiary aware of his/her weaknesses in a particular area.

+++ is particularly true; ++ is true; + is less true.

Table 15: Self-assessment sheet to assess own competences

1. Self-Competence +++ ++ +
Independence
I make my own decisions
I form my own opinion and represent it
I take responsibility for my actions
I plan and carry out work without external help
I ask for outside help – if necessary
I can assert myself
Flexibility
I can adapt to new situations
I can do different tasks side by side
I am open to new or unusual ideas
I can easily switch from one task to another
Creativity
I find solutions for problems
I can help myself
I try out new possibilities
I have imaginative ideas
I can achieve a lot with limited means
2. Social Competence
Communication skills
I express myself clearly in spoken and written form
I ask if I do not understand something
I can listen
I do not judge and interpret hastily
Ability to handle conflicts
I can say no
I accept other assessments
I can offer constructive criticism and react appropriately to criticism
I recognize tension and can talk about it
I can deal with my strengths and weaknesses
Ability to work in a team
I accept decisions made
I work out a solution together with others
I can also stand back in a group
I share responsibility for work results
I can take responsibility in the group
3. Methodological competence
Learning and working technique
I know where and how I can obtain information
I can concentrate well
I have perseverance
I allocate my energy well
Work organisation
I create a work plan and control it
I recognize connections in my work
I foresee consequences and can estimate them
I recognize the essence of a thing
4. Professional competence 
Expertise
I know technical terms
I know the rules and norms of my work
I have technical / linguistic knowledge
I have a good general education
Practical knowledge
I implement specialist knowledge
I carry out work properly
I bring in what I have learned
Total

Source: adapted from Stellenbörse für MFA und Arbeitgeber (n.d.) and Psychometrics Canada Ltd. (n.d.).

12.4.5 Tests used to measure skills development

There are a number of tests that have been already developed and can be utilized to help in the evaluation of entrepreneurial skills of Youth Entrepreneurship Support Actions beneficiaries (start-up founders). Measuring the skills of your participants helps you in planning and optimising your project. 

These psychometric tests were inspired by tools available on the platforms mfa-jobnet.de and Psychometrics (visit www.mfa-jobnet.de and https://www.psychometrics.com), with the modifications necessary to fit this evaluation module; the original sources contain more tests that can help you assess the employability of a candidate. 

Below we provide some tests you can use:

  • Inspire test,
  • Awareness test,
  • Soft-skills test,
  • Networking test.

Inspire Test (IT)

Every entrepreneurship project has different requirements for its successful execution. The IT allows a beneficiary to specify the importance of the personality match for the project being rolled out by the organisation and the ideal scores they should receive. These ratings are then used to examine how well a beneficiary’s profile fits the requirements of the whole project activity.

The IT is performed either through personal interviews or through practical exercises in teams.

Benefits of the IT

The IT provides a comprehensive measure of a participant’s personality, showing how they will:

  • Complete their project,
  • Interact with people,
  • Solve problems,
  • Manage change and,
  • Deal with stress.

Sample Interview questions

ENERGY AND DRIVE 

Energy

  • Tell me about a project you previously worked on that required a lot of energy and commitment. What were your responsibilities? What was the end result? 
  • Name some of the most demanding things you have done. How did you manage them? 
  • Everyone runs out of energy at some point, tell me about a time when you had to work on a task that was too demanding. What did you do? What happened in the end? 

Ambition

  • Tell me about a time when you needed to compete hard to be successful. 
  • Tell me about some difficult goals you set for yourself, and how you reached them. 
  • Describe a situation in which you adopted a non-competitive attitude in order to be successful. 
  • Have you ever had a project with little room for advancement? What was that like for you? 

Leadership

  • What experience have you had with leading people? What was that like for you? What was positive about the experience? What would you do differently? How could you have been a more effective leader? 
  • Tell me about a time when you needed to convince people to follow you. What did you do? Were you able to get people onboard? 
  • Tell me about an occasion when you encountered difficulties with your team. What were the difficulties and how did you overcome them? 
  • Name a time when you took on a leadership role without being asked. 
  • Give me an example of a difficult leadership role you took on. 
  • Tell me about a time when you had to follow someone else’s lead. 

IT process:

Tests will create benchmarks for the project. A beneficiary’s performance will be assessed, and a beneficiary report will be made.

  • Create benchmarks – observers will work with beneficiaries to create benchmarks for the project activity. One of the most effective ways to identify these requirements is to gather information from people who know the project well. These individuals are familiar with the project and can speak about the knowledge, skills, and characteristics necessary for someone to succeed in it.
  • Assess participants – use an assessment tool to administer participant assessments.
  • Assess Person – after participants have completed the Inspire module test the observer can generate a report which will indicate how well the participant’s personality traits match with the requirements of the project activity.

The IT report provides, among other things:

  • Overall project activity fit score – candidates can be quickly sorted based on their overall fit with the project benchmarks,
  • Participant profile – helps identify participants’ specific areas of fit or misfit,
  • In-depth narrative description of the participant’s shared working-space behaviours – helps target areas of uncertainty in follow-up interviews,
  • Profile validity – assesses the extent to which the questionnaire was completed honestly rather than in an overly positive or unusual way.

Source: adapted from Stellenbörse für MFA und Arbeitgeber (n.d.) and Psychometrics Canada Ltd. (n.d.).

Awareness test (AT)

Youth Entrepreneurship Support Actions have the goal of developing entrepreneurial skills. The objective of this test is therefore to provide an overview of the different competences and soft skills that are to be developed by the measures being evaluated, and of the skills to be tested.

  1. Self-management – Readiness to accept responsibility, flexibility, resilience, self-starting, appropriate assertiveness, time management, readiness to improve their team’s performance based on feedback and reflective learning.
  2. Team working – Respecting others, cooperating, negotiating, persuading, contributing to discussions, awareness of interdependence with others.
  3. Cultural tolerance / sensitivity – Being open to and respectful of people of other cultures, and not discriminating against them.
  4. Business and customer awareness – Basic understanding of the key drivers for business success and the importance of providing customer satisfaction and building customer loyalty.
  5. Critical thinking – collecting and analysing information objectively, making a reasoned judgment.
  6. Problem solving – Analysing facts and circumstances to determine the cause of a problem and identifying and selecting appropriate solutions.
  7. Communication – The ability to effectively tailor messages for the purpose and audience and use the best tools available to communicate them.
  8. Time management – Ability to plan and organise one’s time among different activities.
  9. Flexibility – Ability to work within a constantly changing environment and willingness to look for win-win solutions.
  10. Resilience – Ability to work under pressure. 
  11. Efficiency – Applying the 80/20 rule and other techniques for yielding higher results in less time. Switching between different chores and progressing effectively day-to-day.
  12. Networking – Growing a network facilitates business opportunities, partnership deals, finding subcontractors or future employees. It expands the horizons of PR and conveying the right message on all fronts.
  13. Branding – Building a consistent personal and business brand tailored to the right audience. 
  14. Sales – Being comfortable doing outreach and creating new business opportunities. Finding the right sales channels that convert better and investing heavily in developing them. Building sales funnels and predictable revenue opportunities for growth.

Helping a beneficiary in understanding their personality type is the first step to personal and entrepreneurial growth. The AT helps beneficiaries to understand their strengths, their preferred working styles, and ultimately helps them see their potential. Used individually to provide self-awareness and clarity of purpose, the AT also helps to create a better understanding and appreciation between their teams – enabling them to work better together.

AT can be performed in either personal interviews or practical exercises in their teams.

Benefits of the AT:

  • Greater understanding of oneself and others.
  • Improved communication skills.
  • Ability to understand and reduce conflict.
  • Knowledge of your personal and work style and its strengths and development areas.

 

Interview questions

SOCIAL CONFIDENCE

If the project requires an individual to be self-assured and at ease with people in all types of social situations, consider some of the following questions: 

  • Give me an example of a time when you had to interact with a group of strangers for a work-related function. What did you do? What was the result? 
  • Tell me about a time when you had to exercise social confidence to accomplish a goal. 

 

 

PERSUASION
If the project requires someone who is comfortable with negotiating, selling, influencing and attempting to persuade people or trying to change the point of view of others, consider some of the following questions: 

  • Give me an example of a time when you persuaded someone to purchase something. What did you do to convince them? 
  • Tell me about a time where you changed someone’s mind. What did you do/say that made them see things your way? 
  • Name a time when you used negotiating to get what you wanted. What was the result? 

 

INITIATIVE

If the project requires someone with a high level of initiative to identify new opportunities and take on challenges, consider some of the following questions: 

  • Give me an example of a time when you completed a project without any support from others. What was the outcome? 
  • Tell me about an opportunity you identified that others missed. 
  • Describe some new challenges that you took on without encouragement from others. 
  • Name some new responsibilities you took on voluntarily. 
  • When you identify a potential opportunity what do you need before you will begin working towards it? 
  • What do you prefer, stable project responsibilities or frequently changing responsibilities, and why?
  • Tell me about some occasions where you have shown initiative. 
  • Give me some examples of when you have shown initiative. 

Source: adapted from Stellenbörse für MFA und Arbeitgeber (n.d.) and Psychometrics Canada Ltd. (n.d.).

 

Soft-skills test (SST)

This test identifies the difficulty of developing the competences listed in the awareness test and consequently proposes a specific knowledge base (lessons) that can support their development.

The SST is used to help participants and their team members get acquainted with each other’s conflict styles, identify potential challenges, and set goals for how they should handle conflict as a group. With established teams, the SST helps team members make sense of the different conflict behaviours that have been occurring within the team, identify the team’s challenges in managing conflict, and find constructive ways to handle those challenges.

SST may be performed either as personal interviews or practical exercises in teams.

The SST describes five different conflict modes and places them on two dimensions:

  • Assertiveness – the degree to which a person tries to satisfy their own needs
  • Cooperativeness – the degree to which a person tries to satisfy other people’s needs.

Benefits of the SST

Why do organisations use the SST?

  • It is easy to complete: the short questionnaire takes only a few minutes and can be administered online before your training or in a paper booklet onsite.
  • Delivers a pragmatic, situational approach to conflict resolution, change management, leadership development, communication, participant retention, and more.
  • Enables your organisation to open productive dialogue about conflict.
  • Can be used as a stand-alone tool by individuals, in a group learning process, or as part of a structured training workshop.

 

Interview questions

DEPENDABILITY

If you are looking for participants (beneficiaries) with a high level of dependability, consider some of the following questions: 

  • Tell me about a project you couldn’t finish on time. What happened? What would you do differently? 
  • Are you comfortable leaving a project unfinished if something else comes up? 
  • Give me an example of a task that you needed to work beyond your normal hours to complete. What was that experience like? 
  • Can you describe a time when you had to shift priorities and leave a task you were working on unfinished? What happened? How did you complete the first task? 
  • Name a time where it was difficult for you to complete your task. What happened, and how did you resolve the difficulties? 
  • Describe a time when you needed to work extra hard to get your tasks done on schedule. 

 

PERSISTENCE
If the project requires someone with a high level of persistence, consider some of the following questions: 

  • Tell me about a difficult task that you recently completed. What made it difficult? How did you manage to work through the difficulties/obstacles? 
  • Describe a time when you had a large number of boring/dull/uninteresting tasks to complete. How did you motivate yourself? 
  • Give me an example of a project that you gave up on because it was no longer worth the resources to complete. 
  • Tell me about a time when you showed a high level of persistence.
  • Tell me about some obstacles you have overcome that took a lot of persistence. 
  • Give me an example of something you gave up on because you did not think it worth the effort. 

If the project mostly involves tasks that can be completed quickly and has few obstacles to overcome, consider some of the following questions: 

  • Give me an example of a project that you gave up on because it was no longer worth the resources to complete. 
  • Describe a time when you had a lot of boring work to complete. How did you motivate yourself? 
  • Have you ever worked on a project that only required you to do easy work that had no challenges? What was that like for you? Was it pleasant or unpleasant? 

 

ATTENTION TO DETAIL

If the project involves tasks that require a lot of detailed information or research, consider some of the following questions: 

  • Describe a project you worked on that involved a lot of detailed work. 
  • What is the most detailed work you have had to complete? 
  • What is worse for you, completing a project late, or completing it on time with imperfections? 
  • What kind of tasks have you had to do in the past that required you to pay close attention to details? 

If the project does not involve examining a lot of detailed work, but requires someone who focuses on global problems, best practices or issues, consider some of the following questions: 

  • Tell me about a time when you ignored the details and focused on the big picture. 
  • Tell me about a time when people on your team focused too much on details and missed the big picture. What did you do to help them broaden their focus? Is there such a thing as spending too much time looking at the minor details? 
  • What experience do you have with determining strategy/looking at the big picture/setting large goals and priorities? 

 

RULE-FOLLOWING
If the project has a lot of operational procedures and rules that need to be strictly followed, consider some of the following questions: 

  • Describe your past experiences of working in a very structured, rule bound environment. 
  • Describe your experiences of working in an environment with no structures or rules on how to work. 
  • Can you tell me about an occasion where you needed to ignore rules or procedures to get your work done successfully? 
  • How do you determine when to ignore procedures/rules? Are there situations where you believe they should be followed all the time? How do you determine when that is? 

If the project has few to no procedures and rules, and requires the individual to determine the best way to complete their tasks, consider some of the following questions: 

  • Tell me about some ineffective rules that were still followed in your previous innovation centre or business incubator. 
  • Can you tell me about an occasion where you needed to ignore rules or procedures to get your tasks done successfully? 
  • How do you determine when procedures/rules can be ignored? Are there situations where you believe they should be followed all the time? How do you determine when that is? 
  • How often do you encounter rules/procedures that you think should no longer be in effect? 
  • How comfortable are you working on tasks when you have not been given any direction/instruction? Do you enjoy the freedom? Do you wish you could get feedback to ensure you are doing your tasks correctly? 

 

PLANNING
If the project involves a lot of short and long-term planning, consider some of the following questions: 

  • Tell me about a task you completed that required a significant amount of planning. 
  • Give me an example of a long-term goal or plan that you established. Did you meet your goals? How effective was your plan? 

Source: adapted from Stellenbörse für MFA und Arbeitgeber (n.d.) and Psychometrics Canada Ltd. (n.d.).

Networking test

The main objective here is to improve the networking activities and knowledge exchange of Youth Entrepreneurship Support Actions, enabling the setting up of successful innovation ecosystems across Europe.

Benefits of the Networking Test (NT)

  • It links your interests and preferences to various work activities, project settings and careers. 
  • The NT gives you results that you can benefit from if you are just starting out in your business.

The NT is conducted either through personal interviews or through practical exercises in teams.

Interview questions

PROBLEM-SOLVING STYLE

INNOVATION

If the project requires someone who is creative and innovative, consider some of the following questions: 

  • Tell me about a problem you solved in an innovative way. 
  • Name an original/creative/new solution you came up with to solve a problem. 
  • When addressing a problem, do you first look at what worked in the past, or do you come up with an entirely new solution? 
  • What are the benefits/disadvantages of using past solutions? 
  • What have been some of your most creative ideas at work? 
  • When you don’t understand something, do you keep asking questions until you do?
  • Do you often question what is considered normal?
  • Do you like to watch people and observe what is happening in the world around you? Do ideas for new products and services come to mind?
  • Do you observe how people behave in situations that may affect your start-up (e.g. where and how they eat, drink, dress or shop)?
  • Are you adventurous and always looking for new experiences?

If the project does not require much problem solving, or the problems addressed only require incremental changes or practical solutions, consider some of the following questions: 

  • When addressing a problem, do you first look at what has worked in the past, or do you come up with an entirely new solution? 
  • What value do you see in sticking with the established ways of doing a task? 
  • What are the benefits/disadvantages of using past solutions? 
  • What have been some of your most practical solutions to problems at work? 
  • Would you describe yourself as innovative or practical? 

 

ANALYTICAL THINKING

If the project requires analysing a large amount of information and a decision-making approach that is logical, cautious, and deliberate, consider some of the following questions: 

  • How much information do you need to feel comfortable making a decision? How do you get that information? 
  • Tell me about a decision you have made through extensive information gathering and discussion with others. How did it work out? Did you need to be as cautious as you were, or could you have made the decision more quickly? 
  • Would your friends describe you as analytical and calculating or intuitive and spontaneous? Why? 
  • What process do you go through before you make a decision? 
  • Tell me about an important decision you needed to make quickly.

If the project requires quick decision making that does not allow for the extensive gathering of information, consider some of the following questions: 

  • How much information do you need to feel comfortable making a decision? How do you get that information? 
  • Tell me about a decision you have made based on your gut feelings. How did it work out? How comfortable are you making decisions that way? 
  • Would your friends describe you as analytical and calculating or intuitive and spontaneous? Why? 
  • When have you had to rely upon your intuition to make a decision? 

 

DEALING WITH PRESSURE AND STRESS 

SELF-CONTROL

If the project requires the individual (beneficiary) to have a high level of self-control, consider some of the following questions: 

  • What do you do when you get frustrated with others? Tell me about a time when you were frustrated with a team member. 
  • Give me an example of when you maintained your composure in a difficult situation. 
  • Give me an example of a time when you were angry with someone at work. What did you do? 
  • Describe a previous work experience in which you frequently dealt with upset people. 
  • What experience have you had dealing with irate customers? 
  • Tell me about a time when you had to deal with an upset customer. What did you do? What were the results? How did you feel afterwards? 

 

STRESS TOLERANCE

If the project regularly results in a high level of stress, consider some of the following questions: 

  • What do you do to alleviate stress? 
  • How do you tolerate stress? 
  • Name a time when you had to do a task under extreme stress. 
  • What types of activities do you find stressful? 
  • Name a time when you had difficulty coping with stressful tasks. What did you do to get yourself through that time? 
  • What type of stress do you find very hard to deal with? 
  • Are there stressful activities that you cannot cope with? What are they? 
  • What do you find to be the most stressful? 
  • What kinds of extreme stress have you had to work under? 
  • What have been some of the most stressful things you have been involved in? 

Source: adapted from Stellenbörse für MFA und Arbeitgeber (n.d.) and Psychometrics Canada Ltd. (n.d.).

 

12.4.6 Business Model Canvas

The Business Model Canvas is a good tool for assessing a start-up at the beginning and at the end of a project, in order to measure business development.

Figure 9: Business model canvas


Source: Strategyzer, n.d.
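
If you record the canvas electronically, it becomes easy to compare the version filled in at the start of the project with the version filled in at the end. The sketch below is a minimal illustration in Python of one possible way to do this; the block names follow the standard canvas, while the function, the variable names and the example entries are invented for this illustration and are not part of the canvas itself.

# A minimal sketch (Python) for comparing a Business Model Canvas
# recorded at the start and at the end of a project. The example
# entries below are invented for illustration.

CANVAS_BLOCKS = [
    "Key Partners", "Key Activities", "Key Resources",
    "Value Propositions", "Customer Relationships", "Channels",
    "Customer Segments", "Cost Structure", "Revenue Streams",
]

def compare_canvases(start, end):
    # For each canvas block, report which entries were added or dropped.
    for block in CANVAS_BLOCKS:
        before, after = set(start.get(block, [])), set(end.get(block, []))
        added, dropped = after - before, before - after
        if added or dropped:
            print(f"{block}: added {sorted(added)}, dropped {sorted(dropped)}")

# Invented example: the start-up sharpened its value proposition
# and gained a new sales channel during the project.
canvas_start = {"Value Propositions": ["cheap fabric dyeing"]}
canvas_end = {"Value Propositions": ["eco-friendly fabric dyeing"],
              "Channels": ["online shop"]}
compare_canvases(canvas_start, canvas_end)

Running the sketch prints, per canvas block, which entries have appeared or disappeared between the two points in time, giving a quick picture of how the business model has developed.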

 

A summary of steps for using the business plan format in the assessment of Entrepreneurship Project Actions:

Development of a business/project idea from the point of view of a start-up

  • Which idea/plan works best for me?
  • What makes it different from others?

Requirements of the founder

  • What are my qualifications, strengths, and weaknesses?
  • Which partners do I have, or do I need?

Overview of the market

  • What is my target group and what do these people need?
  • What do my competitors offer?

 

Good strategies for marketing and distribution

  • What could good advertising look like?
  • What are my distribution area and distribution channels?
  • Do I have a good pricing strategy?

Set-up of the organisation/company

  • How many employees do I need and what should their skills/ qualifications be?
  • What should the legal form of my company be?

Analysis and weighing up of possible opportunities and risks

Financing

  • How much are my potential customers willing to pay?
  • How high are my capital requirements?
  • Do I have a financing and investment plan?
  • Are there alternative funding opportunities (e.g. grants, donations)?
  • How high are my living costs?

Acquisition of further documents

  • Do I need other documents, for example expert opinions, contracts or curricula vitae (CVs)?
  • Are there any legal requirements I must meet?

 

Want to try it on your own? Fill in the Business Model Canvas template below.

Figure 10: Template of a Business model canvas


Source: Strategyzer, n.d.

 

12.5 Tools to use in Analysing Information

Below are some tools you could use in analysing the information that you have gathered above.

 

12.5.1 Assessing the outcomes (‘the what’) of your project

The questions below are provided simply as guidance in assessing the outcomes of your project. 

You may have additional or different questions you want to ask depending on the nature of your project and your reason for evaluating. 

  • In what ways have participants benefited from being in your project? (e.g., re-engaged with learning, gained accreditation, increased or improved industry specific networks, identified a realistic vocational pathway, gained entrepreneurship skills, gained identity documents etc.) 
  • How do you know this? (That is, what information or evidence do you have to show that the participants have benefited? See the counting sketch after this list.) 
  • What other benefits have you delivered (and to whom)? 
  • Have there been any surprising or unanticipated outcomes?
  • How do you know whether it is your project that is making the difference? (Other initiatives may be running in the organisation at the same time as your project, and these may also affect participants’ outcomes or the things you are trying to measure.)
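
If you record such evidence per participant, for instance in a spreadsheet exported as a CSV file, a few lines of code can tally how many participants achieved each outcome. Below is a minimal sketch in Python; the file name participants.csv and the indicator column names are hypothetical and would need to match your own records.

# A minimal sketch (Python, standard library only) for tallying
# outcome indicators per participant. Assumes a hypothetical file
# participants.csv with one row per participant and "yes"/"no"
# columns such as re_engaged, accredited, vocational_pathway.

import csv
from collections import Counter

counts = Counter()
total = 0
with open("participants.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        total += 1
        for indicator, value in row.items():
            if value.strip().lower() == "yes":
                counts[indicator] += 1

for indicator, n in counts.most_common():
    print(f"{indicator}: {n}/{total} participants ({100 * n / total:.0f}%)")

The resulting percentages are the kind of evidence the questions above ask for, and they can be compared across project cohorts or reporting periods.
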
12.5.2 Assessing the effectiveness (‘the how’) of your project

The checklist below is provided as a simple way of assessing the effectiveness of the project (‘the how’) and to identify potential areas for improvement. 

You may have additional or different questions you want to ask depending on the nature of your project and your reason for evaluating. 

Table 16: Checklist for assessment of the effectiveness of a project

Key Question | Y/N | Examples of Follow-Up (to address identified issues)
Are all stakeholders wholly engaged in the project? | No | Arrange a meeting of all stakeholders to review current levels of involvement and see if this needs to change and in what ways. For example, are more resources needed? Would formalising the project in a memorandum of understanding make a difference? If there is a genuine lack of engagement from one partner and this is unlikely to change, should the support of another organisation be sought?
Is there a shared vision and common goals? | | Jointly create a glossary in which the goals are defined and a common interpretation is agreed.
Does each stakeholder have a clearly defined role and responsibilities? | | Create an organigram in which each party is represented with its roles and responsibilities; this should be accessible to all.
Are the expectations of each stakeholder fair and reasonable? | |
Does each stakeholder have a good understanding of the requirements of the other stakeholders? | |
Is there regular communication between stakeholders? | |
Have all parties received fair recognition for their efforts? | |
On balance, have the benefits of the collaboration justified the time and effort that have gone into it? | |

It can help to break down an overarching or key evaluation question into a small number of sub-questions to guide the information gathering and analysis. The table below shows how this was done for one project, “Entrepreneurship training for unemployed graduates”.

 

Table 17: Checklist to assess an Entrepreneurship training for unemployed graduates

Name of project: Entrepreneurship training for unemployed graduates.

Objective 1: To help participants turn their ideas into operational and profitable businesses.
Focus of the evaluation: the impact of the activities or project (the ‘what’).
Evaluation question: How, and to what extent, has the project helped graduates to turn their ideas into operational and profitable businesses?
Questions included in evaluation tools (asked of interviewees/respondents):

» What knowledge and skills have participants gained through this project?

» How have these skills helped participants in their quest to turn ideas into fully fledged businesses?

» Are there other things that could be done to support participants in their quest?

» What additional information is needed to make a decision about the future of the project?

Objective 2: To identify how well the stakeholders are working together and whether improvements could be made.
Focus of the evaluation: the effectiveness of the projects (the ‘how’).
Evaluation question: How effective are our current projects?
Questions included in evaluation tools (asked of interviewees/respondents):

» What are we doing well?

» What are the things that are not working so well?

» How can we improve these projects?

 

12.5.3 Quantitative analysis techniques
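
Figure 11 below gives an overview of common techniques. As a concrete illustration, even a simple before/after comparison of participant scores already counts as quantitative analysis; the sketch below shows this in Python, using the standard library only and invented scores rather than real project data.

# A minimal sketch (Python, standard library only) comparing
# self-assessed skill scores (scale 1-5) before and after a project.
# The scores below are invented for illustration.

from statistics import mean, median

before = [2, 3, 2, 1, 3, 2, 4, 2]   # hypothetical baseline scores
after = [4, 4, 3, 3, 5, 3, 4, 4]    # hypothetical end-of-project scores

print(f"mean before: {mean(before):.2f}, mean after: {mean(after):.2f}")
print(f"median before: {median(before)}, median after: {median(after)}")
changes = [a - b for a, b in zip(after, before)]
print(f"average change per participant: {mean(changes):.2f}")

The techniques in the figure below build on exactly this kind of comparison, adding ways to summarise, break down and test the differences you observe.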

Figure 11: Overview of quantitative analysis techniques