8. Preparing your evaluation
Figure 2: Evaluation of Youth Entrepreneurship Support Actions Life Cycle – Phase 1
Before you start preparing an evaluation, you should think about the entrepreneurship support action you want to evaluate and define:
- What does this project try to achieve and why:
- Is it meant to develop certain entrepreneurial skills? Which skills, and how many people are expected to acquire them?
- Is it focused on developing new businesses? How many businesses are to be established and supported? What level of development are they meant to reach?
- What are the risks and assumptions:
- What are the skills gaps of the target group?
- What is their motivation and attitude to entrepreneurship?
- What resources are available? What obstacles to business development need to be overcome?
- And how can you tell whether the project is a success once it’s implemented:
- What are the indicators of achievement of the project goals?
- How will you know these results are due to the evaluated project (e.g., how do you know whether an increase in the number of new businesses is a result of the project)?
Planning a good youth entrepreneurship support action can be a tricky task. Designing a good project requires understanding the situation in the region and defining a real problem that can be solved by the evaluated intervention. Other vital requirements are identifying the target groups and their needs, specifying objectives that are attainable within the given timeframe and available resources, and designing the most appropriate activities. The activities must be designed to produce outputs and outcomes that lead to the achievement of the defined objectives and the high-level goal.
Furthermore, the project implementers need to see the bigger picture during the project implementation, keeping in mind key questions: what are they trying to achieve, why, and how will they prove it?
- Do they plan to develop certain entrepreneurial skills?
- To ensure that a certain number of business plans are developed?
- To increase the number of new businesses by a certain percentage?
- What are the risks and assumptions (e.g., what are the skills gaps of the target group)?
- What is their motivation and attitude to entrepreneurship?
- What resources are available to achieve the planned outcomes, and how can you tell whether the project is a success once it is implemented?
- And finally, how will you know that the observed effects were caused by the evaluated project?
Understanding the logic of the project is key to designing an evaluation, and it tells you what data are produced within the project that can be used in the evaluation. Often a Logical Framework Matrix, or Logframe, is prepared as part of seeking financial support for the project. Otherwise, you can reconstruct the project logic yourself by asking key questions about seven key areas of the project (a hypothetical reconstruction is sketched after this list):
- Purpose – why has this project been initiated? What problem is to be tackled? What change is expected?
- Outcomes – what results are expected? (e.g., increase in knowledge/entrepreneurial skills among participants, establishment of own firms by the project participants)
- If the results have been achieved, then certain effects may result in the target group. For example, an increase in knowledge can lead to a change in the participants’ behaviour.
- Outputs – what are the deliverables?
- Direct results of the activities are described. Following the “if-then” logic, this means that if an activity is carried out, then certain results are expected. Examples are business plans developed within the project; skills certificates issued to the project participants.
- Activities – what actions have been planned to deliver the outputs? (e.g., workshops, trainings, mentoring)
- Indicators of achievement – how will we know if the project has been successful?
- Means of verification – how can the reported results be verified?
- Risks and assumptions – what assumptions underlie the structure of the project, and what are the risks to achieving its objectives?
The template of the Logical Framework Matrix can be found in section 12.3.1.
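To make such a reconstruction more concrete, below is a minimal sketch of how the seven areas could be captured for a hypothetical training project; all names, values and figures are invented for illustration and are not taken from any real Logframe.

```python
# Minimal, hypothetical reconstruction of a project's logic following the
# seven key areas above. All entries are invented for illustration and would
# be replaced with the details of your own project.
project_logic = {
    "purpose": "Increase youth entrepreneurship in the region",
    "outcomes": ["Improved entrepreneurial skills of participants",
                 "New businesses established by participants"],
    "outputs": ["20 business plans developed", "100 skills certificates issued"],
    "activities": ["Workshops", "Trainings", "Mentoring"],
    "indicators": ["Share of participants passing the skills post-test",
                   "Number of start-ups registered within 12 months of project end"],
    "means_of_verification": ["Pre-/post-test results", "Business registry records"],
    "risks_and_assumptions": ["Participants stay motivated for the whole programme",
                              "Local business regulations remain stable"],
}

# The "if-then" chain can be read directly from this structure:
# if the activities are carried out, the outputs are expected;
# if the outputs are delivered, the outcomes and the purpose should follow.
print(" -> ".join(["activities", "outputs", "outcomes", "purpose"]))
```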
8.1 Concept of evaluation
As with developing a project, a good evaluation requires answering a set of key questions that will help you define the concept of the evaluation.
Answering the following five key questions is a good way to start:
WHY are you going to do the evaluation? It is necessary to define the purpose of the evaluation, as the other parts of the concept depend on it.
WHAT are you going to evaluate and what resources are needed? The subject and scope of the evaluation have to be clear, otherwise you might end up doing an evaluation for which you do not have the resources, expertise or time. The evaluation questions and evaluation criteria need to be formulated at this stage.
WHO will conduct the evaluation? Is it going to be a self-evaluation (the staff implementing the project conduct the evaluation), an internal evaluation (conducted by your own staff who are not involved in the project implementation) or an external evaluation (conducted by an evaluator from outside the organisation)? Each type has its pros and cons.
WHEN are you going to do the evaluation? Before the launch of the project (ex-ante evaluation)? During the project implementation (mid-term or ongoing evaluation)? Or after the end of the project (ex-post evaluation)?
HOW will you conduct the evaluation? What sources of information will you use? What methods and tools will you use to gather the necessary data?
The following sections will guide you through these questions.
8.2 Be clear about the purpose, subject, and scope of your evaluation
What is it that you want to find out? What do you want to learn from the evaluation results? How do you plan to use the results?
EXAMPLE
Defining the purpose of evaluation
For more than 25 years, Junior Achievement Slovakia (JA Slovakia) has been helping teachers develop entrepreneurship, economic thinking, and financial literacy of students in Slovakia.
The mission of JA Slovakia is to help teachers develop entrepreneurship, economic thinking, and financial literacy among students of primary and secondary schools. This is done primarily by means of experiential learning, in which experienced professionals are involved. JA Slovakia is a member of a worldwide network of 115 JA organizations and of a network of 41 JA Europe organizations. For 100 years, this network has been delivering education and skills development in job readiness, financial literacy, and entrepreneurship around the world.
The YEEAs run by the organization
In the programs ‘Applied Economics’ and ‘Entrepreneurship in Tourism’, students have the opportunity to run their first real business in a student company. Skills for employability and creating one’s own idea are developed in the ‘Skills for Success’ program. The development of ethical aspects of business and the moral values of the individual is the subject of the ‘Ethics in Business’ program. Pupils’ financial literacy is increased through the ‘More than money’ and ‘Me and money’ programs, which are created in accordance with the National Financial Literacy Standard. The youngest ones can prepare for their future profession through the ‘Fundamentals of Business’ program.
The scope of evaluation in JA
JA Slovakia has long used the available evaluation options, taking into account its time and financial constraints. The main purpose of the evaluation is to assess the growth of the participants’ knowledge by testing their skills at the beginning of the education/school year and at the end. The aim is to compare the results and determine the progress over a period of about 10 months, to see if the programme is addressing the needs of the target group or if any improvements are needed.
Source: Bednárová, 2021; Junior Achievement Slovensko, n.o., n.d.
The subject and scope of the evaluation needs to be defined clearly. Are you going to evaluate a specific entrepreneurship support project? Or only a part of it? What exactly do you want to learn?
Do the project goals and activities match the goals and priorities of the organisation?
EXAMPLE
The evaluation will focus on the programme ‘More than money’.
Financial literacy test: Pupils involved in the ‘More than money’ program take a ‘central entrance’ test at the beginning of the school year. The aim of testing is to determine their initial level of knowledge in the field of financial literacy.
At the end of the school year, students take a central exit test. Its aim is to verify the level of knowledge achieved by students after completing the program.
Source: Bednárová, 2021; Junior Achievement Slovensko, n.o., n.d.
8.3 Define the evaluation criteria and questions
The next step is to decide on the evaluation criteria, which are linked to the purpose of the evaluation. The criteria provide a perspective through which you can look at your project and formulate the evaluation questions.
The criteria describe the desired attributes or aspects of the project that you want to verify or assess: e.g., an intervention should be relevant to the beneficiaries’ needs, coherent with other interventions, effective in achieving planned objectives and outcomes, efficient in delivering results, and should have positive impacts that last (OECD, n.d.).
You will also need to identify some key questions (evaluation questions) that you want the evaluation to answer. For example, you may want to ask how participants have benefited from being in the project under evaluation, whether it achieved what it was expected to achieve, or whether all parties involved in the project participated as planned.
Criterion: RELEVANCE
Evaluation question: IS THE INTERVENTION DOING THE RIGHT THINGS?
The extent to which the intervention objectives and design respond to beneficiaries’ needs
Criterion: EFFECTIVENESS
Evaluation question: HAS THE INTERVENTION ACHIEVED ITS OBJECTIVES?
The extent to which the intervention achieved, or is expected to achieve, its outputs, outcomes and objectives.
Criterion: EFFICIENCY
Evaluation question: HOW WELL ARE RESOURCES USED?
The extent to which the intervention delivers, or is likely to deliver, planned outputs and outcomes in an economic and timely way.
Criterion: UTILITY
Evaluation question: HOW USEFUL ARE PROJECT OUTCOMES FOR ITS RECIPIENTS?
The extent to which project outputs and outcomes were useful for their recipients.
Criterion: IMPACT
Evaluation question: WHAT DIFFERENCE DOES THE INTERVENTION MAKE?
The extent to which the intervention has generated significant, positive or negative, intended, or unintended, higher-level effects.
Criterion: SUSTAINABILITY
Evaluation question: WILL THE OUTCOMES OF THE INTERVENTION (ITS BENEFITS) LAST?
The extent to which the achieved outcomes will last over time, after project completion and the end of financing.
Criterion: COHERENCE
Evaluation question: HOW WELL DOES THE INTERVENTION FIT?
Synergies and interlinkages between the evaluated project and other projects implemented by the same organization, or a wider programme within which the given project is implemented.
All your evaluation questions need to be related to the scope and objective(s) of the evaluation. Keep it simple and achievable. Below are some examples of possible evaluation questions and related evaluation criteria proposed for an existing project.
EXAMPLE 1.
Project outcome: development of skills for success – basic skills for employability, a proactive approach to entrepreneurship, independent work, and the ability to solve problems and to identify, design and develop one’s own idea.
Evaluation questions: To what extent have the planned outcomes been achieved (EFFECTIVENESS)? Do the project goals, activities and outcomes match the objectives and priorities of the organization (COHERENCE)? Are the project outcomes useful for its recipients (UTILITY)? Are the project outcomes sustainable (SUSTAINABILITY)? How efficient was the use of the project resources (EFFICIENCY)?
EXAMPLE 2.
Productive post-project pathways:
Objective: To improve young people’s transition from university to their first entrepreneurial activity.
Evaluation question: How, and to what extent, is the project helping young people make a successful transition to entrepreneurship or to work (IMPACT)?
EXAMPLE 3.
Tangible results for the lower qualified participants:
Objective: To equip participants with lower academic qualifications with the necessary skills to start enterprises in their communities by the end of the project.
Evaluation question: How is the project able to harness the entrepreneurship skills of these young people to help turn their ideas into profitable enterprises (UTILITY)?
Source: own elaboration
Table 4 below provides further examples of evaluation questions and the related criteria.
Table 4: Evaluation criteria and questions
Question | Evaluation criteria
Was the project budget sufficient to implement the plan as intended? | Efficiency
To what extent have the planned objectives and results been achieved? | Effectiveness
How are project resources used? | Efficiency
What went wrong and why? | Effectiveness, Impact
Do the project goals and activities match the goals and priorities of the organization? | Coherence
Are the results obtained permanent? | Sustainability
What helped the implementation of the project? | Efficiency
What slowed down the implementation of the project? | Efficiency
How does the community perceive the project? | Impact
To what extent were the project results useful for their recipients? | Utility
To what extent was the project relevant to the needs of its recipients? | Relevance
Source: own elaboration
Having defined the questions your evaluation should answer, you ought to proceed with setting the evaluation indicators that are key for measuring project effectiveness.
8.4 Project indicators
Project indicators measure effects against the project goal and objectives. In this context, an indicator is used as a benchmark for measuring intended project effects. Indicators can be quantitative (e.g., a number, an index, a ratio or a percentage) or qualitative (depicting the status of something in more qualitative terms, e.g., whether the local start-up ecosystem is more developed after the project than it was before the project began – according to the opinions of key informants or judging by the quality of local regulations concerning business activities). Indicators can show whether your project has produced the expected outcomes. Why is defining indicators important in the evaluation process (Selecting project indicators, 2013)?
- At the initial phase of a project, indicators are important for defining how the success of the intervention will be measured and what level or change of a given indicator should be considered satisfactory for meeting the respective objective.
- During project implementation, indicators help assess project progress and highlight areas for possible improvement.
- At the final phase, indicators provide the basis for the overall assessment of the project.
There are three types of project indicators that are widely acknowledged and can be used when conducting an evaluation based on such criteria as effectiveness, efficiency or impact:
- Process indicators: measure project processes or activities, for example ‘the number of training activities organised in period XY’.
- Outcome indicators: measure project outcomes, i.e., the results of a project – for example, the improvement in the level of entrepreneurship skills among participants (a simple way of computing such an indicator is sketched after this list).
- Impact indicators: measure the long-term impacts of a project, or simply the project impact, e.g., ‘the number of new start-ups established by youth entrepreneurs’.
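To illustrate how a simple quantitative outcome indicator can be computed, here is a minimal sketch in Python; the pre-/post-test scores and the 70-point target threshold are invented for illustration only and would be replaced by your own data and targets.

```python
# Hypothetical pre- and post-test scores (0-100) for the same participants;
# the data and the 70-point target threshold are invented for illustration.
pre_scores  = {"P01": 45, "P02": 60, "P03": 52, "P04": 70, "P05": 38}
post_scores = {"P01": 68, "P02": 75, "P03": 71, "P04": 82, "P05": 55}
TARGET = 70  # post-test score treated as "satisfactory" for this indicator

# Average score gain per participant (a simple outcome indicator)
gains = [post_scores[p] - pre_scores[p] for p in pre_scores]
average_gain = sum(gains) / len(gains)

# Share of participants at or above the target after the project
share_at_target = sum(1 for s in post_scores.values() if s >= TARGET) / len(post_scores)

print(f"Average score gain: {average_gain:.1f} points")
print(f"Participants at or above {TARGET} points after the project: {share_at_target:.0%}")
```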
Other criteria (relevance, utility, coherence) and the evaluation questions related to them can also have their own indicators, but any appropriate indicator must have the particular characteristics (Bureau of Educational and Cultural Affairs, n.d.) listed in Table 5.
Table 5: Characteristics of indicators
Characteristic | Description
Specific | Probably the most important characteristic of indicators is that they should be precise and well defined; in other words, indicators must not be ambiguous. Otherwise, different people will interpret the same indicator differently and obtain different results.
Measurable | An indicator must be measurable. If it cannot be measured, it should not be used as an indicator.
Achievable / Attainable | The indicator is achievable if the performance target accurately specifies the amount or level of what is to be measured in order to meet the result/outcome. The indicator should be achievable as a result of the programme, and the target attached to it should be realistic.
Relevant | Validity here means that the indicator actually measures what it is intended to measure. For example, if you intend to measure the impact of a project on the development of specific entrepreneurship skills, it must measure exactly that and nothing else.
Time-bound | The system [the monitoring and evaluation system and related indicators] allows progress to be tracked in a cost-effective manner, at the desired frequency, for a set period.
Source: adapted from Selecting project indicators, 2013; Bureau of Educational and Cultural Affairs, n.d.
8.5 Identify the type of information you need
Measuring success may require the collection of information before, during and after your project. You have to identify the types of information you will need early on and ensure that they are ready and available when you need them.
There are two broad categories of information to consider:
- Quantitative: this is information concerned with counting and measuring things, such as attendance figures, numbers of sessions, or test scores.
- Qualitative: this information is concerned with people’s feelings, thoughts, perceptions, attitudes, behaviour change and beliefs, and may include things like improved participant attitudes to specific project sessions identified through observation, interviews, and feedback forms.
Below are some ideas for how you might capture, through questions, the different categories of information.
Sample questions for evaluating a project
Knowing the main purpose of the evaluation (see the previous section), you can think about the specific questions to be used in the tools that collect qualitative and quantitative information (e.g., tests, questionnaires, etc.)
Questions you can use in tools gathering quantitative data
- How many trainings focusing on the development of entrepreneurial skills were organized?
- How many staff from the organisation and business community were involved in the mentoring program?
- What was the increase in the number of participants that occurred after involving the local business community in the mentoring project?
- What was the increase in the number of participants choosing project sessions because of the mentoring program?
- How many participants have gained positive outcomes from participating in the activities?
- How many participants were there at the beginning of the program and how many participants completed the program?
Questions you can use in tools gathering qualitative data
- What were your feelings about having a mentor before you joined the project?
- How do you feel about the experience now that you have been mentored for a year?
- What (if any) parts of the experience did you find most enjoyable?
- What (if any) parts of the experience did you find challenging?
- What advice would you give other participants coming into the mentoring program next year?
- Why did some participants drop out of the program?
- Please think about the period after you started participation in the project. Have you observed any significant change in the way you live your life in this period?
- (If yes) Please mention the areas in which you observed these changes. What was the main factor that caused each of these changes?
- Please think about the period after you started participation in the project. Have you observed any significant change in the way your start-up was functioning in this period?
- (If yes) Please mention the areas in which you observed these changes. What was the main factor that caused these changes?
- (If the project was mentioned as the main factor) Which specific workshops moved your start-up forward the most?
- Is there anything in the project you think should be done another way (concerning the organization, lecturers, times, etc.)? What would you like to be different?
- What should be different (if anything) so that you can make more use of contact with other participants, to benefit from being part of the community?
8.6 Be clear about your stakeholders
In any youth entrepreneurship support action, there are multiple stakeholders. They can range from representatives of project partners, such as business incubators, accelerators, universities, local or regional administrative authorities, employers’ organisations and donors, to mentors, trainers, coaches, project staff and participants.
You will need to think about the audience for your evaluation, who your stakeholders are, if and how some of them can be involved in the evaluation process, what information they can provide you with or what evaluation criteria and questions would be important for them.
If the evaluation conclusions are to be implemented, then it is vital to identify the stakeholders’ evaluation needs in the preparation phase. Otherwise, you might not be able to satisfy them, as the results of the evaluation will not provide the information the stakeholders expect.
If possible, integrate stakeholders into the evaluation. This allows a comprehensive insight and a change of perspective. Involving the target group (beneficiaries) is especially important, and relatively easy, because you are going to be in touch with them anyway. Participative evaluation is a simple way of integrating stakeholders into the evaluation.
8.7 Identify potential sources of help
Before gathering your information, think about the kind of help you may need and when you may need it.
Potential sources of help
- An independent person, such as someone from another organisation carrying out similar projects, a stakeholder, or a business, could help during the preparation and planning part of your evaluation.
- External experts could assist with the evaluation concept, design of surveys and data analysis, as well as methodological supervision of your research tools (interview scenarios and questionnaires).
- Some information gathering might best be done independently by a third party (e.g., collecting and analysing some data by an external expert, statistics collected by public authorities, etc.).
Depending on the nature of the project, its stakeholders could participate in designing the evaluation concept, support data collection, and help interpret the results and formulate recommendations, bringing their experience and expertise to the project.
8.8 Identify the sources of the information you want to gather
To get a good perspective about the participants in the project, you could think about using internal information such as:
- Attendance records
- Retention rates
- Post-project tracking data (e.g., data on the development of businesses launched by the participants, data on project beneficiaries’ employment status, participation in mentorships, etc.)
- Participant portfolios documenting the businesses and ideas developed during the project (e.g., technology, social enterprise, business, novel ideas)
- Participant behaviour records (e.g., time-out, commitment, retention)
- Opinions of participants, parents, professional assistants or instructors on particular aspects of the project, including their needs and satisfaction
- Participant achievement data (e.g., pre-tests and post-test results).
From the perspective of a youth entrepreneurship support action implementer, you could think about:
- Interviewing corporate volunteers
- Records of financial and in-kind support
- Level of media coverage or reach of the communication and dissemination efforts
- Asking participants about their satisfaction (e.g., using a survey and interviews)
- Numbers of participants directly and indirectly impacted by the activities or projects
- Sales figures or other evidence of marketing success
- Interviewing the project staff.
To ensure objectivity of the evaluation results, you should also consider information that could be available from external sources. These can include statistics, surveys, or analysis developed by:
- labour offices that might keep statistics of the number of graduates registered as unemployed persons in the monitored period;
- municipalities that can monitor the entrepreneurship activity in the respective region;
- secondary, tertiary and other educational institutions, which may provide statistics collected on their graduates / alumni clubs;
- government level / ministry of labour, social affairs and similar, that may systematically approach the issue of NEET employment and entrepreneurship;
- non-governmental organisations that deal with youth (e.g., NGOs collaborating with universities, such as AIESEC, IASTE, ELSA),
- or networking spaces, such as community centres, leisure centres, co-working spaces, etc.
Make the most of existing information:
The history of the project could be traced through such documents as:
- Planning documents, especially Project Logic or Logframe, grant application documents etc.
- Communications – emails, records of phone conversations between partners
- Original timelines and budgets
- Business or strategic plans
- Minutes of meetings
- Evidence of community consultation
- Memos, and
- Financial records.
Other information could be gathered by interviewing people who have been involved since the early days of the project.
You could find out what people remember about the beginning of the project:
- their initial expectations and motivations to join the project, changes in expectations in the course of the project and to what extent the expectations were met
- early roles and responsibilities
- expected and actual challenges and
- proposed ways of addressing these.
In this way you can build up a picture of how the project has evolved and if it is still serving its originally intended purpose.
There are many different methods of gathering data from different sources of information, each with its own advantages and disadvantages, such as desk research, interviews, case studies, observations, and surveys. See section 9, ‘Gathering Information’, for more ideas on how to collect information from a variety of sources.
8.9 Design of impact evaluation
Impact evaluation is the type of evaluation that focuses on the factors which caused the observed change in the target group of the evaluated project. Using a combination of the following strategies can support the conclusions drawn (Peersman, 2015):
- estimating what would have happened in the absence of the evaluated project, compared to the observed situation,
- checking the consistency of evidence for the causal relationships described in the project logic framework,
- ruling out alternative explanations, through a logical, evidence-based process.
There are three designs that allow for implementing these strategies. Experimental and quasi-experimental designs are based on the principle of comparing the situation before and after an intervention in two groups: a treatment group consisting of participants benefiting directly from the evaluated project, and another group consisting of individuals with similar characteristics who were not supported by this intervention/project.
- Experimental designs – in which ‘the other group’ is called a control group and assignment to this group, as well as to the treatment group, is based on a random mechanism. Due to these features, this design is often referred to as a randomized controlled trial (RCT).
The main precondition for an RCT is that the number of individuals interested in your project is greater than the number of participants you can provide support to. The treatment group and the control group should be similar in terms of features such as age, education level, employment status, etc. Randomized selection can be conducted in various ways, e.g., computer-generated assignments or a lottery (a minimal assignment sketch is shown after this list). The main principle is that all individuals have an equal chance of being selected for either group.
- Quasi-experimental designs – in which ‘the other group’ is called a comparison group and is constructed using various techniques to secure optimal similarity or a controlled difference to the treatment group. Selection into both groups is based on a non-random mechanism (e.g., the compared groups contain only people who were close to the project admission threshold, selected from among the project recipients and from candidates who were not admitted to the project).
- Non-experimental designs – which look systematically at whether the evidence is consistent with what would be expected if the intervention were producing the planned impacts (e.g., the sequence and timing of project activities and effects go as assumed by the project’s logic), and also whether non-project factors could provide an alternative explanation for the observed effects.
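As an illustration of computer-generated random assignment and a simple before-and-after comparison between the two groups, here is a minimal sketch in Python; the applicant IDs, the 50/50 split and the score changes are invented for illustration, and a real RCT would also require sample-size calculations and statistical significance testing.

```python
import random

# Hypothetical applicant pool: more people interested than places available.
applicants = [f"A{i:02d}" for i in range(1, 21)]  # 20 applicants, 10 places

random.seed(42)             # fixed seed so the assignment can be documented and reproduced
random.shuffle(applicants)  # every applicant gets an equal chance of selection

treatment_group = applicants[:10]  # receive the project support
control_group = applicants[10:]    # do not receive the support

# After the project, compare the average change in an outcome measure
# (e.g., an entrepreneurial-skills test score) between the two groups.
# The score changes below are invented for illustration.
score_change = {
    "treatment": [12, 15, 9, 20, 11, 14, 8, 17, 13, 10],
    "control": [3, 5, 2, 6, 4, 1, 5, 3, 2, 4],
}

avg_treatment = sum(score_change["treatment"]) / len(score_change["treatment"])
avg_control = sum(score_change["control"]) / len(score_change["control"])

# The difference between the two group averages is a simple estimate of the
# project's effect; a full analysis would also test statistical significance.
print(f"Treatment group: {len(treatment_group)}, control group: {len(control_group)}")
print(f"Estimated effect: {avg_treatment - avg_control:.1f} points")
```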