ICT in Education Toolkit Version 2.0a
September 2006
Tool 6.1: Evaluation of ICT Interventions
OVERVIEW

1. Classes of Evaluation
     Class 1: Degree of Implementation
     Class 2: Degree of Proper Use
     Class 3: Degree of User Satisfaction
     Class 4: Degree of Effectiveness
     Class 5: Degree of Subsequent Application
     Class 6: Degree of National Effect
2. Designs of Evaluation
3. Modes of Measurement of Evaluation
4. Management and Oversight of Evaluation
Reference Information

4. Management and Oversight of Evaluation
 


While policymakers and planners do not need to know the details of how to conduct ICT-Intervention evaluations, they may need to know the following:

  • How to identify a competent coordinator of evaluation,
  • How much time is needed to conduct the various classes of evaluation,
  • How to estimate the costs of various evaluations,
  • What key ethical considerations are involved in evaluations,
  • What oversight they might usefully exercise over the evaluations.

Selecting an Evaluation Coordinator

The coordinator of any substantial evaluation requires both strong technical skills in evaluation and strong administrative skills. The technical skills usually presuppose a master's degree, if not a doctorate, in a social science such as psychology, sociology, public policy, education research, or program evaluation. (Economists typically have advanced training in data analysis techniques but little in data collection procedures.) The technical skills developed during graduate training should have been further developed through progressively more responsible experience working on evaluations. The administrative skills are seldom taught in the graduate programs cited above, but may be acquired from training in business administration or public administration, or from practical experience. The skill level needed by the evaluation coordinator becomes progressively higher with the ascending classes of evaluation described above. The following characteristics should be considered when appointing or selecting an evaluation coordinator (a sketch of one way to weigh them follows the list):

  • Candidate's undergraduate and graduate degrees and grades.
  • Level of responsibility the candidate held in previous evaluation work, and the quality of his or her performance.
  • Other evaluation experts' judgments of the quality of prior evaluations directed or worked on by the candidate.
  • Candidate's track record of completing evaluation work on time and within budget.
  • Probability that the candidate will remain with the evaluation(s) from the planning phase through to completion.
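
The toolkit does not prescribe a scoring scheme for these characteristics. The following Python sketch shows one illustrative way to compare candidates against them; the criterion weights and the 1-5 ratings are invented assumptions for the example, not recommended values.

# One illustrative way to compare coordinator candidates against the
# characteristics listed above. The weights and 1-5 ratings are invented
# assumptions, not toolkit recommendations.

CRITERIA_WEIGHTS = {
    "degrees_and_grades": 0.15,
    "prior_responsibility_and_performance": 0.30,
    "peer_judgments_of_prior_evaluations": 0.25,
    "on_time_on_budget_record": 0.15,
    "likelihood_of_staying_to_completion": 0.15,
}

def weighted_score(ratings):
    """Combine 1-5 ratings on each criterion into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

candidate_a = {
    "degrees_and_grades": 4,
    "prior_responsibility_and_performance": 5,
    "peer_judgments_of_prior_evaluations": 4,
    "on_time_on_budget_record": 3,
    "likelihood_of_staying_to_completion": 5,
}
print(round(weighted_score(candidate_a), 2))  # 4.3

The heavier weights here reflect the emphasis the text places on prior evaluation responsibility and on experts' judgments of prior work, but the weighting is ultimately a local policy choice.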

Time Needed for the Evaluation

  • Class 1, 2, and 3 evaluations usually can be completed in 1-6 months, provided that they do not require development of new psychometric affective instruments, that a small evaluation team will not have to travel to many distant communities and schools, and that the needed data analysis is fairly straightforward.
  • Class 4 evaluations usually require several months of advance planning, then administration of "pre-measures" just before participants begin the ICT-Intervention and "post-measures" as they complete it, with simultaneous administration in the control group. After that, the evaluators will usually need several months to check the data, analyze it, prepare a report draft, and revise it following review and feedback by others. Thus, if the intervention runs for just one school year, a Class 4 evaluation will take about two years to complete (the sketch after this list illustrates the arithmetic). The time will be longer if sophisticated data collection instruments have to be developed, a small evaluation team has to travel widely to collect the data, or the data analysis is complex. Sometimes an evaluation can be hastened by contracting out sophisticated instrument development and by hiring temporary staff to assist with extensive data collection. If the intervention extends for two or more school years, the evaluation team will have some down-time between administration of the pre-measures and administration of the post-measures, unless the evaluation plan calls for interim measures at the end of each year or for following several cohorts of learners who begin the intervention in successive years.
  • Class 5 evaluations are usually follow-ups to Class 4 evaluations, initiated several years after the Class 4 evaluation is completed. In that case, they may require only about one year to complete. The time will be longer if sophisticated data collection instruments have to be developed, the participants in the treatment and control groups have dispersed widely, a small evaluation team has to travel extensively to collect the data, or the data analysis is complex.
  • Class 6 evaluations may take as little as one year or as long as several years.
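
The Class 4 timeline above is simple arithmetic: planning time, plus the intervention period bracketed by the pre- and post-measures, plus data checking, analysis, and reporting time. A minimal Python sketch of that arithmetic follows; the month figures are illustrative assumptions drawn from the ranges in the text ("several months" each for planning and for reporting), not prescribed values.

def class4_duration_months(intervention_months,
                           planning_months=6,         # "several months of advance planning"
                           reporting_months=6,        # checking, analysis, drafting, revision
                           instrument_dev_months=0):  # extra time if new instruments are needed
    """Rough elapsed time, in months, of a Class 4 evaluation.

    Pre-measures are administered just before the intervention begins and
    post-measures as it ends, so the intervention period sits inside the
    timeline rather than adding separate measurement phases.
    """
    return planning_months + instrument_dev_months + intervention_months + reporting_months

# A one-school-year (roughly ten-month) intervention comes to about two
# years, matching the estimate in the text; developing new instruments
# stretches it further.
print(class4_duration_months(10))                           # 22
print(class4_duration_months(10, instrument_dev_months=6))  # 28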

Budgeting the Evaluation

The following seven factors have the greatest effect on the cost of an evaluation.

  1. Class of the evaluation: As briefly indicated above, Class 4, 5, and 6 evaluations are more complex to conduct than the lower class evaluations. Classes 4 and 5 generally use control groups. Class 6 requires intensive case studies, trying to control for the effect of the many other factors that could be boosting or suppressing the development goals during the years over which the ICT-Intervention was expected to be contributing to those goals. These complexities add to the cost of the evaluation.
  2. Number of evaluation questions: Under each class of evaluation described above are listed several potential evaluation questions. In some cases all the questions may be important to the ICT-Intervention stakeholders, but in other cases just a few might be targeted. The cost of the evaluation increases with the number of evaluation questions.
  3. Number, length, and sophistication of instruments developed: Some evaluations may require only administration of currently used national examinations, whereas others may require newly developed achievement tests and psychometric affective instruments. Such differences dramatically affect the costs of an evaluation. Tests that measure academic or occupational skills, particularly higher-order skills such as analysis, evaluation, and design, tend to be the most complex and time-consuming to develop. Similarly, performance assessments of these skills are complex to develop. Psychometric affective measures usually require two or more rounds of development, field-testing, and data analysis.
  4. Expertise needed to administer instruments: Many instruments are simple to administer and can be mailed to schools or community centers, where the teachers or facilitators administer them. Others are more complex and require specially trained personnel. This is the case for semi-structured interview guides, focus group scripts, most observational coding forms, and performance assessments. To administer these types of instruments, staff or temporary employees have to be trained, have to travel to the intervention sites (though interviews might be conducted by phone), and have to receive some supervision and monitoring of their work to assure quality control. In addition, in high-stakes evaluations, it may be desirable to send staff or temporary employees into the field even when easily administered instruments are used, in order to prevent cheating and fraud.
  5. Number and distance of the data collection sites: Even when mailing out instruments to teachers or facilitators who will administer them, the costs will rise with the number of sites and their distance from the evaluation headquarters. If specially trained staff or temporary employees have to administer the instruments, the costs will rise much more rapidly because of travel expenses.
  6. Number of people from whom data is collected: For a given set of data collection sites, the costs will increase in proportion to the number of people from whom data is collected. When instruments can be administered by the teachers or facilitators, the incremental costs will mostly result from printing and mailing additional copies of the instrument. When the instruments have to be administered by a specially trained person, either individually (as with interviews or performance assessments) or in small groups (as with focus groups), the incremental costs will be high, including a pro-rata share of the person's salary and benefits as well as hotel and meal expenses (the sketch after this list illustrates how these costs compound).
  7. Complexity of the data analysis: The complexity of the data analysis is partly a function of the Class of the evaluation, the number of evaluation questions addressed, the number of instruments administered, and the number of items on each. But it is also a function of whether the contexts of the intervention implementation are to be assessed, the number of subgroups of learners for whom the results are to be computed and compared, and the types of statistical controls for exogenous forces that might be applied.
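
Factors 4 through 6 compound one another: per-respondent costs stay modest when teachers or facilitators administer mailed instruments, but rise steeply once trained staff must travel to each site. The Python sketch below illustrates that interaction; all unit costs are invented placeholders, not toolkit figures.

def data_collection_cost(n_sites, respondents_per_site, trained_administrators,
                         print_and_mail_per_copy=2.0,  # assumed cost per instrument copy
                         travel_per_site=300.0,        # assumed transit, lodging, and meals
                         staff_day_rate=150.0,         # assumed pro-rata salary and benefits
                         days_per_site=2):
    """Rough data collection cost under mailed vs. field-administered scenarios."""
    cost = n_sites * respondents_per_site * print_and_mail_per_copy
    if trained_administrators:
        cost += n_sites * (travel_per_site + staff_day_rate * days_per_site)
    return cost

# The same 40 sites and 25 respondents per site, mailed vs. field-administered:
print(data_collection_cost(40, 25, trained_administrators=False))  # 2000.0
print(data_collection_cost(40, 25, trained_administrators=True))   # 26000.0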

The following categories should be considered when budgeting for an evaluation (a worked tally, including the contingency reserve, follows the box):

Box 6.7 - Evaluation Budget Categories

Staffing (salaries and benefits)

  • Evaluation coordinator
  • Regular evaluation staff
  • Temporary data collection and data entry employees
  • Consultants

Office Space and Equipment (rental and purchases)

  • Office spaces
  • Furniture
  • Telephones (phones, installation fees, local service fees, long distance charges)
  • Fax machine (machine, installation fees, monthly fee)
  • Computers
  • Computer software
  • Internet connection (installation fee and monthly fees)
  • Copy machines
  • Scanner for inputting data from instruments

Travel (transit, lodging, and meals)

  • For data collection
  • For reporting to government officials
  • For reporting at professional and scholarly meetings

Data Collection Instruments

  • Purchase of commercially distributed instruments: Needed copies of instruction manuals, booklets, and response sheets
  • Layout and graphics for instruments developed by the evaluation team
  • Copying, collating, and stapling of instruments developed by the evaluation team

Other

  • Miscellaneous office supplies
  • Postage
  • Paper (if instruments are copied in-house)

Reserve for Contingencies (5-10 percent)
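
As a minimal illustration, Box 6.7 can be tallied as a simple budget with the 5-10 percent contingency reserve applied to the subtotal. The category names follow the box; the amounts are invented placeholders, not toolkit figures.

budget = {
    "Staffing": 48_000,
    "Office Space and Equipment": 9_500,
    "Travel": 12_000,
    "Data Collection Instruments": 4_500,
    "Other": 1_200,
}

subtotal = sum(budget.values())
contingency = 0.10 * subtotal  # anywhere from 5 to 10 percent of the subtotal
total = subtotal + contingency

print(f"Subtotal:    {subtotal:>10,.2f}")     # 75,200.00
print(f"Contingency: {contingency:>10,.2f}")  #  7,520.00
print(f"Total:       {total:>10,.2f}")        # 82,720.00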

Ethical Considerations

Policymakers and planners have a role to play in assuring ethical conduct during an evaluation. They should take measures to assure the following:

  1. Protect evaluators from outside pressures: The policymakers, planners, and developers responsible for an ICT-Intervention often are eager for positive evaluation findings, but incorrect evaluation results will not serve the national interest. Arrangements should be made to protect the evaluators from pressures to bias the evaluation.
  2. Preclude conflicts of interest on the part of the evaluators: The proposed evaluation staff might have conflicts of interest because of family or business ties to the most powerful proponents of the ICT-Intervention or to its developers. Proposed staff should complete conflict of interest disclosures, which should be reviewed before final staffing decisions are made.
  3. Assure needed competence and resources: If the evaluation coordinator and key staff do not have the needed competence, or do not have needed resources (time, access to schools or community centers, and funding) for the evaluation, the results are likely to be invalid and misleading.
  4. Require protection of human subjects: It should be mandated that the evaluation staff not use procedures that might harm the educators and learners participating in the evaluation. Assurance of confidentiality and anonymity for teachers' and learners' self-reports about implementation, proper use, and user satisfaction is likely to improve the accuracy of the data. Once those assurances are made, the evaluators should take steps to protect the confidentiality and anonymity of the data, and policymakers and planners should refrain from doing anything to compromise those efforts.
  5. Permit acknowledgement of the evaluation's shortcomings: All evaluations have some shortcomings. Permitting the evaluation staff to acknowledge them in the report is in the best interest of fully informed decisions based on the report.
  6. Arrange for limited outside review of the draft report: Many people will have been involved in any substantial ICT-Intervention, and they will have different perspectives on it. A draft evaluation report should be reviewed by a few policymakers or planners, a few administrators of schools or community centers where the intervention was implemented, a few teachers using the intervention, and a few outside evaluators. Their suggestions should be given serious consideration, but the evaluation coordinator should make the final decisions on revisions.
  7. Require public dissemination of the final report: Government-sponsored ICT-Interventions are a public investment, and the public should have access to the evaluations (usually Class 4 and above).

Monitoring of the Evaluation

There should always be some higher-level oversight of important ICT-Intervention evaluations. On the other hand, those providing oversight should not overrule the evaluators on technical matters. The following are critical decisions and junctures that the oversight might address:

  • Class of the evaluation to be conducted and the evaluation questions
  • Qualifications of the person who is selected to coordinate the evaluation
  • A draft of the evaluation plan
  • A draft of the instruments that are to be used
  • Results of the field-tests of new instruments
  • Whether preparation for the initial data collection is on schedule
  • A draft of the final report, and its release to the public
