Class 1 Evaluation: Degree of Implementation
This class of evaluation aims to determine the extent to which the ICT Intervention is being implemented as intended.
Education innovations are often not fully implemented as intended. This could be due to the planners' having unreasonable expectations, infrastructure not performing as represented, contentware developers not producing as promised, funding not being adequately disbursed, educators not being satisfactorily trained in using the ICT intervention, or unexpected external factors interfering with implementation.
Early detection of incomplete implementation permits corrective actions. Large disparities may call for new management, review of implementation plans, and consideration of whether the project is so far off course that it should be reset. In addition, if implementation falls far short of what was planned, it often is wasteful to conduct higher-level Classes of evaluation, because poorly implemented innovations rarely prove to be notably effective.
Evaluation of implementation can begin as the elements of the ICT intervention are put in place, assessing whether they meet the planned specifications. Evaluation throughout the implementation rollout in schools or the community can provide early warnings of shortcomings and maximum opportunity to correct them. Evaluation after several years can indicate whether the implementation has decayed since the initial rollout.
The evaluation of implementation can focus on many different questions. Each of these questions may be answered with multiple sources of data (see Section 3). A list of important questions that may be addressed appears below (Box 6.1). The Evaluation Team may select from them as appropriate and add their own.
Box 6.1 - Questions to Determine Degree of Implementation
- To what extent is the planned regulatory framework in place and adhered to?
- To what extent is the planned infrastructure established and fully operational?
- To what extent is the planned hardware installed and fully operational?
- To what extent are the necessary changes in school administration adopted?
- To what extent are the planned broadcast tapes, instructional software, web-based materials, and other media completed to the original specifications?
- To what extent do the ICT technologies perform as planned (access speed, down time, etc.)?
- To what extent are other planned supports established and fully functional?
- To what extent have the personnel involved in the Project been oriented?
- To what extent are the teachers or facilitators trained and performing as planned?
- To what extent are the teachers or facilitators proficient with the ICT technology?
- To what extent are the teachers or facilitators proficient with the planned pedagogy?
- To what extent are the teachers or facilitators integrating the ICT technology and planned pedagogy into their teaching and guidance?
- To what extent were users prepared as intended to use the ICT Intervention?
- To what extent has implementation cost been more or less than expected?
- To what extent have funds been provided as necessary?
- What are the reasons for any large implementation failures that have been found?
- What contributed to them?
- Were they due to opposition to the intervention, insufficient guidance, deficient skills, inadequate incentives, or other factors?
- Which implementation failures are important to correct and which are of minor consequence or even functional?
- What are the best ways to correct the important implementation failures?
Class 2 Evaluation: Degree of Proper Use
This class of evaluation aims to determine the extent to which the ICT-Intervention is being used as intended.
A reference point for this evaluation is the section in the ICT Policy Program Decision Document in Tool 2.2 dealing with "How the technology will be used to advance educational objectives." These include Learning and Instructional objectives set for the Project and its different components in terms of the following taxonomies:
| Learning Objectives Menu | Description |
|---|---|
| 1. Allow the storage or display of information | This level involves the passive hearing or viewing of stored information, individually or as a group. |
| 2. Foster exploration of materials and ideas | At this level the learner is engaged in the conscious pursuit of information that will lead to a better understanding of an existent issue, question, or concept. |
| 3. Enable the application of understanding | At this level, ICTs can provide a powerful tool for applying a concept or understanding to a new situation. |
| 4. Organize materials or ideas to foster analysis | Here ICT tools allow individuals to analyze materials or ideas by organizing them and manipulating them as a means of understanding their relationships. |
| 5. Support evaluation and problem-solving | This level represents the use of ICTs to support the learners' process of evaluation. This can be done by compiling information and resources into a digital repository, developing simulations that immerse students in an environment that helps them evaluate relevant dimensions and solve the problems that are posed, and providing collaborative Web-based environments that support or foster evaluation and problem-solving. |
| 6. Facilitate constructing or designing projects | At the highest level ICTs are used to foster the design or construction of integrating projects, whereby students must explore a wide range of ideas and resources, analyze and evaluate them, and synthesize them in a project. ICTs can fully utilize the multimedia environment to support this process. |

| Teaching Objectives Menu | Additional Description |
|---|---|
| Presentation | of a piece of information |
| Demonstration | of a concept, idea, phenomenon, law, or theory |
| Drill & Practice | to achieve student competence in the application of knowledge |
| Animation and simulation | to abstract reality and offer an efficient and inexpensive environment to reach generalizations or to draw implications from a law or theory |
| Research | for professional development and preparation of lessons |
| Collaboration/communication | on projects with other teachers in the school or in other schools in the country or elsewhere, or with scientists in the field |
| Management of Student Learning | |

| Usage Modality Menu | Description |
|---|---|
| Integrated into Curriculum | Used as an integral part of the teaching/learning process |
| Enrichment | Used as a resource outside the regular classroom |
| Self-standing | Used for distance education, virtual schooling, online courses, etc. |
ICT Interventions can be implemented well but still not be used as intended by users (teachers, facilitators, learners, administrators, etc.). They might not be used at all, used considerably less than intended, or used frequently for unintended purposes. That may be because the users
- misunderstood the guidance or instructions given,
- cannot make the intervention work as they were directed,
- are bored by the intended use and found alternative uses,
- are not convinced of the value of the ICT intervention, or
- were satisfied with the intended use, which led them to find supplemental uses. For instance, a telecenter intended to facilitate communication and develop literacy skills might be used for the marketing of local crafts or for e-mail fraud schemes.
Early detection of low usage and mistaken usage might allow modifications that save the ICT Intervention from failure. Early identification of unintended positive uses may allow modifications that enhance those uses. Quick responses to potential abuses may avert adverse publicity and prevent closure of the intervention. In addition, if an ICT Intervention is not being used much, it is wasteful to conduct higher class evaluations, and if it is being used largely for unintended purposes, that would be important to know when planning the higher class evaluations.
Use of the ICT Intervention should be evaluated as different project components are put in place, including the contentware. Further evaluation throughout the phase-in of implementation allows early alerts to shortcomings and unforeseen opportunities. Class 2 evaluation is sometimes repeated several years after an intervention has been fully implemented to examine whether usage has evolved over time as a result of experience, training, comfort with ICT, new types of users, changing social contexts, or access to new types of resources.
The evaluation of the degree to which the ICT Intervention was used as intended can focus on many different questions. Each of these questions may be answered with multiple sources of data (see Section 3). A list of important questions that may be addressed appears below (Box 6.2). The Evaluation Team may select from them as appropriate and add their own.
Box 6.2 - Questions to Determine Degree of Proper Use
- To what extent is the ICT intervention used in the intended modality:
- Integrated into the curriculum
- Enrichment
- Self-standing
- What portion of class or school time do learners spend using the ICT-Intervention?
- For how many hours per week are the learners using it outside of class or school?
- To what extent are the hardware, software and media used for the intended learning purposes?
- To what extent are the hardware, software and media used for the intended instructional purposes?
- To what extent are the hardware, software and media used for the intended communication and linkage purposes?
- To what extent are the learners using the hardware, media, and software as intended?
- Using them for: memorization of information, retrieval and storage of information, exploration, application, evaluation, and constructing or designing?
- Using them to enhance communication skills: reading, writing, listening, and speaking?
- Using them to develop technology skills?
- In what unintended ways and to what extent are they using the hardware and software? Why?
- To what extent are the learners using the teachers or facilitators as intended?
- In what unintended ways and to what extent are they using the teachers or facilitators? Why?
- To what extent are the learners using the other supports as intended?
- In what unintended ways and to what extent are they using the supports? Why?
- To what extent and in what ways are the learners perhaps abusing the intervention resources, such as by: stealing the hardware, damaging the hardware, erasing media, using the hardware and software for non-educational purposes?
- To what extent do the answers to the above Class 2 questions vary by geographic region, by socio-economic characteristics of the schools and communities, by gender, and by other characteristics that might influence use?
Class 3 Evaluation: Degree of User Satisfaction
This Class of evaluation aims to determine the extent to which the ICT-Intervention is pleasing or disappointing to users.
The subjective reactions of administrators, teachers, learners, and other users to the ICT-Intervention are important indicators of their motivation to use the system and their likely persistence in using it. If many users are generally displeased with the intervention, the chances of it achieving the planned objectives are slim. Dissatisfactions may be caused by poor implementation or incorrect use of the ICT-Intervention but may also be inherent in the intervention itself.
Early assessment of dissatisfactions can help identify implementation failures that can be corrected, confusing guidelines or instructions that can be clarified, or aspects of the intervention that might benefit from improvement. Those aspects might pervade most of the ICT Intervention or may be limited to a few components.
Evaluation of users' degree of satisfaction with the ICT-Intervention may begin soon after implementation starts or it may be delayed for a year or so to allow for the intervention to be implemented and used as intended. Satisfaction is sometimes again evaluated after several years to see if it has changed as a result of experience, training, comfort with ICT, new types of learners or changing social contexts.
The evaluation of the degree of user satisfaction can focus on many different questions. Each of these questions may be answered with multiple sources of data (see Section 3 below). A list of important questions that may be addressed appears below (Box 6.3). The Evaluation Team may select from them as appropriate and add their own.
Box 6.3 - Questions to Determine Degree of User Satisfaction
- To what extent is the intervention convenient to use?
- How easy is it to use?
- How trouble-free is it to use?
- When troubles are encountered, how quickly are they solved?
- To what extent is the intervention interesting and enjoyable?
- [Teachers only] To what extent does the intervention reduce or increase the time spent on preparation, classroom management, discipline, and grading assignments?
- To what extent does the intervention appear to boost or reduce attitudes and skills in respect to the following learning objectives: memorization of information, retrieval and storage of information, exploration, application, analysis, evaluation, and constructing or designing?
- To what extent does the intervention appear to boost or reduce attitudes and skills in respect to communication skills: reading, writing, listening, and speaking?
- To what extent does the intervention appear to boost or reduce attitudes and skills in respect to the learning of technology skills?
- To what extent does the intervention appear to boost or reduce learners' eagerness to attend school and their satisfaction with school?
- To what extent does the intervention appear to boost or reduce learning beyond what is required by school?
- To what extent do the answers to the above Class 3 questions vary by geographic region, by socio-economic characteristics of the learners, by gender, and by other characteristics that might influence satisfaction?
Note that Class 3 evaluations only assess teachers' and students' subjective assessment of the ICT's effect on student learning.
Class 4 Evaluation: Degree of Effectiveness
This Class of evaluation aims to determine the extent to which the ICT-Intervention is effectively fulfilling the educational objectives set for it. The reference point here is the set of educational objectives explicitly stated in ICT Policy Program Decision Document in Tool 2.2. They are expressed in one or more of the following:
- Expanding educational opportunities
- Increasing efficiency
- Enhancing quality of learning
- Enriching quality of teaching
- Facilitating skill formation
- Establishing and sustaining lifelong learning
- Improving policy planning and management
- Advancing community linkages (including Community centers)
Details of these objectives are described in ICT for Education: Analytical Review, Section 5.
A major subset of these objectives is developing intended knowledge, attitudes, and skills in the learners. Here a learner may be a school student, a worker, an adult lifelong learner, a teacher (if the ICT intervention is to improve teaching), an administrator (if the ICT intervention includes improvement of policy planning and management), or a member of a community in a community learning center.
Perfect implementation, widespread usage as intended, and high satisfaction on the part of users do not assure that the ICT Intervention has been effective in fostering the intended educational objectives. Judging effectiveness requires a Class 4 evaluation, often referred to as an "impact evaluation," "output evaluation," "outcome evaluation," or "summative evaluation." Class 4 evaluations address the first bottom line: effectiveness. They examine whether the ICT-Intervention effectively met targeted educational objectives, including enhancing the quality of participants' learning. They may also broadly address whether the participants learned other things that had not been targeted. Finally, they may examine whether the added costs of the ICT-Intervention are justified by the extent and nature of fulfillment of these objectives.
While the first three classes of evaluation are directed at helping the ICT-Intervention developers and practitioners refine the implementation of the intervention, Class 4 evaluation is usually of more interest to policymakers and planners. The results of Class 4 evaluations can be used to help decide whether to
- modify the ICT intervention in hopes of making it more effective;
- expand the intervention to other geographic areas, grade levels, subjects, or target groups; or
- abandon the intervention as unsatisfactory.
The public, and even policymakers, often think that proof of effectiveness only requires objective measures showing that the targeted educational objectives have been achieved, and conversely, proof of ineffectiveness only requires objective measures showing the objectives have not been achieved. That is mistaken for several reasons. The central focus of Class 4 evaluations is to examine whether the ICT-Intervention caused gains or losses in respect to the educational objectives. The evaluators must know how beneficiaries not exposed to the intervention perform in respect to those objectives if they are to have a basis for determining whether the intervention caused gains (or losses). For instance, learners may make progress on the learning objectives from their normal school instruction, from out-of-school learning, and sometimes even from natural maturation. In addition, some shortcomings in evaluation procedures may upwardly or downwardly bias the results, such as administering a pre-test that then inadvertently prepares learners to do better on the post-test because they have been "sensitized" to the focus and procedures of the test. On the other hand, evaluation procedures themselves can sometimes be disruptive and adversely affect learning.
Consequently, Class 4 evaluations usually use both a "treatment group" and "control group," composed or selected to be as similar as possible, and then compare the learning outcomes in the two groups. Each group might be composed of multiple schools, many classes, and hundreds of learners. Class 4 evaluations also usually take "pre-measures" of the learning objectives administered before the learners begin participating in the intervention and then take "post-measures" after the learners have completed specified parts or all of the intervention, with identical measures administered at the same points of time to the control group. A "pre-measure" is taken just before learners begin an ICT-Intervention that is being evaluated. It indicates baseline knowledge, attitudes, and skills. A "post-measure" is taken just as learners complete the intervention. The best assessment of an intervention's effects is to compare the difference in the post-measures and pre-measures of the intervention group with the difference in the post-measures and pre-measures of a comparable control group that has been subject to measurement at the same times as the intervention group.
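In symbols, this standard comparison (sometimes called a difference-in-differences estimate; the notation here is illustrative, not drawn from the Decision Document) is:

\[
\widehat{\text{Effect}} = \left(\bar{Y}_T^{\,\text{post}} - \bar{Y}_T^{\,\text{pre}}\right) - \left(\bar{Y}_C^{\,\text{post}} - \bar{Y}_C^{\,\text{pre}}\right)
\]

where \(\bar{Y}_T\) and \(\bar{Y}_C\) denote the mean scores of the treatment and control groups on the same measure. A positive value suggests gains beyond what normal instruction, out-of-school learning, and maturation alone would have produced.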
These arrangements require that Class 4 evaluations be planned well before the ICT intervention begins or at least before the studied treatment beneficiaries begin participating in the intervention. Because educational interventions are often not well implemented in their first year and have some early operational problems, it is usually desirable to plan on a two-year period of Class 1-3 evaluations and program refinement, before starting a Class 4 evaluation. The program should be operating stably and as expected before this type of evaluation is started. The Class 4 evaluation, however, always should start with a treatment group of beneficiaries who have not yet been exposed to the intervention and simultaneously with a control group. Then it must follow both groups until the treatment group completes participation in the intervention. In addition, it may revisit both groups a few years later to determine whether any initial gains from the intervention are retained by participants after completing the intervention.
The evaluation of the degree of effectiveness can focus on many different questions. Each of these questions may be answered with multiple sources of data (see Section 3 below). A list of important questions that may be addressed appears below (Box 6.4). The Evaluation Team may select from them as appropriate, depending on the stated educational objectives of the ICT Project, and add their own. Evaluation of effectiveness in respect to knowledge, attitudes, and skills may also include questions about the learners' characteristics that might also affect their performance.
Box 6.4 - Questions to Determine Degree of Effectiveness
- To what extent has the ICT intervention extended educational opportunities to groups that were not well served?
- To what extent has the ICT intervention increased efficiency of educational offerings in different geographic areas of the project?
- To what extent did the learners using the ICT-Intervention gain or lose more than they would have otherwise in each of the subject-areas enhanced by the intervention?
- To what extent did the learners using the ICT-Intervention gain more than they would have otherwise in: Memorization of information, retrieval and storage of information, exploration, application, analysis, evaluation, and constructing or designing?
- To what extent did the learners using the ICT-Intervention gain more than they would have otherwise in communication skills: reading, writing, listening, and speaking?
- To what extent did the learners using the ICT-Intervention gain more than they would have otherwise in technology utilization?
- To what extent did the learners using the ICT-Intervention gain more than they would have otherwise in respect to learning beyond what is required by school?
- To what extent did the learners using the ICT-Intervention gain more than they would have otherwise in respect to eagerness to attend school and satisfaction with school?
- To what extent do the answers to the above Class 4 questions vary by geographic region, by socio-economic characteristics of the learners, by gender, and by other characteristics that might influence effectiveness?
- To what extent has the ICT intervention enriched or harmed the teaching process?
- To what extent has the ICT intervention facilitated or impaired skill formation?
- To what extent has the ICT intervention expanded opportunities for lifelong learning?
- To what extent has the ICT intervention improved or eroded educational planning and management?
- To what extent has the ICT intervention advanced community linkages in the areas served by the Project compared to other areas?
Class 5 Evaluations: Degree of Subsequent Application (Limited to Learning Objectives)
This Class of evaluation aims to determine the extent to which the ICT-Intervention is effective in preparing learners who subsequently apply the learned knowledge, attitudes, and skills in their later schooling, jobs, and social lives. If the ICT intervention is a pilot that is intended to test the implementability of the intervention or if it is limited to a specific technology, there may not be time to apply Class 5 before the pilot is modified and extended to a larger scale. However, if the pilot aims to change learners' subsequent application of knowledge, attitudes, or skills, then Class 5 Evaluation should be used as part of the evaluation of the pilot.
One of the claimed potentials of ICTs is their ability to foster in learners the high-level cognitive skills of application, problem solving, and learning how to learn. Learners may acquire considerable new capabilities as the result of an education or training intervention and yet not subsequently apply them. That can happen for many reasons:
- The intervention did not train the learners in how to apply what they learned and ultimately learn on their own;
- The knowledge, attitudes, and skills learned are not relevant to the learners' subsequent lives;
- Circumstances in the learners' subsequent lives make the application difficult or unrewarding.
An example of the latter would be when learners increase their creative problem-solving abilities but subsequently work for companies that are run autocratically by top managers who discourage creative problem-solving.
Class 5 evaluations address the hard question: Has the enhanced learning made a difference in the subsequent thinking and behavior of the learners? Class 5 evaluation is even more complex and difficult to conduct than Class 4 evaluation. It aims to discover whether the intervention caused changes in how the learners think and behave in their lives several years after completing participation in the intervention. That is difficult to determine because the extent of application will be affected not only by the capabilities developed during participation in the ICT-Intervention but also by the capabilities acquired over the learners' entire schooling and by the circumstances of their subsequent lives. In addition, Class 5 evaluation requires keeping track of the intervention group learners and the control group learners through their subsequent lives and securing their cooperation for additional data collection, both of which can be difficult. Once the evaluators lose track of 30 percent of the learners in either group, it is hard to know how representative the located learners are of the initial groups.
Class 5 evaluations are rare. There are several reasons for that. It is presumed that if people learn something, they will subsequently apply it, a presumption that is frequently not correct. Class 5 evaluations are difficult and expensive to conduct. Policymakers who supported (or opposed) a new ICT-Intervention rarely remain in office long enough to request such evaluations, and even when they do, their attention has often focused on other matters. Nevertheless, Class 5 evaluations can be valuable. They go beyond the objectives set for the intervention to its intermediate level goals. In other words, Class 4 evaluation addresses outputs and Class 5 addresses intermediate outcomes. In essence, Class 4 assesses "merit" and Class 5 assesses "value." The ultimate goals or outcomes are addressed in Class 6 evaluations.
Class 5 evaluations usually do not begin until 1-10 years after participants have completed an ICT-Intervention. Normally, these evaluations are a follow-up to Class 4 evaluations, using the same treatment and control groups as that evaluation.
The evaluation of the application can focus on many different questions. Each of these questions may be answered with multiple sources of data (see Section 3 below). A list of important questions that may be addressed appears below (Box 6.5). The Evaluation Team may select from them as appropriate, and add their own. Evaluation of application of knowledge, attitudes, and skills may also include questions about the learners' characteristics that might also affect learning application.
Box 6.5 - Questions to Determine Degree of Subsequent Application
- To what extent did the learners using the ICT Intervention apply more or less than they would have otherwise in each of the subject-areas enhanced by the intervention?
- To what extent did the learners using the ICT Intervention apply more or less than they would have otherwise in: Memorization of information, retrieval and storage of information, exploration, application, analysis, evaluation, and constructing or designing?
- To what extent did the learners using the ICT Intervention apply more or less than they would have otherwise in communication skills: reading, writing, listening, and speaking?
- To what extent did the learners using the ICT Intervention apply more or less than they would have otherwise the cognitive skills of critical thinking, problem solving, applying knowledge and skills to new situations, and learning on their own?
- To what extent did the learners using the ICT Intervention apply more or less than they would have otherwise in technology utilization?
- To what extent did the learners using the ICT Intervention apply more or less than they would have otherwise in respect to lifelong learning?
- To what extent do the answers to the above Class 5 questions vary by geographic region; by socio-economic characteristics of the learners; by gender of the learners; and by the family, job, and community contexts of the learners?
Class 6 Evaluation: Degree of National Effect
This Class of evaluation aims to determine the extent to which the ICT-Intervention is effective in contributing to the nation's developmental goals. If the ICT intervention is a pilot that is intended to test the implementability of the intervention or if it is limited to a specific technology, there may not be time to apply Class 6 before the pilot is modified and extended to a larger scale. Moreover, ICT interventions that are short-lived pilots or small in scale can hardly be expected to produce a noticeable effect on the nation's development goals.
In developing countries, large investments in ICT Interventions are usually undertaken with the hope that they will contribute to the country's development. Even when a Class 4 evaluation shows that all the educational objectives have been effectively fulfilled and a Class 5 evaluation indicates that an intervention has substantially increased the application of learned knowledge, attitudes, and skills, these results do not ensure contributions to national development. It may be that the educational objectives fulfilled and the applied capabilities were not the ones needed by the country or it may be that other factors countered these effects. A Class 6 evaluation seeks to determine whether the ICT intervention ultimately contributed to national development, including economic development, human resource development, poverty alleviation, and gender equity.
This is the most complex level of evaluation to conduct, because it necessarily must cover a long time-span over which many other factors will be affecting national development, both boosting and depressing it, and thus it is very difficult to determine the unique effects of the ICT Intervention. Usually these evaluations are based on case-study methods that examine many types of information from many sources, including longitudinal national indicator data, documentary records, and expert opinion.
Class 6 evaluations are rare because of the long period of time that must pass, their complexity, and because interest in a given intervention fades with time. They are important, however, because they examine whether an ICT-Intervention has contributed to its ultimate goals.
The evaluation of national effect can focus on many different questions. Each of these questions may be answered with multiple sources of data (see Section 3 below). A list of important questions that may be addressed appears below (Box 6.6). The Evaluation Team may select from them as appropriate, and add their own.
Box 6.6 - Questions to Determine Degree of National Effect
- To what extent did the ICT-Intervention boost or reduce economic development? How?
- To what extent did the ICT-Intervention boost or reduce human resource development? How?
- To what extent did the ICT-Intervention boost or hinder poverty alleviation? How?
- To what extent did the ICT-Intervention boost or hinder gender equity? How?
- To what extent do the answers to the above Class 6 questions vary by geographic region and by the socio-economic characteristics of the learners?
Evaluation Designs
- Class 1, 2, and 3 evaluations require data collection at only one point in time and only from the ICT Intervention sites and participants. This is often called a cross-sectional design.
- Class 4 evaluations usually require data collection at two or more points in time, and from both intervention participants and from a comparable control group. These are called "randomized experiments" if learners, classrooms, or schools are randomly assigned to either the intervention or control group. They are called "quasi-experiments" if there is no random assignment but other means are used to match those in the intervention group and in the control group. Occasionally there will be more than one intervention group, for instance, when two levels of intensity or duration of the intervention are to be evaluated. (A minimal computational sketch of such a comparison follows this list.)
- Class 5 evaluations usually follow up several years later on the same groups used in a Class 4 evaluation, but they collect follow-up data on the later-life application of the taught knowledge, attitudes, and skills only once.
- Class 6 evaluations usually rely heavily on developmental indicators collected by a country for a decade or more prior to the intervention and for a decade or more after the first several cohorts of participants have completed the intervention.
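To make the Class 4 comparison concrete, here is a minimal computational sketch. The score data are fabricated for illustration, and the function names are hypothetical rather than part of any standard evaluation package:

```python
import random

def assign_groups(school_ids, seed=42):
    """Randomly split schools into treatment and control groups
    (the random assignment that makes a design a 'randomized experiment')."""
    rng = random.Random(seed)
    shuffled = list(school_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean(scores):
    return sum(scores) / len(scores)

def difference_in_differences(pre_t, post_t, pre_c, post_c):
    """Gain of the treatment group beyond the gain of the control group."""
    return (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))

treatment, control = assign_groups(["S1", "S2", "S3", "S4", "S5", "S6"])

# Fabricated pre- and post-measure scores for learners in each group.
pre_t, post_t = [52, 48, 61, 55], [68, 66, 75, 70]
pre_c, post_c = [51, 50, 59, 56], [58, 57, 66, 61]

effect = difference_in_differences(pre_t, post_t, pre_c, post_c)
print(f"Treatment schools: {treatment}; control schools: {control}")
print(f"Estimated effect: {effect:.2f} score points")
# Both groups improved between the pre- and post-measures, but the
# treatment group improved more; the subtraction isolates that extra gain.
```

In a quasi-experiment, the `assign_groups` step would be replaced by matching intervention and comparison schools on observable characteristics; the subtraction logic stays the same.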
3. Modes of Measurement of Evaluation
There are many modes of data collection that can be used in the evaluations of ICT Interventions. The most likely ones are the following.
- Records: School or community center records that might be reviewed include staff employment records, procurement records, learner and instructor attendance records, repair records, and learners' files.
- Journals/diaries: Instructors, learners, or graduates may be asked to keep journals or diaries of their activities, thoughts, or feelings. The evaluators would then review this information.
- E-mail archives: E-mail messages can be archived and then periodically reviewed by the evaluators.
- Computer Web server logs: During web-based instruction, Web server logs can keep track of which web links the learners go to, how long they stay at each, and whether they return (a minimal analysis sketch follows this list). The logs can do that for individual learners and also for the entire group of learners using the intervention and the entire control group.
- Surveys: Surveys are a relatively efficient way of collecting information and opinions from large numbers of administrators, instructors, or learners, but they don't allow probing particularly interesting or perplexing responses. They may be distributed and returned by mail or electronically.
- Interviews: Structured interviews are the same as surveys, but administered by someone who reads the questions, making them well suited for collecting from individuals who would be unlikely to return a survey and from those with limited literacy skills. Semi-structured interviews, which ask a series of specific questions and also have the interviewer probe some of the answers, permit exploring well-formulated issues while also examining unexpected responses. Unstructured interviews, which are essentially conversations on a few broad topics, are a good way to explore general themes of interest to the evaluators.
- Focus groups: Focus groups bring a small number of people together to discuss sensitive matters in a supportive environment. If done well, the people often will provide more revealing information than they would in one-on-one interviews. If done poorly, some of the people will bias their responses to please the other people in the focus group.
- Observations: Observations guided by protocols or coding systems allow evaluators to determine the actual behavior of the administrators, instructors and learners, which during periods of change is often perceived and self-reported by the actors with considerable bias.
- Video recordings: This is a substitute for live observations. Now videos in learning centers can be transmitted over the Internet to evaluators located hundreds or thousands of miles away.
- Teacher tests: These are the most common way of assessing learning over short periods of time, but teachers vary considerably in the tests they construct and how they grade them.
- Embedded quizzes in computerized instruction: These allow quizzing learners at the optimum points during instruction. They can be automatically scored, providing immediate feedback to the students and a detailed record for the instructor. Looking at which items learners miss most often helps the ICT-Intervention developers and instructors identify where the learning system needs improvement. They cannot be used for questions that require written responses.
- National or standardized tests: These tests are commonly used to assess academic knowledge and skills after a year or more of instruction. They focus on knowledge and skills considered widely important throughout a country, and thus will not cover new objectives that may be targeted by an intervention. "Normed" tests are designed to rank order people according to given capabilities. "Criterion-based" tests are designed to determine whether a given person has mastered a certain body of knowledge or skills.
- Psychometric affective instruments: These instruments measure values, attitudes, and predispositions. They are developed by sophisticated procedures similar to those used in developing standardized achievement tests.
- Performance assessments: These judge complex skills by having learners demonstrate their capabilities in real-world or simulated real-world situations. For instance, to assess students' abilities to design and conduct scientific experiments, the learner might be asked to do that for a given hypothesis with the equipment provided at a workbench.
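As an illustration of what server-log analysis can yield, the sketch below estimates how long each learner spent on each page. The log format (one line of learner_id, ISO timestamp, and URL) is an assumption made for the example; real server logs, such as Apache's, require fuller parsing:

```python
from collections import defaultdict
from datetime import datetime

def dwell_times(log_lines):
    """Estimate how long each learner spent on each page.

    Each line is assumed to look like: learner_id,ISO-timestamp,url
    Dwell time on a page is approximated as the gap until the learner's
    next request; the last page of a session is skipped because the log
    alone cannot show when the learner left it.
    """
    visits = defaultdict(list)  # learner_id -> [(timestamp, url), ...]
    for line in log_lines:
        learner, ts, url = line.strip().split(",")
        visits[learner].append((datetime.fromisoformat(ts), url))

    totals = defaultdict(float)  # (learner_id, url) -> seconds on page
    for learner, entries in visits.items():
        entries.sort()
        for (t0, url), (t1, _next_url) in zip(entries, entries[1:]):
            totals[(learner, url)] += (t1 - t0).total_seconds()
    return totals

# Fabricated log entries for two learners.
log = [
    "amina,2004-03-01T09:00:00,/lesson1",
    "amina,2004-03-01T09:04:30,/quiz1",
    "amina,2004-03-01T09:10:00,/lesson2",
    "brook,2004-03-01T09:01:00,/lesson1",
    "brook,2004-03-01T09:02:00,/lesson2",
]
for (learner, url), seconds in sorted(dwell_times(log).items()):
    print(f"{learner} spent {seconds / 60:.1f} min on {url}")
```

Aggregating the same totals over all learners in the intervention and control groups would give the group-level usage comparisons described above.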
Not all these modes of data collection are likely to be appropriate for all six classes of evaluation that have been described in this Tool. The following table shows the classes for which each mode is most likely to be appropriate.
Table 6.1 - Modes of Measurement Appropriate for Different Classes of Evaluation

| Form of Data Collection | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 |
|---|---|---|---|---|---|---|
| Records | Y | Y | | | | |
| Journals/diaries | Y | Y | Y | | Y | |
| E-mail archives | Y | Y | Y | | Y | |
| Computer web server logs | Y | Y | Y | | | |
| Surveys | Y | Y | Y | | Y | Y |
| Interviews | Y | Y | Y | | Y | Y |
| Focus groups | | Y | Y | | Y | Y |
| Observations | Y | Y | Y | | Y | |
| Video recordings | Y | Y | Y | | Y | |
| Teacher tests | | | | Y | | |
| Embedded quizzes in computerized instruction | | | | Y | | |
| National or standardized tests | | | | Y | | |
| Psychometric affective instruments | | | Y | Y | | |
| Performance assessments | | | | Y | | |
4. Management and Oversight of Evaluation
While policymakers and planners do not need to know the details of how to conduct ICT-Intervention evaluations, they may need to know the following:
- How to identify a competent coordinator of evaluation,
- Time needed to conduct various classes of evaluation,
- How to estimate the costs of various evaluations,
- Key ethical considerations involved in evaluations,
- Oversight they might usefully exercise over the evaluations.
Selecting an Evaluation Coordinator
The coordinator of any substantial evaluation requires both strong technical skills in evaluation and strong administrative skills. Usually the technical skills will require a master's degree, if not a doctoral degree, in the social sciences, such as psychology, sociology, public policy, education research, or program evaluation. Economists usually have little training in data collection procedures but strong training in advanced data analysis techniques. The technical skills developed during graduate training should have been further developed by progressively more responsible experience working on evaluations. The administrative skills seldom are taught in the above-cited graduate programs, but may be acquired from other training in business administration or public administration, or from practical experience. The skill level needed by the evaluation coordinator becomes progressively higher with the ascending classes of evaluation described above. The following are characteristics that should be considered when appointing or selecting a coordinator of the evaluation:
- Candidate's undergraduate and graduate degrees and grades.
- Level of responsibilities the candidate held when previously doing evaluation work, and the quality of his or her performance.
- Other evaluation experts' judgments of the quality of prior evaluations directed or worked on by the candidate.
- Candidate's track record of completing evaluation work on time and within budget.
- Probability that the candidate will remain with the evaluation(s) from the planning phase through to completion.
Time Needed for the Evaluation
- Class 1, 2, and 3 evaluations usually can be completed in 1-6 months, provided that they do not require development of new psychometric affective instruments, that a small evaluation team will not have to travel to many distant communities and schools, and that the needed data analysis is fairly straightforward.
- Class 4 evaluations usually require several months of advance planning, then administration of "pre-measures" just before participants begin the ICT-Intervention and "post-measures" as they complete it, with simultaneous administration in the control group. After that, the evaluators will usually need several months to check the data, analyze it, prepare a report draft, and revise it following review and feedback by others. Thus, if the intervention will take place for just one school year, a Class 4 evaluation will take about two years to complete. The time will be longer if sophisticated data collection instruments have to be developed, a small evaluation team will have to travel widely to collect the data, or the data analysis is complex. Sometimes an evaluation can be hastened by contracting out sophisticated instrument development and by hiring temporary staff to assist with extensive data collection. If the intervention extends for two or more school years, the evaluation team will have some down-time between administration of the pre-measures and administration of the post-measures, unless the evaluation plan calls for interim measures at the end of each year or calls for following several cohorts of learners beginning the intervention each successive year.
- Class 5 evaluations are usually follow-ups of Class 4 evaluations initiated several years after the Class 4 evaluation is completed. In that case, they may require only about one year to complete. The time will be longer if sophisticated data collection instruments have to be developed, the participants in the treatment group and control group have dispersed widely, a small evaluation team will have to travel extensively to collect the data, or the data analysis is complex.
- Class 6 evaluations may take as little as a year or as much as several years.
Budgeting the Evaluation
The following seven factors have the biggest effect in determining the costs of an evaluation (a rough cost sketch follows the list).
- Class of the evaluation: As briefly indicated above, Class 4, 5, and 6 evaluations are more complex to conduct than the lower class evaluations. Classes 4 and 5 generally use control groups. Class 6 requires intensive case studies, trying to control for the effect of the many other factors that could be boosting or suppressing the development goals during the years over which the ICT-Intervention was expected to be contributing to those goals. These complexities add to the cost of the evaluation.
- Number of evaluation questions: Under each class of evaluation described above are listed several potential evaluation questions. In some cases all the questions may be important to the ICT-Intervention stakeholders, but in other cases just a few might be targeted. The cost of the evaluation increases with the number of evaluation questions.
- Number, length, and sophistication of instruments developed: Some evaluations may require only administration of currently used national examinations, whereas others may require newly developed achievement tests and psychometric affective instruments. Such differences dramatically affect the costs of an evaluation. Tests that measure academic or occupational skills, particularly high order skills such as analysis, evaluation, and design skills, tend to be the most complex and time-consuming to develop. Similarly, performance assessments of these skills are complex to develop. Psychometric affective measures usually require two or more rounds of development, field-testing, and data analysis.
- Expertise needed to administer instruments: Many instruments are simple to administer and can be mailed to schools or community centers where the teachers or facilitators administer them. Others are more complex and require specially trained personnel. This is the case for semi-structured interview guides, focus group scripts, most observational coding forms, and performance assessments. To administer these types of instruments, the staff or temporary employees have to be trained, have to travel to the intervention sites (except that interviews might be conducted by phone), and have to receive some supervision and monitoring of their work to assure quality control. In addition, in high-stakes evaluations, it may be desirable to send staff or temporary employees into the field even when easily administered instruments are used, in order to prevent cheating and fraud.
- Number and distance of the data collection sites: Even when mailing out instruments to teachers or facilitators who will administer them, the costs will rise with the number of sites and their distance from the evaluation headquarters. If specially trained staff or temporary employees have to administer the instruments, the costs will rise much more rapidly because of travel expenses.
- Number of people from whom data is collected: For a given set of data collection sites, the costs will increase in proportion to the number of people from whom data is collected. When instruments can be administered by the teachers or facilitators, the incremental costs will mostly result from the cost of printing and mailing of additional copies of the instrument. When the instruments have to be administered by a specially trained person either individually (such as with interviews or performance assessments) or in small groups (such as with focus groups), the incremental costs will be high, including a pro-rata share of the person's salary and benefits as well as the hotel and meal expenses.
- Complexity of the data analysis: The complexity of the data analysis is partly a function of the Class of the evaluation, the number of evaluation questions addressed, the number of instruments administered, and the number of items on each. But it is also a function of whether the contexts of the intervention implementation are to be assessed, the number of subgroups of learners for whom the results are to be computed and compared, and the types of statistical controls for exogenous forces that might be applied.
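To see how these factors compound, consider a back-of-the-envelope sketch. All unit costs and site counts below are invented placeholders; a real budget would substitute local figures and use the categories in Box 6.7:

```python
def data_collection_cost(sites, people_per_site, cost_per_instrument,
                         needs_field_staff, travel_cost_per_site=0.0,
                         daily_rate=0.0, days_per_site=0):
    """Rough data-collection cost: printing/mailing scales with the number
    of respondents, while trained field staff add travel and salary costs
    for every site visited."""
    cost = sites * people_per_site * cost_per_instrument
    if needs_field_staff:
        cost += sites * (travel_cost_per_site + daily_rate * days_per_site)
    return cost

# Mailed surveys administered by teachers: costs scale mostly with copies.
mailed = data_collection_cost(sites=40, people_per_site=50,
                              cost_per_instrument=2.0,
                              needs_field_staff=False)

# Performance assessments run by trained staff: travel and salaries dominate.
fielded = data_collection_cost(sites=40, people_per_site=50,
                               cost_per_instrument=2.0,
                               needs_field_staff=True,
                               travel_cost_per_site=300.0,
                               daily_rate=120.0, days_per_site=3)

print(f"Mailed surveys:      {mailed:,.0f}")   # 4,000
print(f"Fielded assessments: {fielded:,.0f}")  # 30,400
```

The same instrument administered to the same number of people costs several times more once specially trained personnel must travel to each site, which is why the choice of instruments drives the budget so strongly.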
The following categories should be considered when budgeting for an evaluation:
Box 6.7 - Evaluation Budget Categories
Staffing (salaries and benefits)
- Evaluation coordinator
- Regular evaluation staff
- Temporary data collection and data entry employees
- Consultants
Office Space and Equipment (rental and purchases)
- Office spaces
- Furniture
- Telephones (phones, installation fees, local service fees, long distance charges)
- Fax machine (machine, installation fees, monthly fee)
- Computers
- Computer software
- Internet connection (installation fee and monthly fees)
- Copy machines
- Scanner for inputting data from instruments
Travel (transit, lodging, and meals)
- For data collection
- For reporting to government officials
- For reporting at professional and scholarly meetings
Data Collection Instruments
- Purchase of commercially distributed instruments: Needed copies of instruction manuals, booklets, and response sheets
- Layout and graphics for instruments developed by the evaluation team
- Copying, collating, and stapling of instruments developed by the evaluation team
Other
- Miscellaneous office supplies
- Postage
- Paper (if instruments are copied in-house)
Reserve for Contingencies (5-10 percent)
Ethical Considerations
Policymakers and planners have a role to play in assuring ethical conduct during an evaluation. They should take measures to assure the following:
- Protect evaluators from outside pressures: The policymakers, planners, and developers responsible for an ICT-Intervention often are eager for positive evaluation findings, but incorrect evaluation results will not serve the national interest. Arrangements should be made to protect the evaluators from pressures to bias the evaluation.
- Preclude conflicts of interest on the part of the evaluators: The proposed evaluation staff might have conflicts of interest because of family or business ties to the most powerful proponents of the ICT-Intervention or the developers. Proposed staff should complete conflict of interest disclosures and they should be reviewed before final decisions on staffing.
- Assure needed competence and resources: If the evaluation coordinator and key staff do not have the needed competence, or do not have needed resources (time, access to schools or community centers, and funding) for the evaluation, the results are likely to be invalid and misleading.
- Require protection of human subjects: It should be mandated that the evaluation staff do not use procedures that might pose harm to the educators and learners participating in the evaluation. Assurance of confidentiality and anonymity for teachers' and learners' self-reports about implementation, proper use, and user satisfaction is likely to improve the accuracy of the data. Once those assurances are made, the evaluators should take steps to protect the confidentiality and anonymity of the data. Policymakers and planners should refrain from doing anything to compromise those efforts.
- Permit acknowledgement of the evaluations' shortcomings: All evaluations have some shortcomings. Permitting the evaluation staff to acknowledge those in the report is in the best interest of fully informed decisions that might be based on the report.
- Arrange for limited outside review of the draft report: Many people will have been involved in any substantial ICT Intervention and they will have different perspectives about the intervention. A draft evaluation report should be reviewed by a few policymakers or planners, a few administrators of schools or community centers where the intervention was implemented, a few teachers using the intervention, and a few outside evaluators. Their suggestions should be given serious consideration, but the evaluation coordinator should make the final decisions on revisions.
- Require public dissemination of the final report: Government sponsored ICT Interventions are a public investment, and the public should have access to the evaluations (usually Class 4 and above).
Monitoring of the Evaluation
There should always be some higher level oversight of important ICT-Intervention evaluations. On the other hand, that oversight should not overrule the evaluation team on technical matters. The following are critical decisions and junctures that the oversight might address:
- Class of the evaluation to be conducted and the evaluation questions
- Qualifications of the person who is selected to coordinate the evaluation
- A draft of the evaluation plan
- A draft of the instruments that are to be used
- Results of the field-tests of new instruments
- Whether preparation for the initial data collection is on schedule
- A draft of the final report, and its release to the public
5. Resources
Monitoring and Evaluation of ICT in Education Projects: A Handbook for Developing Countries
http://www.infodev.org/files/2942_file_M_E_ICT_Education_draft_WSIS_optimized.pdf
This volume is intended as an introduction and guide for busy policymakers and practitioners grappling with how to understand and assess the ICT-related investments underway in the education sector. It includes the following chapters:
- Monitoring and Evaluation of ICT for Education: An Introduction.
- Monitoring and Evaluation of ICT for Education Impact: A Review
- Core Indicators for Monitoring and Evaluation Studies for ICT in Education
- Developing a Monitoring and Evaluation Plan for ICT in Education
- Pro-Equity Approaches to Monitoring and Evaluation: Gender, Marginalized Groups and Special Needs Populations
- Dos and Don'ts in Monitoring and Evaluation
ICT Indicators - UNESCO-Bangkok
www.unescobkk.org/education/ict/v2//info.asp?id=10937
This portal links to a wide array of resources that can be of help when planning evaluations of ICT Interventions. There are detailed examples of country indicators of ICT use and impact in education, examples of national standards for evaluation, and tools for measuring the impact of ICT in education.
Development Gateway
www.developmentgateway.org
Click on "Advanced Search" near the upper left. In the "Search For" window, type "evaluation," and beside "Select Topics" scroll to and click "E-Learning." This will provide an annotated list of web-based resources related to the evaluation of computer mediated education and training, with links to each resource.
PLUM
http://iet.open.ac.uk/plum/evaluation/plum.html
This site, developed by the British Open University and University of Hull, is intended to help non-evaluators plan and conduct simple evaluations of ICT Interventions. Click on the "Contents Page" to reach the links.
GEM: Gender Evaluation Methodology for Internet and ICTs
www.apcwomen.org/gem
This web site provides guidance on incorporating gender analysis in evaluations of ICT-Interventions.
Evaluating Computer and Web Instruction
Gregg B. Jackson
http://www.techknowlogia.com/TKL_active_pages2/CurrentArticles/main.asp?IssueNumber=10&FileType=HTML&ArticleID=256
This short article identifies new opportunities within web technologies for evaluating web-based instruction. These include web server logs that can keep track of each learner's use of given web resources, video recording of networked classroom activities, web-based surveys, automatically scored quizzes, and simulations used for performance assessments.
Integrated Rural Development and Universal Access: Towards a Framework for Evaluation of Multipurpose Community Telecentre Pilot Projects Implemented by ITU and Its Partners
http://www.devmedia.org/documents/Ernberg.htm or
http://www.itu.int/ITU-D/univ_access/telecentres/papers/guelph.html
This framework for evaluating telecentres is intended to answer the following evaluation questions: "Does access to ICTs in rural areas contribute to social, economic and cultural development and, if so, how and what are the benefits? Are there any adverse effects and, if so, which? Do the Multipurpose Community Centers (MCCs) provide a sustainable way of providing universal access to ICTs and what are the conditions that must be met to make them economically viable and replicable? What are the best practices for the organization, management and operation of MCCs?" The Annexes include a long list of indicators of community contexts, and three questionnaires that might be given to users, including one that asks about their perceptions of impacts.
An Educator's Guide to Evaluating the Use of Technology in Schools and Classrooms
Sherri Quinones and Rita Kirshstein. Washington DC: U.S. Department of Education.
www.ed.gov/pubs/EdTechGuide/title.html
This is a simple step-by-step guide to help educators lacking professional training in evaluation to conduct evaluations of ICT Interventions in schools. It could allow them to do acceptable Class 1, 2, and 3 evaluations, but is not likely to result in competent Class 4, 5, or 6 evaluations.
Sun Associates: Evaluating the Impact of Technology on Teaching and Learning
http://www.sun-associates.com/eval/sample
This website is offered by a private U.S. company that conducts evaluations of ICT Interventions. The company makes some of its web-based resources available publicly at this site.
Distance Education: Guidelines for Good Practice
American Federation of Teachers
http://www.aft.org/higher_ed/downloadable/distance.pdf
This document proposes and explains 14 standards for college and university use of web-based distance education. Those could be used as some of the criteria by which such instruction could be evaluated. The document also includes the survey form distributed to faculty members to solicit their opinions about the use of web-based distance instruction in higher education.
Formative Evaluation for Educational Technologies
Barbara N. Flagg. Hillsdale, NJ: Lawrence Erlbaum Associates, 1990.
"Formative evaluation" includes Class 1, 2, and 3 evaluations, as described in this Tool. This book provides a good general introduction to the methods of such evaluations and five case studies describing specific evaluations used for large ICT Interventions.
Evaluating Educational Technology: Effective Research Designs for Improving Learning
Geneva D. Haertel and Barbara Means. NY: Teachers College Press, 2003.
This book focuses on designs to determine the impacts of ICT Interventions, designs that would be used for the Class 4 and Class 5 evaluations described in this Tool. There are nine chapters, with several written by internationally known and esteemed evaluators.
Usability Evaluation of Online Learning Programs
Claude Ghaoui. Hershey, PA: Information Science Publishing, 2003.
This book describes evaluations undertaken during prototype development of computer learning systems as well as those that correspond to Class 1, 2, 3, and 4, as described in this Tool. A few of the articles are theoretical, but most describe the evaluation of a specific ICT Intervention. The authors are predominantly European.