Many modes of data collection can be used in evaluations of ICT Interventions. The most likely ones are the following.
- Records: School or community center records that might be reviewed include staff employment records, procurement records, learner and instructor attendance records, repair records, and learners' files.
- Journals/diaries: Instructors, learners, or graduates may be asked to keep journals or diaries of their activities, thoughts, or feelings. The evaluators would then review this information.
- E-mail archives: E-mail messages can be archived and then periodically reviewed by the evaluators.
- Computer Web server logs: During Web-based instruction, Web server logs can record which pages learners visit, how long they stay on each page, and whether they return. They can do so for individual learners, for the entire group of learners using the intervention, and for the entire control group. (A minimal log-summary sketch follows this list.)
- Surveys: Surveys are a relatively efficient way of collecting information and opinions from large numbers of administrators, instructors, or learners, but they do not allow the evaluator to probe particularly interesting or perplexing responses. They may be distributed and returned by mail or electronically.
- Interviews: Structured interviews are essentially surveys administered by someone who reads the questions aloud, making them well suited for collecting information from individuals who would be unlikely to return a survey and from those with limited literacy skills. Semi-structured interviews, which ask a series of specific questions and also have the interviewer probe some of the answers, permit exploring well-formulated issues while also examining unexpected responses. Unstructured interviews, which are essentially conversations on a few broad topics, are a good way to explore general themes of interest to the evaluators.
- Focus groups: Focus groups bring a small number of people together to discuss sensitive matters in a supportive environment. Done well, they often elicit more revealing information than one-on-one interviews; done poorly, some participants will bias their responses to please the others in the group.
- Observations: Observations guided by protocols or coding systems allow evaluators to document the actual behavior of administrators, instructors, and learners, behavior that, during periods of change, the actors themselves often perceive and self-report with considerable bias.
- Video recordings: Video recordings are a substitute for live observations. Video from learning centers can now be transmitted over the Internet to evaluators located hundreds or thousands of miles away.
- Teacher tests: These are the most common way of assessing learning over short periods of time, but teachers vary considerably in the tests they construct and how they grade them.
- Embedded quizzes in computerized instruction: These allow quizzing learners at the optimum points during instruction. They can be scored automatically, providing immediate feedback to the learners and a detailed record for the instructor. Examining which items learners miss most often helps the ICT-Intervention developers and instructors identify where the learning system needs improvement, but embedded quizzes cannot be used for questions that require written responses. (A scoring sketch follows this list.)
- National or standardized tests: These tests are commonly used to assess academic knowledge and skills after a year or more of instruction. They focus on knowledge and skills considered widely important throughout a country, and thus will not cover new objectives that may be targeted by an intervention. "Normed" tests are designed to rank order people according to given capabilities. "Criterion-based" tests are designed to determine whether a given person has mastered a certain body of knowledge or skills.
- Psychometric affective instruments: These instruments measure values, attitudes, and predispositions. They are developed through sophisticated procedures similar to those used in developing standardized achievement tests.
- Performance assessments: These judge complex skills by having learners demonstrate their capabilities in real-world or simulated real-world situations. For instance, to assess students' ability to design and conduct scientific experiments, a learner might be asked to design and run an experiment testing a given hypothesis with the equipment provided at a workbench.
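For the Web server logs mentioned above, the sketch below illustrates one way raw log data might be summarized for an evaluation. It is a minimal sketch only: the file name, the CSV columns (timestamp, learner_id, url), and the idea of an exported CSV are assumptions, a real evaluation would parse whatever format the particular Web server produces, and time on page is only approximated from the gaps between successive requests.

```python
# Minimal sketch: summarizing Web server logs for an evaluation.
# Assumes a hypothetical CSV export with columns: timestamp, learner_id, url.
import csv
from collections import defaultdict
from datetime import datetime

def summarize_log(path):
    """Tally, per learner, how often each page was visited and roughly how long was spent on it."""
    visits = defaultdict(lambda: defaultdict(int))          # learner_id -> url -> visit count
    time_on_page = defaultdict(lambda: defaultdict(float))  # learner_id -> url -> seconds (approximate)
    last_seen = {}                                          # learner_id -> (url, timestamp) of previous request

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            learner, url = row["learner_id"], row["url"]
            ts = datetime.fromisoformat(row["timestamp"])
            visits[learner][url] += 1
            # Approximate time on the previous page as the gap until this request.
            if learner in last_seen:
                prev_url, prev_ts = last_seen[learner]
                time_on_page[learner][prev_url] += (ts - prev_ts).total_seconds()
            last_seen[learner] = (url, ts)
    return visits, time_on_page

if __name__ == "__main__":
    visits, time_on_page = summarize_log("weblog.csv")   # hypothetical file name
    for learner, pages in visits.items():
        revisited = [url for url, count in pages.items() if count > 1]
        print(learner, "pages visited:", len(pages), "pages revisited:", revisited)
```

The same summaries can be aggregated across all learners in the intervention group and in the control group for comparison.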
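For the embedded quizzes mentioned above, the following sketch shows how automatic scoring and an item-difficulty tally might work. The item identifiers, answer key, and sample responses are purely illustrative assumptions, not part of any particular learning system.

```python
# Minimal sketch: automatic scoring of an embedded multiple-choice quiz and a
# tally of the items learners miss most often. All data shown are illustrative.
from collections import Counter

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}   # hypothetical answer key

def score_attempt(responses):
    """Return (score, list of missed item IDs) for one learner's responses."""
    missed = [item for item, correct in ANSWER_KEY.items() if responses.get(item) != correct]
    return len(ANSWER_KEY) - len(missed), missed

def most_missed(all_responses):
    """Count how many learners missed each item, most-missed first."""
    counts = Counter()
    for responses in all_responses:
        _, missed = score_attempt(responses)
        counts.update(missed)
    return counts.most_common()

# Example: two learners' responses; immediate feedback per learner, plus an
# item-difficulty report that points to where the learning system needs improvement.
attempts = [{"q1": "b", "q2": "c", "q3": "a"}, {"q1": "a", "q2": "d", "q3": "a"}]
for responses in attempts:
    print(score_attempt(responses))
print(most_missed(attempts))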
Not all these modes of data collection are likely to be appropriate for all six classes of evaluation that have been described in this Tool. The following table shows the classes for which each mode is most likely to be appropriate.
Table 6.1 Modes of Measurement Appropriate for Different Classes of Evaluation

| Form of Data Collection | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 |
|---|---|---|---|---|---|---|
| Records | Y | Y | | | | |
| Journals/diaries | Y | Y | Y | | Y | |
| E-mail archives | Y | Y | Y | | Y | |
| Computer Web server logs | Y | Y | Y | | | |
| Surveys | Y | Y | Y | | Y | Y |
| Interviews | Y | Y | Y | | Y | Y |
| Focus groups | | Y | Y | | Y | Y |
| Observations | Y | Y | Y | | Y | |
| Video recordings | Y | Y | Y | | Y | |
| Teacher tests | | | | Y | | |
| Embedded quizzes in computerized instruction | | | | Y | | |
| National or standardized tests | | | | Y | | |
| Psychometric affective instruments | | | Y | Y | | |
| Performance assessments | | | | Y | | |