Evaluation Implementation – 1.05 Conduct Pilot Tests
To paraphrase the poet Robert Burns, “The best laid plans often go awry.” This is certainly true of evaluation. Pilot testing provides a crucial link between evaluation planning and evaluation implementation. Without it, your investment in creating a high-quality evaluation plan is more likely to be wasted when those plans go awry. Even with pilot testing, the contingencies of “real life” might still disrupt your evaluation plan’s implementation; however, pilot testing and subsequent plan revision also prepare you to deal with what happens when you are actually conducting your evaluation.
Often, pilot testing is discussed in relation to measurement tools. Testing measures for credibility, accuracy, usefulness, and feasibility is extremely important, yet so is testing other aspects of your evaluation plan with those four criteria in mind. Below, we offer more detail about how to pilot test various plan components and point out some potential pitfalls that pilot testing can help you avoid.
A sample may be composed of people, documents, or other units. The following guidance on pilot testing measures is offered by the Resource Center of the Corporation for National and Community Service and is useful for samples composed of people, using survey or interview instruments:
Pilot testing provides an opportunity to detect and remedy a wide range of potential problems with an instrument. These problems may include:
- Questions that respondents don’t understand
- Ambiguous questions
- Questions that combine two or more issues in a single question (double-barreled questions)
- Questions that make respondents uncomfortable
Pilot testing can also help programs identify ways to improve how an instrument is administered. For example, if respondents show fatigue while completing the instrument, then the program should look for ways to shorten the instrument. If respondents are confused about how to return the completed instrument, then the program needs to clarify instructions and simplify this process.
__________________________________________________
Pilot Test Steps
The following guidelines can be used to conduct a simple pilot test of an instrument.
1. Find at least 4 or 5 people from the same group of people from whom you will actually gather data (the target population).
2. Arrange for these people to complete the instrument under conditions that match as closely as possible the actual conditions under which the instruments will be administered when you collect performance measurement data for your program. Consider the time of day, the location, and the method. If it is a phone interview, then conduct the pilot test over the phone. If it is a mail survey, then make sure the pilot test is completed via the mail. Whenever possible, record the time it takes for respondents to complete the instrument so that you can inform the data collectors of the approximate time needed for respondents to complete the instrument.
3. After each respondent completes the instrument, take some time with the respondent to discuss his or her experience. The following are some questions you might want to ask.
- How long did it take you to complete the instrument?
- What do you think this instrument is about?
- For what purposes do you think this information will be used?
- What problems, if any, did you have completing the instrument?
- Are the directions clear?
- Are the instructions clear on what to do with the instrument after completing it?
- Are there any words or language in the instrument that people might not understand?
- Did you find any of the questions to be unnecessary or too sensitive?
- Were any questions difficult to answer?
- For specific questions:
- What do you think this question is asking?
- How would you phrase this question in your own words?
- Did the answer choices allow you to answer as you intended?
- Is there anything you would change about the instrument?
4. Collect the completed instruments. Read the responses. Did respondents interpret the questions the way you intended?
5. Analyze the data and present the results of the pilot test as you would when you actually administer the instrument. Will the results give you the information you need?
6. Share the results of your pilot test with other stakeholders who will be using the data. Does this instrument provide the data they need to answer their questions?
7. Modify your instrument based on the information you have gathered.
8. When reporting performance measurement results, be sure to describe any pilot testing you have done. This gives readers greater confidence in the results you report.
It may be tempting to skip the pilot testing step, but remember that you run the risk of collecting useless data. A pilot test usually reveals ways to improve an instrument. Once data are collected, it will be too late in the data collection cycle to fix problems that were missed because an instrument was not pilot tested.
__________________________________________________
Source: “Developing Performance Measurement Instruments” (2003). Corporation for National and Community Service website. Retrieved 6/19/2015 from http://www.nationalservice.gov/sites/default/files/resource/npm/developing-performance-measurement-instruments.pdf
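Steps 4 and 5 of the guidance above — reading completed instruments and analyzing the pilot data — can be sketched with a small script. The following is a minimal, hypothetical Python example that summarizes completion times and flags frequently skipped questions; the field names and threshold are illustrative assumptions, not part of the original guidance:

```python
# Hypothetical pilot-test tally: each dict is one respondent's completed
# instrument; None marks a skipped question. Field names are illustrative.
pilot_responses = [
    {"minutes_to_complete": 12, "q1": "agree", "q2": "disagree", "q3": None},
    {"minutes_to_complete": 18, "q1": "agree", "q2": None, "q3": None},
    {"minutes_to_complete": 15, "q1": "neutral", "q2": "agree", "q3": "agree"},
]

def summarize_pilot(responses):
    """Return average completion time and the skip rate for each question."""
    n = len(responses)
    avg_minutes = sum(r["minutes_to_complete"] for r in responses) / n
    questions = [k for k in responses[0] if k != "minutes_to_complete"]
    skip_rates = {q: sum(r[q] is None for r in responses) / n for q in questions}
    return avg_minutes, skip_rates

avg_minutes, skip_rates = summarize_pilot(pilot_responses)
print(f"Average completion time: {avg_minutes:.1f} minutes")
for q, rate in skip_rates.items():
    if rate >= 0.5:  # a frequently skipped question may be unclear or too sensitive
        print(f"{q}: skipped by {rate:.0%} of pilot respondents -- review wording")
```

A summary like this does not replace the debrief conversations in Step 3; it simply makes patterns across respondents (fatigue, skipped items) easier to spot.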
The guidance offered above refers primarily to pilot testing your measurement strategy and your measures, yet it also touches on testing your plans for sampling (Step 1), analysis (Step 5), and even reporting (Step 6). In cases where a full pilot test isn’t feasible (e.g., sampling), build enough time into your overall evaluation timeline to adjust your plan. If things aren’t working out (e.g., you expected to randomly sample 25 percent of your population of interest, yet after your initial contact you are getting only a 5 percent response rate), allowing ample time to resample means your evaluation will not be completely derailed.

One more important component to pilot test, which is not mentioned above, is your data management plan. Establish a clear plan for how you will manage data once they are collected, and test that plan while you test your measures. For example, set up a spreadsheet with the necessary column and row headings; as you or others enter data during the pilot test of the measures, reflect on how well this data management plan will work once the actual evaluation is underway.
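The spreadsheet suggestion above can be exercised in code. Here is a minimal sketch in Python that creates a data-entry sheet as a CSV file with planned column headings and appends pilot records to it; the file name and headings are hypothetical, stand-ins for whatever fields your own evaluation plan defines:

```python
import csv

# Illustrative column headings for the data-entry sheet; your actual
# evaluation plan would define its own fields.
COLUMNS = ["respondent_id", "date_collected", "q1", "q2", "q3", "notes"]

def create_entry_sheet(path, columns=COLUMNS):
    """Create the data-entry spreadsheet with its header row."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(columns)

def enter_record(path, record, columns=COLUMNS):
    """Append one respondent's data; missing fields are left blank, so gaps
    in the data management plan surface during the pilot, not afterward."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([record.get(c, "") for c in columns])

create_entry_sheet("pilot_data.csv")
enter_record("pilot_data.csv", {"respondent_id": 1,
                                "date_collected": "2015-06-01",
                                "q1": "agree"})
```

Entering the pilot responses through a structure like this — rather than ad hoc — is what reveals whether the planned columns actually accommodate the data you will collect.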
Returning to our evaluation planning mantra, “Well begun is half done,” we cannot overemphasize the importance of pilot testing in ensuring that your careful evaluation planning leads to a high-quality evaluation.