- Determining criteria of merit from needs assessment. The criteria of merit for an evaluand should be grounded in its capacity to meet needs. Although an evaluator can use the results of a needs assessment conducted by a program developer, sometimes he/she should conduct an independent needs analysis. To avoid bias, Scriven advises evaluators to conduct "goal-free" evaluation, formulating questions by ignoring the program goals and looking for all possible effects the evaluand could have.
- Setting comparative evaluation standards. Evaluators should create a set of standards against which to assess program performance. Such standards are used for comparison, either with a set level of performance or with alternative programs. Scriven prefers the latter comparison, since he believes that evaluation usually involves choosing among alternatives.
- Assessing program performance. An evaluator needs to answer both evaluative and non-evaluative questions. Evaluative questions focus on the effects of the program and should be given top priority. The evaluator should have the skills to collect and analyze both experimental and non-experimental data.
- Offering a final evaluative judgment. An evaluator should synthesize his/her findings into a final report and offer a summative judgment.

Strengths and Weaknesses of Scriven's Position: Scriven differentiates evaluators from researchers or social scientists by emphasizing that value judgment is an integral part of an evaluator's role, and he grounds that role
in the logic of evaluation. His "goal-free" evaluation allows evaluators to identify possible side effects of the evaluand and address the concerns of underrepresented stakeholders. However, while it gives evaluators greater authority than other stakeholders in making value judgments, Scriven fails to provide a way to eliminate evaluators' personal biases. The metaevaluation Scriven proposes is a good attempt, but it remains highly subjective and requires years of experience and expertise before an evaluator can make an unbiased judgment. For a novice evaluator, deciding whose needs should be considered and which criterion of merit should take priority can still be quite arbitrary. Moreover, a completely goal-free evaluation is highly infeasible when an evaluator is hired by clients and has an obligation to answer their specific inquiries.

2.2 Campbell

Campbell believes that evaluators should play the role of methodologist during program evaluation (Shadish, 1991, p. 141). Evaluators should use scientific methodologies to design evaluative research that eliminates biases and establishes a causal inference about a program and its hypothesized effects. The methodologist role Campbell advocates requires evaluators to employ a strong research design, such as a randomized experiment or a good quasi-experiment, to determine the causal effectiveness of the program (Shadish, 1991, p. 129). An evaluator should also distance him/herself from program stakeholders and work independently to find out the facts about the program. As for the dissemination of evaluation findings, an evaluator should "write honest reports for peers even if they cannot do so for funders or the public" (Shadish, 1991, p. 162). Last but not least, it is also the obligation of evaluators to play an active role in scrutinizing, replicating, and debating evaluation results. Campbell's emphasis on methods of measuring program outcomes makes him less concerned with assigning value to the program or facilitating the use of evaluation. As a result, he believes an evaluator is not responsible for the following:
- An evaluator is not obligated to assign value to the program being evaluated. Valuing of evaluation results should be left to the political process, not to researchers (Shadish, 1991, p. 160).
- An evaluator shouldn't actively promote the use of her evaluation results, "since this detracts from the credibility of the more factlike findings" (Shadish, 1991, p. 162).
- It is up to policy makers and stakeholders to decide how to interpret, disseminate, and use the evaluation results.
- An evaluator is not obligated to generate a different or modified program worth testing. Her job is simply to test the efficacy of existing programs.
- An evaluator should avoid evaluating institutions, social organizations, or persons because of the almost inevitable pressure toward corruption (Campbell, 1984, p. 41).

Strengths and Weaknesses of Campbell's Position: The methodologist role Campbell assigns to evaluators is echoed in the Department of Education's call for "scientifically based evaluation." The methodologist role as Campbell defines it focuses on the internal validity of causal inferences and is less concerned with prescribing values or with the utility of evaluation findings; it is therefore well suited to external evaluations of program outcomes. Such a role will also greatly enhance the scientific standing of evaluation as a profession. Nevertheless, the weaknesses of this role are also quite obvious. First of all, it is hard to distinguish evaluation from other social science research if one sees an evaluator merely as a research methodologist; clearly, not every social scientist can do a good evaluation. Secondly, a rigorous experimental design is preferable but not always feasible. The cost and time required for a randomized controlled experiment, as well as its intrusion into the program, might result in fewer and fewer evaluations being conducted because of reluctance from program administrators. Last but not least, the methodologist role restricts evaluators to studying only program outcomes while missing other key information, such as how the program is implemented or which elements of the program do and do not work. As a result, an evaluator cannot give advice about how to improve the program or adapt it to other contexts.

2.3 Stake

Stake believes an evaluator should play a facilitator role during the evaluation. The evaluator should assist different stakeholders to "discover ideas, answers, and solutions within their own mind" by conducting
responsive evaluation (Stake & Trumbull, 1982, p. 1). According to Stake, the responsibilities of an evaluator include:
- Identifying the stakeholders for whom the evaluation will be used. The evaluator should have a good sense of whom he is working for and what their concerns are (Stake, 1975, as cited in Shadish, 1991, p. 273). Minority stakeholder groups should also be included to ensure justice and fairness.
- Spending more time observing the program and providing accurate portrayals of it through case studies. Because case studies reflect the complexities of reality, they help readers form their own opinions and judgments about the case, and they can be "useful in theory building" (Stake, 1978, as cited in Shadish, 1991, p. 289).
- Conducting responsive evaluation, which allows evaluation questions and methods to emerge from observing the program. In this approach, evaluators orient the evaluation more directly to program activities than to program goals and respond promptly to audience requests for information.
- Presenting his/her evaluation findings in the "natural ways in which people assimilate information and arrive at understandings" so that the writing can reach maximal comprehensibility (Stake, 1980, p. 83).

Stake doesn't believe an evaluator should make a summative value judgment, since there is "no single true value" for all the stakeholders of a program (Stake, 1975, as cited in Shadish, 1991, p. 274). As a result, evaluators shouldn't blindly accept state and federal standards and impose treatments on local programs, since such standards are not pluralistic and might not be in the best interest of local people (Shadish, 1991, p. 279). Stake also believes that the responsibility for synthesizing and interpreting case studies lies with readers rather than evaluators, and that it is up to the readers to resolve any conflicting arguments (Shadish, 1991, p. 293).

Strengths and Weaknesses of Stake's Position: The facilitator role for an evaluator, as suggested by Stake, has two major strengths. First, it reflects a shift of interest among evaluators from giving a summative judgment, whether a value judgment or an effect judgment, to generating useful information that can be used to improve the program. Second, it justifies new ways to conduct an evaluation (e.g., responsive evaluation, case study) and to report its findings (e.g., narrative portrayal). However, Stake fails to consider clients' expectations about the proper role for an evaluator. Will clients accept the case study as the sole approach to investigation? Will clients allow evaluators to start evaluations without preordinate questions? Is it appropriate for an evaluator to completely ignore state or federal standards when evaluating local programs?
All these doubts about the feasibility and validity of case studies and responsive evaluation also undermine the social acceptance of the evaluator role Stake proposes.

2.4 Weiss

Weiss emphasizes the evaluator's special role in promoting the use of his/her evaluation results, especially in the policy-making process. She is frustrated that "evaluation results have generally not exerted significant influence on program decisions," and she argues that evaluation should start out with use in mind and that evaluators shouldn't leave the use of evaluation to the natural processes of dissemination and application (Weiss, 1972, as cited in Shadish, 1991, pp. 182-183). Weiss claims that evaluation "should be continuing education for program managers, planners and policy makers" (Weiss, 1988, p. 18). As a result, she seems to see the evaluator more as an educator, who conducts evaluation not to give an explicit solution to a social problem but to provide useful information to its potential users, policy makers in particular. She urges evaluators to look beyond the instrumental use of evaluation results and to conduct "enlightenment" research that "provides evidence that can be used by men and women of judgment in their efforts to research solutions" (Weiss, 1978, p. 76), so as to maximize the utility of evaluation results. By doing evaluation this way, an evaluator should