- Draw policy implications from evaluation research by compiling separate summaries for multiple stakeholders, providing each with the knowledge and information that best serves their interests. Make recommendations for future programs based on the evaluation results. (Shadish, 1991, pp. 205-206)
Strength and Weakness of Weiss’ Position: Weiss further differentiates the role of the evaluator from the role of a researcher by addressing the complex political context that besets social programs. She warns evaluators against political naivety and urges them to do evaluation that can be used in policy-making, in the form of “enlightenment” rather than “instrumental use”. The educator role she assigns to evaluators reflects her pragmatic view of evaluation and suggests a new mode in which evaluation can be used. However, the role of evaluator proposed by Weiss has some intrinsic flaws. First, such a role ironically fails to consider the variety of different contexts.
For instance, the decision of a state or federal government to hire an educator is often made not for the purpose of “being educated”, but to get concrete data regarding the program’s effect. The proposal to conduct “scientifically based evaluation” made by the Department of Education is a good example. As a result, an evaluator who uses case studies to describe a program’s inputs, implementation, and long-term effects might not be appreciated by policy-makers in this context. Secondly, her emphasis on providing information to policy-makers poses the danger of evaluators becoming the servants of that particular stakeholder group. What should the role of an evaluator be when the interests of different stakeholder groups conflict with each other, and speaking for the underrepresented group might limit the use of evaluation results in the policy-making process?
2.5 Rossi
Rossi did not give an explicit definition of the role of evaluators. Rather, the roles an evaluator plays may vary according to the different stages of evaluation.
For example, in the program conceptualization stage, an evaluator sometimes takes the role of a social scientist, incorporating social science theories into the development of an intervention model. (Shadish, 1991, pp. 389-391) In the program implementation stage, an evaluator works as a program administrator, making sure the program is implemented as expected so as to “rule out faulty implementation as a culprit in poor program outcome”. (Shadish, 1991, p. 381) In addition, the operational data collected this way can also be useful for the future dissemination of the program. When determining the program’s utility, an evaluator takes the roles of a methodologist and a project manager, selecting and applying appropriate research methods to assess the impact of the program intervention as well as conducting efficiency analyses of the program, such as cost-benefit and cost-effectiveness analysis. (Rossi & Freeman, 1985, pp. 327-328)
The different types of social programs also affect the roles an evaluator plays during the evaluation. Rossi categorized social programs into three types: innovative, established, and fine-tuning programs. When evaluating innovative programs, for instance, much emphasis is given to the conceptualization of the program. (Shadish, 1991, p. 404) An evaluator’s responsibilities will include setting program objectives and constructing an impact model linking program objectives and activities, based not only on stakeholders’ views but also on the results of needs assessment and social science theories. However, conceptualization is rarely the focus of evaluation for established programs, since their conceptual frameworks already exist and are less likely to change. (Shadish, 1991, p. 404) Instead, an evaluator takes a more summative approach, and much of his/her responsibility falls to judging program accountability. The role of an evaluator is less summative in fine-tuning programs, where the emphasis is on identifying needs for change and making formative modifications.
Rossi’s attempt to integrate the works of various theorists into one theoretical framework also helps to shape his position on the issue of the proper role for an evaluator. Rossi appreciates the strengths of the different roles proposed by other theorists and assigns each role to the contexts that best fit it. The “good enough rule” of doing evaluation proposed by Rossi also frees evaluators from the everlasting debates of internal validity vs. external validity, quantitative methods vs. qualitative methods, and descriptive value vs. prescriptive value. Instead, it allows evaluators to choose the best possible design by assessing all kinds of “trade-offs”.
Strength and Weakness of Rossi’s Position: In my opinion, Rossi’s stance on the role of the evaluator is closest to reality, since evaluation is by nature highly context-based. The nature of different social programs, clients’ expectations, evaluators’ backgrounds and expertise, employment status, available resources and constraints, as well as the influence of culture and politics, can all result in quite different approaches to doing an evaluation. Even within the same evaluation, activities in different stages will require different competencies from an evaluator. As a result, there should not be only one proper role for an evaluator, and Rossi’s attempt to integrate different roles into one theoretical framework is reasonable. However, Rossi is less explicit in linking a specific context to a specific role, and in linking that role to the set of responsibilities it entails.
International Education Studies Vol. 3, No. 2; May 2010
2.6 A Comparative Analysis of Different Theorists’ Positions
Looking beyond the different metaphors theorists use to express their opinions on the role of the evaluator, a comparative analysis is done in this section to dissect such roles into the specific behaviors an evaluator should perform in different phases of an evaluation. Those phases are listed in the table below as: program selection, criteria selection, data collection, evaluation findings, and evaluation use and dissemination. (See Table 1) As the table shows, the different roles an evaluator takes can result in quite different approaches to doing evaluation in certain phases while still sharing considerable similarity in the others.
All the citations in the following table are from the book Foundations of Program Evaluation. (Shadish et al., 1991)
Insert Table 1 Here
3. Proposed Resolution of the Fundamental Issue
The current debate among evaluation theorists regarding the proper roles for an evaluator reflects their different stances on other fundamental issues, such as the value of evaluation (descriptive vs. prescriptive), the methods of evaluation (quantitative vs. qualitative), the use of evaluation (instrumental vs. enlightenment), and the purpose of evaluation (summative vs. formative). My own resolution of the issue of the evaluator’s role is also based on my understanding of those fundamental issues.
Value: An evaluator should prioritize the values of different stakeholder groups when selecting the criteria of merit for evaluands. Not only does the descriptive values approach reflect the concept of a plural democracy, it also orients the evaluation questions toward the concerns of the stakeholders, thus making it more likely that the evaluation findings will be used by them.
By considering different opinions regarding the values of the program, an evaluator can also reduce personal bias, which is hard to avoid when the evaluator is the one who assigns values. However, in cases where different stakeholders cannot reach an agreement on the issue of value, an evaluator should take measures to form his/her own judgment about the program’s value. For example, the evaluator can conduct a needs assessment to identify the primary stakeholder group that will be affected most by the program and prioritize their values when selecting the criteria of merit for the evaluation.
Methods: An evaluator should be familiar with both quantitative and qualitative methods and accept both as available methods for conducting evaluation. However, an evaluator should meet with clients before the evaluation and get their opinions on the suggested method. If the clients have a strong preference for quantitative data and clear charts in the final report, then a case study is not an ideal method. If the clients want the program to suffer minimal intrusion from the evaluation, then experimental or quasi-experimental designs should not be considered as first options. This is not to say that evaluators should immediately discard the methods they consider best for the evaluation if those methods are not accepted by clients; rather, it is essential for evaluators to convince their clients of the proposed methods and reach an agreement before applying any method.
Use: An evaluator should emphasize the instrumental use of his/her evaluation findings and actively promote the dissemination of the evaluation results.
It is hard to imagine that an evaluation would be initiated with no intent to learn the effect of a social program, especially when such a program costs a great deal of taxpayers’ money and affects a large population. In my opinion, ignoring the instrumental use of evaluation is highly irresponsible and has a detrimental impact on the profession of evaluation: why should I hire someone to evaluate a program if he/she cannot tell me whether the program is working or not? The enlightenment use of evaluation sounds promising but has its limitations in actual practice. First, it is hard to determine the scope of data collection. With the potential users of the evaluation findings in obscurity, it is hard to know what specific data will be useful to them. Second, it is hard to enlighten people without telling them what works in the program and what does not. We have to make a judgment about success and failure if we want others to learn from either. Last but not least, clients’ expectations and time constraints often make conducting evaluation for enlightenment impossible. It takes time to measure a program’s inputs, implementation, outcomes, long-term impacts, changes in people’s attitudes, etc., but most clients do not have that much time for an evaluation.
Purpose: I prefer a more summative role for an evaluator, for the following reasons. First of all, an evaluator is hired for his/her expertise in making rational judgments. If clients want only vicarious experiences or a set of quantitative data, they can simply hire writers or statisticians to do the job. Second, even when the purpose is to make a program better, there is a difference between an evaluator and a program manager. At some point, an evaluator will give a summative opinion about what works and what does not in the program.
An evaluator can make recommendations about how to improve the program, but ultimately it is up to the program manager to make that decision based on his/her knowledge of the program, such as the available resources or people’s commitment. Lastly, being too involved in improving a program might hinder the objective and rational