Connoisseurship Evaluation (Eisner) involves a connoisseur, or expert in a field
of study, estimating the worth of a new innovation. This approach carries obvious
biases and threats to validity.
Goals-Based Evaluation (Tyler, 1949) describes whether or not students have met their
goals, with the results informing how to handle a new instructional
strategy (i.e., revise, adopt, or reject it). One weakness is that the evaluator
may overlook unexpected outcomes or benefits of instruction beyond the
original goals.
Goal-Free Evaluation (Scriven) supplements the inherent weaknesses of a goals-oriented
approach by providing an unbiased perspective on ongoing events.
Adversary Evaluation focuses on comparing or describing all sides of an
innovation, both positive and negative, and is analogous to the defense and
prosecution in a courtroom.
Kirkpatrick's 4-Level Model describes student "reactions" to and "learning"
from an innovation, as well as "behavior" changes in real job performance
and broader organizational "results." (See Instructional
Systems Evaluation, Clark, 1997).
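As a rough illustration (not drawn from any of the cited sources), Kirkpatrick's four levels can be sketched as a simple structure for tagging evaluation evidence; the class, function, and field names here are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    """Kirkpatrick's four levels of training evaluation."""
    REACTION = 1   # how learners felt about the innovation
    LEARNING = 2   # knowledge or skills gained
    BEHAVIOR = 3   # changes in real job performance
    RESULTS = 4    # broader organizational outcomes


@dataclass
class Evidence:
    """One piece of evaluation evidence, tagged with its level."""
    level: Level
    description: str


def by_level(items):
    """Group evidence descriptions by Kirkpatrick level, ordered 1-4."""
    grouped = {lvl: [] for lvl in Level}
    for item in items:
        grouped[item.level].append(item.description)
    return grouped


evidence = [
    Evidence(Level.REACTION, "post-course satisfaction survey"),
    Evidence(Level.BEHAVIOR, "manager observation of on-the-job use"),
]
print(by_level(evidence)[Level.REACTION])  # ['post-course satisfaction survey']
```

Grouping evidence this way makes gaps visible at a glance: a plan with entries only at the "reaction" level, for example, says little about job performance or organizational results.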
Situated Evaluation describes the characteristics of varying contexts that
cause innovations to fail or succeed differently. Proponents of situated
evaluation argue that educational innovations are situated within
their context of use. (See Situated evaluation for cooperative systems, Twidale et al., 1994).
Stufflebeam's CIPP Model describes the "context" in which an innovation occurs,
the "inputs" of the innovation, the formative "processes" occurring,
and the summative "products" or outcomes. (See A
Design for Evaluation, Nova).