[The SEP is being updated to reflect what we have learned from capacity building with programs globally. We apologize for any inconvenience this month (Sept 2020) as updates are made and the flow of the Protocol is interrupted.]
----------------------------------------
This Guide to the Systems Evaluation Protocol is intended to serve several purposes. First and foremost, it is a step-by-step guide for both program and evaluation professionals who wish to integrate a systems evaluation perspective into their evaluation work in order to enhance the quality and appropriateness of program evaluations. The information in this Guide is designed to be comprehensive enough for the non-professional evaluator to use the materials, and in-depth enough to serve as a useful reference for the professional who is new to the Systems Evaluation Protocol (frequently referred to simply as our "Protocol").
This Protocol was created in the context of education and outreach programs generally, and specifically for programs in Science, Technology, Engineering and Mathematics (STEM) education sponsored by the National Science Foundation (NSF) and for programs sponsored by Cornell Cooperative Extension. While many of the examples relate to STEM and Extension education and outreach contexts, we have designed the Protocol to be generally applicable to any type of program evaluation context, and we hope that a broader audience will find it useful.
We start from the assumption that the basic unit of interest in the use of this Protocol is a "program." The term "program" might be defined generally as "a series of activities conducted with the intention of producing some effect (outcomes) on participants." But even though this is the focal unit, it is important to recognize that, from a systems perspective, a program is always a part of a larger whole and a whole to its subparts. That is, programs are often parts of collections of similar programs (or program areas) that are parts of organizations, which are in turn parts of larger networks and systems. And programs have parts of their own, consisting of activities, people (both deliverers and participants), and so on. This Protocol continually incorporates these multiple system levels into the focus on a specific program.
The Introduction below goes beyond simply laying the groundwork for the steps of this Protocol; it is here that we address a second goal: providing an overview of the "Systems Perspective" that shapes our approach to evaluation. The Systems Evaluation Protocol (SEP) has its foundations in the literatures of evaluation theory, systems theory, and evolutionary epistemology.
For readers who are interested in learning more about our systems approach to evaluation, the Introduction should provide some insight into the theoretical underpinnings of our Protocol. Throughout the Protocol there are green sidebars that will be of interest to systems evaluation theorists. These are included for their supplementary value and will enhance understanding of the foundations of our approach; however, the reader should be able to use the Protocol even without this material.
At the same time, we hope that practitioners who simply want to dive in and work through the Protocol are able to do just that by beginning with Section II: The Systems Evaluation Protocol.
The systems perspective that shapes the Protocol highlights the value of having multiple voices and perspectives included in the evaluation process. Accordingly, this Guide is written with the assumption that the steps will (usually) be completed by a working group made up of internal program staff and possibly some external stakeholders who are close to the program. The working group may be large or small, balancing the risk of being unwieldy against the benefits of multiple perspectives.
It is important that a lead person be designated to guide the process. This could be someone hired by the organization as an external evaluator, an internal staff member assigned to the task, or someone selected by the working group. We will refer to this person as the "Evaluation Champion." This term refers to the person in this role, rather than to any specific professional title or qualification. Because the Protocol consciously adopts a systems point of view, it is not sufficient to think of a program only as an isolated entity. Accordingly, the Evaluation Champion should be thought of not only as a facilitator of the Protocol but as a driving force behind the creation of an evaluation culture in the participating organization, a culture that provides a key system context for programs.
The appendices offer sample materials and worksheets for many of the steps in the Protocol. Throughout the Guide there are red inset boxes describing activities that could be used to guide a working group through a particular step in the Protocol. These activities are optional; they are meant to provide suggestions or ideas about the process and can be adapted as needed.
For more information about evaluation methodologies, terms, and other background information, the reader may wish to refer to the Research Methods Knowledge Base, a comprehensive web-based textbook that addresses all of the topics in a typical introductory undergraduate or graduate course in social research methods. The Research Methods Knowledge Base can be found online at: www.socialresearchmethods.net.
Finally, this Guide is meant to be a snapshot of where the development of this approach stands as of early 2012. As a second edition, this Guide has evolved significantly from the original version. Much of the new content was created as a result of implementing the Protocol with several initial cohorts of programs and staff; their experiences and feedback, as well as our own learning, are reflected here. In addition, because we have continued to work with our program partners past the planning phase, this Guide will be supplemented with separate sections that cover evaluation implementation as well as utilization. These sections will be incorporated into a single Guide in the future.
We anticipate that this Protocol will continue to undergo changes as our understanding of system interactions evolves and becomes more refined, and that materials in the Appendix will be adapted to these changes over time. We expect to make our materials available on our website (http://core.human.cornell.edu/research/systems/protocol/index.cfm) between publications of this volume, and we encourage feedback and discussion of our approach to systems evaluation.
For comments or questions please contact:
William M. Trochim
Professor, Policy Analysis & Management
Director, Cornell Office for Research on Evaluation (CORE)
wmt1@cornell.edu
CORE phone: 607-255-0397
Mailing Address:
Policy Analysis and Management
Martha Van Rensselaer Hall
Cornell University
Ithaca, NY 14853
Jennifer Brown Urban
Assistant Professor, Family and Child Studies
Director, Developmental Systems Science and Evaluation Research Lab (DSSERL)
urbanj@mail.montclair.edu
Mailing Address:
Montclair State University
1 Normal Avenue
4144 University Hall (FCST)
Montclair, NJ 07043