Route One Evaluation


Vigorous Evaluation

I was recently rereading Robert Stake’s chapter on Responsive Evaluation in Evaluation Roots when the word “vigorous” jumped off the page at me…not “rigorous” as we are so accustomed to seeing, but “vigorous.” Stake was discussing a seminal paper, The Countenance of Educational Evaluation, published in 1967 (the year I was born!). He wrote, “It was intended to stretch the minds of evaluators toward more vigorous collection of judgments and standards to indicate the merit and shortcomings of the evaluand.” This work evolved into what we now know as Stake’s Responsive Evaluation approach.

  • Vigorous is defined as “strong, active, robust.”
  • Rigorous is defined as “severely exact or accurate.”

If you are familiar with Responsive Evaluation, you know that it focuses on the experiences and perceptions of program stakeholders, understanding program activities, and understanding the context and culture within which a program lives. A responsive evaluation is organized to investigate key issues and tends to rely on observations and interviews, instead of instruments, indicators, and criteria. The approach has been criticized as putting too much weight on stakeholder accounts. However, as Stake points out, “Part of the program’s description, especially that about the worth of the program, is revealed in how people subjectively perceive what is going on—Placing value on the program is part of experiencing it.”

Let those words sink in. I am still letting them sink in.

So, when considering “vigorous” in the context of evaluation, I think of programs and their evaluations as being alive. Relying on instruments to assign value with precision does not work here. Instead, vigorous evaluation requires deep understanding of program, context, and stakeholders; triangulation; and serious sleuthing. For most (all?) things we evaluate, vigor is not only more important than rigor but is also a prerequisite for it.


Should All Evaluators Be Ethnographers?

I have recently engaged in evaluation conversations that included talk of the importance of understanding what it is you are evaluating—really understanding—the kind of understanding driven by insatiable curiosity about a program, its place and people. You read everything you can find that helps you understand, from websites and program documents to research literature. And you become an ethnographer of sorts. Ethnography is defined as “the study and systematic recording of human cultures.” Ethnographers study, observe, participate, and talk to people to gain deep understanding of a phenomenon of interest. They immerse themselves. As evaluators, regardless of evaluation approach, design, and methods, the more we immerse ourselves for understanding, the better the evaluation will be.

Many years ago, I was leading the evaluation of the NAEP (National Assessment of Educational Progress) State Service Center. The Center provided training and support for NAEP state coordinators across the US and its jurisdictions, offering 1- or 2-day trainings in Washington, DC throughout the year and an annual conference, held on the west coast, that spanned several days. I was fortunate to be able to attend the trainings and conferences. My purpose for attending was to collect data—training surveys and an occasional focus group.

What I knew at the time was that the evaluation was better because I attended the trainings and conferences and got to know the NAEP state coordinators, as well as key NAEP and Center staff. This immersion alongside participants and staff increased my understanding of what I was evaluating.

What I know now is that I should have journaled my observations and reflections. Although this would not have been part of official data collection, it would have furthered my understanding. I would have been an evaluator-ethnographer, an important role for all who want to do this work well.


Program Evaluation Distinctions

As I teach a program evaluation course this semester, and as someone who began teaching program evaluation nearly 20 years ago (!), I find myself once again reflecting on how I define program evaluation and differentiate it from related fields and practices. My definitions and distinctions have certainly evolved over the years. In addition to commonly held definitions and distinctions, I have come to advocate that program evaluation requires big-picture, evaluative questions to guide the work; data collection methods and sources chosen to address those evaluation questions; and multiple sources of data. If these criteria are not met, it is not program evaluation. My evolving thoughts are captured in a document posted here. I would love your feedback!


Trust and Applicability of Research Findings: An Organizing Structure

Check out the latest addition to my Social Science Resources page, an organizing structure for addressing and assessing the trust and applicability of research findings. This work began years ago as a handout I created for an introductory research methods course. It has evolved with my thinking as I continue to reflect on what trustworthy and applicable research means. It spans quantitative and qualitative research…bridging the literatures for a holistic approach. I also advocate for researchers to use strategies typically associated with qualitative research to support trust and applicability in quantitative studies. I appreciate any and all feedback that I can incorporate to improve this work. Thank you in advance!


Humanizing Program Goals

I regularly teach a graduate course in educational program design and evaluation. One thing we work on is developing program goals. Understandably, students often default to developing goals similar to those they’ve seen before—goals that aren’t goals at all, but metrics. They articulate goals as increases in standardized test scores, or the number of books in the library, or attendance at parent nights…They mostly articulate goals as increases in standardized test scores.

I ask them if they chose education as a career because they wanted to increase standardized test scores. Of course not! *eyes roll*

Then we talk about all the great, valuable, powerful reasons they decided to teach and become educational leaders. This is the stuff program goals are made of.

On a recent flight I watched a TED Talk by Tristan Harris, How Better Tech Could Protect Us from Distraction. Tristan discourages us from settling for typical design goals and encourages us to consider deeper, human design goals that articulate our values. This resonated with me, capturing what I’ve been pushing with students, colleagues, and clients: design goals, program goals, policy goals, product goals…all rooted in human values.


Evaluation Capacity Building as Superhero

As a program evaluator, I teach, research, and apply analytic methods. I help people identify, understand, and assess problems and solutions. This way of knowing and working is threatened by “problems” and “solutions” that aren’t based on systematic and sound analysis, but are simply sound bites that stir emotion. We need evaluation capacity building more than ever! The more people understand and value analytic methods, the less acceptable fake news, fake problems, fake solutions, and alternative facts will be. It’s a bird…it’s a plane…it’s Evaluation Capacity Building!


Did you know that evaluability assessment aligns well with systems thinking and addressing program complexity?

Although evaluability assessment was introduced in the 1970s as a pre-evaluation activity to determine if a program was ready for outcome evaluation, it has evolved into an approach that can be used at any point in a program’s lifecycle and is no longer exclusively tied to quantitative outcome designs.

Tenets that are central to evaluability assessment are also central to addressing program complexity: stakeholder involvement, multiple perspectives, and feedback.

Mike Trevisan and I will present a session at next week’s American Evaluation Association conference, “Using Evaluability Assessment to Address Program Complexity,” in which we’ll describe specific strategies that reinforce the centrality of stakeholder involvement, multiple perspectives, and feedback in evaluability assessment work, including discussion of case examples in which these strategies were used.

We hope to see you there!