Route One Evaluation


Sliding Doors of Evaluation

Sliding Doors is one of my favorite movies. In case you missed this little gem from 1998, the story (or stories) begins as Helen (played by Gwyneth Paltrow) rushes to catch a train. In one version, she makes it; in the other, she does not. From there, we follow Helen on two very different life journeys, all based on whether she made it through the train’s sliding doors…or not.

I love this movie because it highlights how seemingly insignificant daily acts can have huge ramifications…changing our direction or entire path…leading to different results and lives. I also love it because Helen’s transformation in one storyline is symbolized by going from long brown hair to a short blonde crop. In addition to signaling her freer, brighter self, I just adore this hairstyle! And, yes, I had a copycat cut!

Anyway…Evaluation also has sliding doors. Throughout evaluation planning, implementation, and dissemination, we make numerous big and medium and little decisions. We are not even conscious of all the decisions we make. Yet each and every one of them shapes the direction and path of the evaluation…influencing what we learn from it, attitudes toward the evaluation and what is being evaluated, and decisions with ramifications for those involved in and expected to benefit from the program or initiative. This is a tremendous responsibility. Be aware of sliding doors. And watch the movie.


Vigorous Evaluation

I was recently rereading Robert Stake’s chapter on Responsive Evaluation in Evaluation Roots when the word “vigorous” jumped off the page at me…not “rigorous” as we are so accustomed to seeing, but “vigorous.” Stake was discussing a seminal paper, The Countenance of Educational Evaluation, published in 1967 (the year I was born!). He wrote, “It was intended to stretch the minds of evaluators toward more vigorous collection of judgments and standards to indicate the merit and shortcomings of the evaluand.” This work evolved into what we now know as Stake’s Responsive Evaluation approach.

  • Vigorous is defined as “strong, active, robust.”
  • Rigorous is defined as “severely exact or accurate.”

If you are familiar with Responsive Evaluation, you know that it focuses on the experiences and perceptions of program stakeholders, understanding program activities, and understanding the context and culture within which a program lives. A responsive evaluation is organized to investigate key issues and tends to rely on observations and interviews, instead of instruments, indicators, and criteria. The approach has been criticized as putting too much weight on stakeholder accounts. However, as Stake points out, “Part of the program’s description, especially that about the worth of the program, is revealed in how people subjectively perceive what is going on—Placing value on the program is part of experiencing it.”

Let those words sink in. I am still letting them sink in.

So, when considering “vigorous” in the context of evaluation, I think of programs and their evaluations as being alive. Relying on instruments to assign value with precision does not work here. Instead, vigorous evaluation requires deep understanding of program, context, and stakeholders; triangulation; and serious sleuthing. For most (all?) things we evaluate, vigor is not only more important than rigor, but is a prerequisite for it.


Should All Evaluators Be Ethnographers?

I have recently engaged in evaluation conversations that included talk of the importance of understanding what it is you are evaluating—really understanding—the kind of understanding driven by insatiable curiosity about a program, its place and people. You read everything you can find that helps you understand, from websites and program documents to research literature. And you become an ethnographer of sorts. Ethnography is defined as “the study and systematic recording of human cultures.” Ethnographers study, observe, participate, and talk to people to gain deep understanding of a phenomenon of interest. They immerse themselves. As evaluators, regardless of evaluation approach, design, and methods, the more we immerse ourselves for understanding, the better the evaluation will be.

Many years ago, I was leading the evaluation of the NAEP (National Assessment of Educational Progress) State Service Center, which provided training and support for NAEP state coordinators across the US and its jurisdictions. The Center offered 1- or 2-day trainings in Washington, DC throughout the year, as well as an annual multi-day conference on the west coast. I was fortunate to be able to attend the trainings and conferences. My purpose for attending was to collect data: training surveys and an occasional focus group.

What I knew at the time was that the evaluation was better because I attended the trainings and conferences and got to know the NAEP state coordinators, as well as key NAEP and Center staff. This immersion alongside participants and staff increased my understanding of what I was evaluating.

What I know now is that I should have journaled my observations and reflections. Although this would not have been part of official data collection, it would have furthered my understanding. I would have been an evaluator-ethnographer, an important role for all who want to do this work well.


Program Evaluation Distinctions

As I teach a program evaluation course this semester, and as someone who began teaching program evaluation nearly 20 years ago (!), I find myself once again reflecting on how I define program evaluation and differentiate it from related fields and practices. My definitions and distinctions have certainly evolved over the years. In addition to commonly held definitions and distinctions, I have come to advocate that program evaluation requires big-picture, evaluative questions to guide the work; data collection methods and sources chosen to address those evaluation questions; and multiple sources of data. If these criteria are not met, it is not program evaluation. My evolving thoughts are captured in a document posted here. I would love your feedback!


Trust and Applicability of Research Findings: An Organizing Structure

Check out the latest addition to my Social Science Resources page, an organizing structure for addressing and assessing the trust and applicability of research findings. This work began years ago as a handout I created for an introductory research methods course. It has evolved with my thinking as I continue to reflect on what trustworthy and applicable research means. It includes quantitative and qualitative research…bridging the literatures for a holistic approach. And I advocate for researchers to use strategies stereotypically associated with qualitative research to support trust and applicability in quantitative studies. I appreciate any and all feedback that I can incorporate to improve this work. Thank you in advance!


Humanizing Program Goals

I regularly teach a graduate course in educational program design and evaluation. One thing we work on is developing program goals. Understandably, students often default to developing goals similar to those they’ve seen before, which aren’t goals at all but metrics. They articulate goals as increases in standardized test scores or the number of books in the library or attendance at parent nights…They mostly articulate goals as increases in standardized test scores.

I ask them if they chose education as a career because they wanted to increase standardized test scores. Of course not! *eyes roll*

Then we talk about all the great, valuable, powerful reasons they decided to teach and become educational leaders. This is the stuff program goals are made of.

On a recent flight I watched a TED Talk by Tristan Harris, How Better Tech Could Protect Us from Distraction. Tristan discourages us from typical design goals and encourages us to consider deeper human design goals that articulate our values. This resonated with me, capturing what I’ve been pushing with students, colleagues, and clients. Design goals, program goals, policy goals, product goals…rooted in human values.


Evaluation Capacity Building as Superhero

As a program evaluator, I teach, research, and apply analytic methods. I help people identify, understand, and assess problems and solutions. This way of knowing and working is threatened by “problems” and “solutions” that aren’t based on systematic and sound analysis, but are simply sound bites that stir emotion. We need evaluation capacity building more than ever! The more people understand and value analytic methods, the less acceptable fake news, fake problems, fake solutions, and alternative facts will be. It’s a bird…it’s a plane…it’s Evaluation Capacity Building!


Did you know that evaluability assessment aligns well with systems thinking and addressing program complexity?

Although evaluability assessment was introduced in the 1970s as a pre-evaluation activity to determine if a program was ready for outcome evaluation, it has evolved into an approach that can be used at any point in a program’s lifecycle and is no longer exclusively tied to quantitative outcome designs.

Tenets that are central to evaluability assessment are also central to addressing program complexity: stakeholder involvement, multiple perspectives, and feedback.

Mike Trevisan and I will present a session at next week’s American Evaluation Association conference, “Using Evaluability Assessment to Address Program Complexity,” in which we’ll describe some specific strategies that reinforce the centrality of stakeholder involvement, multiple perspectives, and feedback in evaluability assessment work. This will include discussion of case examples where these strategies were used.

We hope to see you there!


Evaluation Approaches: You Don’t Need to Choose Just One

by Bernadette Wright, Guest Blogger

If you’re conducting an evaluation for your thesis or dissertation, I recommend Tamara Walser’s post, “Developing a Framework for a Program Evaluation Thesis or Dissertation,” here on Route One Evaluation. This informative piece highlights the essentials of graduate school writing on evaluation. I’d just like to expand on one thing, based on my evaluation experience.

You don’t have to choose one evaluation approach to follow like a template. Indeed, one way to demonstrate your competency as a researcher is to show your understanding of multiple methods by combining them. Yes: you can develop your own approach to fit your project, as long as you explain what you are doing to conduct the evaluation and how your approach is based on effective practices.

Off-the-Shelf Approaches Don’t Always Fit

People sometimes use different words to describe the same thing. Approach, design, and method may all be used to describe the unique framework that guides all aspects of how you’ll go about conducting your research.

Within that overarching framework, you can choose from a bunch of theoretical models/approaches. You can find many examples in books, articles, and websites such as BetterEvaluation, the Research Methods Knowledge Base, and Free Resources for Program Evaluation and Social Research Methods. Some approaches that evaluators frequently use are case study, mixed methods, qualitative evaluation, field experiment, summative evaluation, and formative evaluation.

When you get into the real world, however, you will find that program managers want answers to many questions at once. To get meaningful information that helps answer all of those questions, you often need to take what you learn from existing approaches and design an approach that fits the specific situation.

Three Ways to Construct Your Own Evaluation Approach

Here are three ways that you can create your own evaluation approach to fit the context and questions at hand.

Using Multiple Approaches

For my own dissertation, I followed the excellent advice of my advisor, Dr. Marv Mandell, for studying a dialogue/action circles program on racism in the Baltimore Region. Instead of specifying a particular “approach,” I described what I would do to answer the questions and how what I was doing was based on effective evaluation practices. I guess you could call it a kind of “two-phase approach.”

  • In the first phase, I used what you could call a qualitative approach. I conducted individual and focus group interviews with stakeholders to identify evaluable goals that were relevant to stakeholders. This was based on general practices for conducting an effective evaluation, which recognize that understanding the program you’re evaluating is an important step. (For more on this, see my recent article on Understanding Your Program.)
  • In the next phase, I examined the impacts of the circles in terms of the goals identified in phase one. For that phase, I used what you could call a mix of two research methods. I 1) analyzed qualitative data from participant feedback forms and 2) conducted a case study of two circles. The case study involved a mix of unobtrusive observation and interviews with participants.

It was a lot of work, but I believe I got more useful results—and learned a lot more—than if I’d just followed one off-the-shelf approach.

For other evaluations that I’ve conducted and managed for customers since, we’ve used different approaches and methods and done things in a different order, because each evaluation should fit the timeline, scope, and focus of its unique situation.

Using Parts of Different Approaches

As with most evaluations that I’ve been involved with, the evaluation that I conducted for my dissertation used parts of what evaluators often call “impact evaluation,” “process evaluation,” and “formative evaluation.” I examined what impacts the circles had, the goals and the most helpful and least helpful parts of the program, and stakeholders’ recommendations for future changes to the program.

Creating a Customized Evaluation Approach

It is even possible to create a completely new approach. For his dissertation, my business partner, Steve Wallis, ended up inventing a new approach to evaluating and integrating theories, which led to the technique of Integrative Propositional Analysis. The technique builds on related streams of research across philosophy, studies on conceptual systems and systems thinking, and the technique of Integrative Complexity.

The same kinds of innovation are certainly possible for evaluating programs and policies.

Devise Your Own Evaluation Approach

Design a customized approach for the specific situation of your study. Remember, you are the expert on your specific study, and designing an approach that fits is what experts do. When you use a custom-fit approach, you’re more likely to get evaluation results that are highly useful for benefiting the program and the people it serves and for filling knowledge gaps in your field.

Bernadette Wright, PhD is Director of Research and Evaluation at Meaningful Evidence, LLC, where she helps non-profit organizations to get the meaningful and reliable research they need to increase their program results and make a bigger difference.


What do you want to know from program evaluation?

Program evaluation has evolved to serve many purposes. A distinction between basic purposes is well captured by the concepts of allocative and operational efficiency described by Donaldson and Gerard in their 2005 book on healthcare economics.

My use of their work in program evaluation boils down to two questions:

  • Is this worthwhile to do?
  • How can results be maximized?

You can learn a lot from program evaluation during program development and throughout implementation by considering these two questions.