Route One Evaluation

Think Practice Impact



Program Evaluation Distinctions

As I teach a program evaluation course this semester, and as someone who began teaching program evaluation nearly 20 years ago (!), I find myself once again reflecting on how I define program evaluation and differentiate it from related fields and practices. My definitions and distinctions have certainly evolved over the years. In addition to commonly held definitions and distinctions, I have come to advocate that program evaluation requires big-picture, evaluative questions to guide the work; data collection methods and sources chosen to address those evaluation questions; and multiple sources of data. If these criteria are not met, it is not program evaluation. My evolving thoughts are captured in a document posted here. I would love your feedback!
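For those who like to see the criteria spelled out as a checklist, here is a minimal, purely illustrative sketch of that decision rule in Python; the names are hypothetical and nothing here is part of a formal definition:

    # Illustrative sketch only: encodes the three criteria described above.
    # The names (EvaluationPlan, is_program_evaluation) are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class EvaluationPlan:
        evaluative_questions: list   # big-picture questions that guide the work
        methods_fit_questions: bool  # methods/sources chosen to address those questions
        data_sources: list           # sources of data to be used

    def is_program_evaluation(plan: EvaluationPlan) -> bool:
        """All three criteria must hold; otherwise the work is not program evaluation."""
        return (
            len(plan.evaluative_questions) > 0
            and plan.methods_fit_questions
            and len(plan.data_sources) >= 2  # multiple sources of data
        )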



Trust and Applicability of Research Findings: An Organizing Structure

Check out the latest addition to my Social Science Resources page, an organizing structure for addressing and assessing trust and applicability of research findings. This work began years ago as a handout I created for an introductory research methods course. It has evolved with my thinking as I continue to reflect on what trustworthy and applicable research means. It includes quantitative and qualitative research…bridging the literatures for a holistic approach. And I advocate for researchers to use strategies stereotypically associated with qualitative research to support trust and applicability in quantitative research studies. I appreciate any and all feedback that I can incorporate to improve this work. Thank you in advance!



Humanizing Program Goals

I regularly teach a graduate course in educational program design and evaluation. One thing we work on is developing program goals. Understandably, students often default to developing goals similar to those they’ve seen before, which aren’t goals at all but metrics. They articulate goals as increases in standardized test scores, the number of books in the library, or attendance at parent nights…though mostly as increases in standardized test scores.

I ask them if they chose education as a career because they wanted to increase standardized test scores. Of course not! *eyes roll*

Then we talk about all the great, valuable, powerful reasons they decided to teach and become educational leaders. This is the stuff program goals are made of.

On a recent flight, I watched a TED Talk by Tristan Harris, How Better Tech Could Protect Us from Distraction. Tristan discourages us from settling for typical design goals and encourages us to consider deeper, human design goals that articulate our values. This resonated with me, capturing what I’ve been pushing with students, colleagues, and clients: design goals, program goals, policy goals, product goals…rooted in human values.



Evaluation Capacity Building as Superhero

As a program evaluator, I teach, research, and apply analytic methods. I help people identify, understand, and assess problems and solutions. This way of knowing and working is threatened by “problems” and “solutions” that aren’t based on systematic and sound analysis, but are simply sound bites that stir emotion. We need evaluation capacity building more than ever! The more people understand and value analytic methods, the less acceptable fake news, fake problems, fake solutions, and alternative facts will be. It’s a bird…it’s a plane…it’s Evaluation Capacity Building!



Did you know that evaluability assessment aligns well with systems thinking and addressing program complexity?

Although evaluability assessment was introduced in the 1970s as a pre-evaluation activity to determine if a program was ready for outcome evaluation, it has evolved into an approach that can be used at any point in a program’s lifecycle and is no longer exclusively tied to quantitative outcome designs.

Tenets that are central to evaluability assessment are also central to addressing program complexity: stakeholder involvement, multiple perspectives, and feedback.

Mike Trevisan and I will present a session at next week’s American Evaluation Association conference, “Using Evaluability Assessment to Address Program Complexity,” in which we’ll describe some specific strategies that reinforce the centrality of stakeholder involvement, multiple perspectives, and feedback in evaluability assessment work. This will include discussion of case examples where these strategies were used.

We hope to see you there!



Evaluation Approaches: You Don’t Need to Choose Just One

by Bernadette Wright, Guest Blogger

If you’re conducting an evaluation for your thesis or dissertation, I recommend Tamara Walser’s post, “Developing a Framework for a Program Evaluation Thesis or Dissertation,” here on Route One Evaluation. This informative piece highlights the essentials of graduate school writing on evaluation. I’d just like to expand on one thing, based on my evaluation experience.

You don’t have to choose one evaluation approach to follow like a template. Indeed, one way to demonstrate your competency as a researcher is to show your understanding of multiple methods by combining them. Yes: you can develop your own approach to fit your project, as long as you explain what you are doing to conduct the evaluation and how your approach is based on effective practices.

Off-the-Shelf Approaches Don’t Always Fit

People sometimes use different words to describe the same thing. Approach, design, and method may all be used to describe the unique framework that guides all aspects of how you’ll go about conducting your research.

Within that overarching framework, you can choose from a bunch of theoretical models and approaches. You can find many examples in books, articles, and websites such as BetterEvaluation, the Research Methods Knowledge Base, and Free Resources for Program Evaluation and Social Research Methods. Some approaches that evaluators frequently use are case study, mixed methods, qualitative evaluation, field experiment, summative evaluation, and formative evaluation.

When you get into the real world, however, you will find that program managers want answers to many questions at once. To get meaningful information that helps answer all of those questions, you often need to take what you learn from existing approaches and design an approach that fits the specific situation.

Three Ways to Construct Your Own Evaluation Approach

Here are three ways that you can create your own evaluation approach to fit the context and questions at hand.

Using Multiple Approaches

For my own dissertation, I followed the excellent advice of my advisor, Dr. Marv Mandell, for studying a dialogue/action circles program on racism in the Baltimore region. Instead of specifying a particular “approach,” I described what I would do to answer the questions and how what I was doing was based on effective evaluation practices. I guess you could call it a kind of “two-phase approach.”

  • In the first phase, I used what you could call a qualitative approach. I conducted individual and focus group interviews with stakeholders to identify evaluable goals that were relevant to them. This was based on general practices for conducting an effective evaluation, which recognize understanding the program you’re evaluating as an important step. (For more on this, see my recent article on Understanding Your Program.)
  • In the next phase, I examined the impacts of the circles in terms of the goals identified in phase one. For that phase, I used what you could call a mix of two research methods. I 1) analyzed qualitative data from participant feedback forms and 2) conducted a case study of two circles. The case study involved a mix of unobtrusive observation and interviews with participants.

It was a lot of work, but I believe I got more useful results—and learned a lot more—than if I’d just followed one off-the-shelf approach.

For other evaluations that I’ve conducted and managed for customers since, we’ve used different approaches and methods and done things in a different order, because each evaluation should fit the timeline, scope, and focus of its unique situation.

Using Parts of Different Approaches

As with most evaluations that I’ve been involved with, the evaluation that I conducted for my dissertation used parts of what evaluators often call “impact evaluation,” “process evaluation,” and “formative evaluation.” I examined the impacts the circles had, the program’s goals and its most and least helpful parts, and stakeholders’ recommendations to inform future changes to the program.

Create a Customized Evaluation Approach

It is even possible to create a completely new approach. For his dissertation, my business partner, Steve Wallis, ended up inventing a new approach to evaluating and integrating theories, which led to the technique of Integrative Propositional Analysis. It builds on related streams of research across philosophy, studies of conceptual systems and systems thinking, and the technique of Integrative Complexity.

The same kinds of innovation are certainly possible for evaluating programs and policies.

Devise Your Own Evaluation Approach

Design a customized approach for the specific situation of your study. Remember, you are the expert on your study, and tailoring the approach to it is what experts do. When you use a custom-fit approach, you’re more likely to get evaluation results that are highly useful for benefiting the program and the people it serves and for filling knowledge gaps in your field.

Bernadette Wright, PhD, is Director of Research and Evaluation at Meaningful Evidence, LLC, where she helps non-profit organizations get the meaningful and reliable research they need to increase their program results and make a bigger difference.




What do you want to know from program evaluation?

Program evaluation has evolved to serve many purposes. A distinction among its basic purposes is well captured by the concepts of allocative and operational efficiency described by Donaldson and Gerard in their 2005 book on healthcare economics.

My use of their work in program evaluation boils down to two questions:

  • Is this worthwhile to do?
  • How can results be maximized?

You can learn a lot from program evaluation during program development and throughout implementation by considering these two questions.