Route One Evaluation


Did you know that evaluability assessment aligns well with systems thinking and addressing program complexity?

Although evaluability assessment was introduced in the 1970s as a pre-evaluation activity to determine if a program was ready for outcome evaluation, it has evolved into an approach that can be used at any point in a program’s lifecycle and is no longer exclusively tied to quantitative outcome designs.

Tenets that are central to evaluability assessment are also central to addressing program complexity: stakeholder involvement, multiple perspectives, and feedback.

Mike Trevisan and I will present a session at next week’s American Evaluation Association conference, “Using Evaluability Assessment to Address Program Complexity,” in which we’ll describe some specific strategies that reinforce the centrality of stakeholder involvement, multiple perspectives, and feedback in evaluability assessment work. This will include discussion of case examples where these strategies were used.

We hope to see you there!


Evaluation Approaches: You Don’t Need to Choose Just One

by Bernadette Wright, Guest Blogger

If you’re conducting an evaluation for your thesis or dissertation, I recommend Tamara Walser’s post, “Developing a Framework for a Program Evaluation Thesis or Dissertation,” here on Route One Evaluation. This informative piece highlights the essentials of graduate school writing on evaluation. I’d just like to expand on one thing, based on my evaluation experience.

You don’t have to choose one evaluation approach to follow like a template. Indeed, one way to demonstrate your competency as a researcher is to show your understanding of multiple methods by combining them. Yes: you can develop your own approach to fit your project, as long as you explain what you are doing to conduct the evaluation and how your approach is based on effective practices.

Off-the-Shelf Approaches Don’t Always Fit

People sometimes use different words to describe the same thing. Approach, design, and method may all be used to describe the unique framework that guides all aspects of how you'll go about conducting your research.

Within that overarching framework, you can choose from many theoretical models and approaches. You can find examples in books, articles, and websites such as BetterEvaluation, the Research Methods Knowledge Base, and Free Resources for Program Evaluation and Social Research Methods. Some approaches that evaluators frequently use are case study, mixed methods, qualitative evaluation, field experiment, summative evaluation, and formative evaluation.

When you get into the real world, however, you will find that program managers want answers to many questions at once. To get meaningful information that helps answer all of them, you often need to draw on what you learn from existing approaches to design one that fits the specific situation.

Three Ways to Construct Your Own Evaluation Approach

Here are three ways that you can create your own evaluation approach to fit the context and questions at hand.

Using Multiple Approaches

For my own dissertation, I followed the excellent advice of my advisor, Dr. Marv Mandell, for studying a dialogue/action circles program on racism in the Baltimore region. Instead of specifying a particular “approach,” I described what I would do to answer the questions and how what I was doing was based on effective evaluation practices. I guess you could call it a kind of “two-phase approach.”

  • In the first phase, I used what you could call a qualitative approach. I conducted individual and focus group interviews with stakeholders to identify evaluable goals that were relevant to them. This was based on general practices for conducting an effective evaluation, which recognize understanding the program you’re evaluating as an important step. (For more on this, see my recent article on Understanding Your Program.)
  • In the next phase, I examined the impacts of the circles in terms of the goals identified in phase one. For that phase, I used what you could call a mix of two research methods. I 1) analyzed qualitative data from participant feedback forms and 2) conducted a case study of two circles. The case study involved a mix of unobtrusive observation and interviews with participants.

It was a lot of work, but I believe I got more useful results—and learned a lot more—than if I’d just followed one off-the-shelf approach.

For other evaluations that I’ve conducted and managed for customers since, we’ve used different approaches and methods and done things in a different order, because each evaluation should fit the timeline, scope, and focus of the unique situation.

Using Parts of Different Approaches

As with most evaluations that I’ve been involved with, the evaluation that I conducted for my dissertation used parts of what evaluators often call “impact evaluation,” “process evaluation,” and “formative evaluation.” I examined what impacts the circles had, the goals and the most helpful and least helpful parts of the program, and stakeholders’ recommendations for informing future changes to the program.

Create a Customized Evaluation Approach

It is even possible to create a completely new approach. For his dissertation, my business partner, Steve Wallis, ended up inventing a new approach to evaluating and integrating theories, which led to the technique of Integrative Propositional Analysis. The technique builds on related streams of research across philosophy, studies on conceptual systems and systems thinking, and the technique of Integrative Complexity.

The same kinds of innovation are certainly possible for evaluating programs and policies.

Devise Your Own Evaluation Approach

Design a customized approach for the specific situation of your study; that’s what experts do. Remember, you are the expert on your specific study. When you use a custom-fit approach, you’re more likely to get evaluation results that are highly useful for benefiting the program and the people it serves and for filling knowledge gaps in your field.

Bernadette Wright, PhD is Director of Research and Evaluation at Meaningful Evidence, LLC, where she helps non-profit organizations to get the meaningful and reliable research they need to increase their program results and make a bigger difference.



What do you want to know from program evaluation?

Program evaluation has evolved to serve many purposes. A basic distinction among those purposes is well captured by the concepts of allocative and operational efficiency described by Donaldson and Gerard in their 2005 book on healthcare economics.

My use of their work in program evaluation boils down to two questions:

  • Is this worthwhile to do?
  • How can results be maximized?

You can learn a lot from program evaluation during program development and throughout implementation by considering these two questions.


Why Conduct a Program Evaluation Thesis or Dissertation?

In my first blog post, I introduced a framework for conducting a program evaluation thesis or dissertation. I’m an advocate for these studies, particularly for students in professional practice degree programs—the very students who can use program evaluation to benefit their workplaces. Many will be expected to do so.

There are three main challenges to conducting a program evaluation thesis or dissertation. On the bright side, these challenges provide opportunities to move the discipline of program evaluation forward and impact positive change.


Lack of understanding of program evaluation among faculty and evaluation clients/stakeholders: Although faculty know how to conduct research, many have limited understanding of program evaluation. Similarly, the client and/or key program contacts for whom the evaluation is being conducted may lack understanding of program evaluation.

Lack of understanding of program evaluation among students: Students who aren’t in program evaluation degree programs typically have limited, if any, coursework in program evaluation. They take courses in research methods where program evaluation may be briefly covered, often mistakenly, as a type of research. Some will take a dedicated course in program evaluation, or maybe even two, if available and encouraged.

Acceptability of a program evaluation dissertation: Some programs and faculty don’t agree that conducting an applied study, such as a program evaluation, is appropriate for a dissertation. This is often related to the first listed challenge and misunderstandings about program evaluation. Truth is, a quality program evaluation is often more difficult to pull off than a quality research study. It requires strong technical skills AND strong non-technical skills. It also requires additional sections in the dissertation…for example, to discuss stakeholder involvement and standards of quality program evaluation.


Building evaluation capacity: When students conduct program evaluation thesis and dissertation studies, students, faculty, and evaluation clients and stakeholders learn about program evaluation and its applicability to their fields.

Contributing to local and academic knowledge: Most agree that a quality program evaluation contributes local knowledge that can be applied directly to decisions about program improvements, expansion, and so on. If evaluation findings are interpreted in the context of other relevant studies, a program evaluation thesis or dissertation can also contribute to academic knowledge. Further, students can investigate an aspect of evaluation methodology while conducting the study, thus also contributing academic knowledge about program evaluation.

Promoting program evaluation: By building evaluation capacity and contributing to local and academic knowledge through program evaluation thesis and dissertation studies, we promote the discipline. Perhaps best said by Michael Morris (1994) in his article on the single course in program evaluation…

Although a little knowledge can be a dangerous thing, program evaluation is a field in which total ignorance is much worse. Evaluation is most likely to achieve its dual goals of demonstrating scientific credibility and bettering the human condition in an environment where it is not just the professional evaluation community that has access to relevant knowledge and skills (p. 57).

Additional Reading…

Framework for Conducting a Program Evaluation Thesis or Dissertation

An article Mike Trevisan and I wrote about conducting evaluability assessment thesis and dissertation studies—although the focus is on evaluability assessment, the implications are relevant to program evaluation theses and dissertations in general.



Developing a Framework for a Program Evaluation Thesis or Dissertation

I often chair or serve on committees of students conducting a program evaluation thesis or dissertation and am developing a framework to support faculty and students in this work. I introduced the framework in a presentation during the Graduate Student and New Evaluator business meeting at the 2015 American Evaluation Association conference in Chicago. The evolving framework includes six key points:

  • Consider the evaluation problem
  • Understand the program and program context
  • Involve stakeholders
  • Choose an evaluation approach
  • Establish evaluation quality
  • Get the evaluation used

The specifics of how to apply these when conducting a program evaluation thesis or dissertation study are available as resources on my website, Route One Evaluation. I welcome questions and feedback on the framework. My next blog post will focus on challenges and opportunities associated with program evaluation thesis and dissertation work.