
Humanizing Program Goals

I regularly teach a graduate course in educational program design and evaluation. One thing we work on is developing program goals. Understandably, students often default to developing goals similar to those they’ve seen before, which aren’t goals at all but metrics. They articulate goals as increases in standardized test scores, the number of books in the library, or attendance at parent nights. Mostly, though, they articulate goals as increases in standardized test scores.

I ask them if they chose education as a career because they wanted to increase standardized test scores. Of course not! *eyes roll*

Then we talk about all the great, valuable, powerful reasons they decided to teach and become educational leaders. This is the stuff program goals are made of.

On a recent flight I watched a TED Talk by Tristan Harris, “How Better Tech Could Protect Us from Distraction.” Tristan discourages us from settling for typical design goals and encourages us to consider deeper human design goals that articulate our values. This resonated with me, capturing what I’ve been pushing with students, colleagues, and clients. Design goals, program goals, policy goals, product goals…rooted in human values.

Evaluation Capacity Building as Superhero

As a program evaluator, I teach, research, and apply analytic methods. I help people identify, understand, and assess problems and solutions. This way of knowing and working is threatened by “problems” and “solutions” that aren’t based on systematic and sound analysis, but are simply sound bites that stir emotion. We need evaluation capacity building more than ever! The more people understand and value analytic methods, the less acceptable fake news, fake problems, fake solutions, and alternative facts will be. It’s a bird…it’s a plane…it’s Evaluation Capacity Building!

Evaluation Approaches: You Don’t Need to Choose Just One

by Bernadette Wright, Guest Blogger

If you’re conducting an evaluation for your thesis or dissertation, I recommend Tamara Walser’s post, “Developing a Framework for a Program Evaluation Thesis or Dissertation,” here on Route One Evaluation. This informative piece highlights the essentials of graduate school writing on evaluation. I’d just like to expand on one thing, based on my evaluation experience.

You don’t have to choose one evaluation approach to follow like a template. Indeed, one way to demonstrate your competency as a researcher is to show your understanding of multiple methods by combining them. Yes: you can develop your own approach to fit your project, as long as you explain what you are doing to conduct the evaluation and how your approach is based on effective practices.

Off-the-Shelf Approaches Don’t Always Fit

People sometimes use different words to describe the same thing. Approach, design, and method may all be used to describe the unique framework that guides all aspects of how you’ll go about conducting your research.

Within that overarching framework, you can choose from a bunch of theoretical models and approaches. You can find many examples in books, articles, and websites such as BetterEvaluation, the Research Methods Knowledge Base, and Free Resources for Program Evaluation and Social Research Methods. Some approaches that evaluators frequently use are case study, mixed methods, qualitative evaluation, field experiment, summative evaluation, and formative evaluation.

When you get into the real world, however, you will find that program managers want answers to many questions at once. To get meaningful information that helps answer all of those questions, you often need to draw on what you learn from existing approaches to design one that fits the specific situation.

Three Ways to Construct Your Own Evaluation Approach

Here are three ways that you can create your own evaluation approach to fit the context and questions at hand.

Using Multiple Approaches

For my own dissertation, I followed the excellent advice of my advisor, Dr. Marv Mandell, for studying a dialogue/action circles program on racism in the Baltimore region. Instead of specifying a particular “approach,” I described what I would do to answer the questions and how what I was doing was based on effective evaluation practices. I guess you could call it a kind of “two-phase approach.”

  • In the first phase, I used what you could call a qualitative approach. I conducted individual and focus group interviews with stakeholders to identify evaluable goals that were relevant to them. This was based on general practices for conducting an effective evaluation, which recognize understanding the program you’re evaluating as an important step. (For more on this, see my recent article on Understanding Your Program.)
  • In the next phase, I examined the impacts of the circles in terms of the goals identified in phase one. For that phase, I used what you could call a mix of two research methods. I 1) analyzed qualitative data from participant feedback forms and 2) conducted a case study of two circles. The case study involved a mix of unobtrusive observation and interviews with participants.

It was a lot of work, but I believe I got more useful results—and learned a lot more—than if I’d just followed one off-the-shelf approach.

For other evaluations that I’ve conducted and managed for customers since, we’ve used different approaches and methods and done things in a different order, because each evaluation should fit the timeline, scope, and focus of the unique situation.

Using Parts of Different Approaches

As with most evaluations that I’ve been involved with, the evaluation that I conducted for my dissertation used parts of what evaluators often call “impact evaluation,” “process evaluation,” and “formative evaluation.” I examined what impacts the circles had, the program’s goals and its most and least helpful parts, and stakeholders’ recommendations for informing future changes to the program.

Creating a Customized Evaluation Approach

It is even possible to create a completely new approach. For his dissertation, my business partner, Steve Wallis, ended up inventing a new approach to evaluating and integrating theories, which led to the technique of Integrative Propositional Analysis. The technique builds on related streams of research across philosophy, studies on conceptual systems and systems thinking, and the technique of Integrative Complexity.

The same kinds of innovation are certainly possible for evaluating programs and policies.

Devise Your Own Evaluation Approach

Design a customized approach for the specific situation of your study. That’s what experts do, and remember: you are the expert on your specific study. When you use a custom-fit approach, you’re more likely to get evaluation results that are highly useful for benefiting the program and the people it serves and for filling knowledge gaps in your field.

Bernadette Wright, PhD is Director of Research and Evaluation at Meaningful Evidence, LLC, where she helps non-profit organizations to get the meaningful and reliable research they need to increase their program results and make a bigger difference.


What do you want to know from program evaluation?

Program evaluation has evolved to serve many purposes. A basic distinction among those purposes is well captured by the concepts of allocative and operational efficiency described by Donaldson and Gerard in their 2005 book on healthcare economics.

My use of their work in program evaluation boils down to two questions:

  • Is this worthwhile to do? (allocative efficiency)
  • How can results be maximized? (operational efficiency)

You can learn a lot from program evaluation during program development and throughout implementation by considering these two questions.

Why Conduct a Program Evaluation Thesis or Dissertation?

In my first blog post, I introduced a framework for conducting a program evaluation thesis or dissertation. I’m an advocate for these studies, particularly for students in professional practice degree programs—the very students who can use program evaluation to benefit their workplaces. Many will be expected to do so.

There are three main challenges to conducting a program evaluation thesis or dissertation. On the bright side, these challenges provide opportunities to move the discipline of program evaluation forward and effect positive change.

Challenges

Lack of understanding of program evaluation among faculty and evaluation clients/stakeholders: Although faculty know how to conduct research, many have limited understanding of program evaluation. Similarly, the client and/or key program contacts for whom the evaluation is being conducted may lack understanding of program evaluation.

Lack of understanding of program evaluation among students: Students who aren’t in program evaluation degree programs typically have limited, if any, coursework in program evaluation. They take courses in research methods in which program evaluation may be briefly, and often mistakenly, covered as a type of research. Some will take a dedicated course in program evaluation, or maybe even two, if available and encouraged.

Acceptability of a program evaluation dissertation: Some programs and faculty don’t agree that conducting an applied study, such as a program evaluation, is appropriate for a dissertation. This is often related to the first listed challenge and misunderstandings about program evaluation. Truth is, a quality program evaluation is often more difficult to pull off than a quality research study. It requires strong technical skills AND strong non-technical skills. It also requires additional sections in the dissertation…for example, to discuss stakeholder involvement and standards of quality program evaluation.

Opportunities

Building evaluation capacity: When students conduct program evaluation thesis and dissertation studies, students, faculty, and evaluation clients and stakeholders learn about program evaluation and its applicability to their fields.

Contributing to local and academic knowledge: Most agree that a quality program evaluation contributes local knowledge that can be applied directly to decisions about program improvements, expansion, etc. If evaluation findings are interpreted in the context of other relevant studies, a program evaluation thesis or dissertation can also contribute to academic knowledge. Further, students can investigate an aspect of evaluation methodology while conducting the study, thus also contributing academic knowledge about program evaluation.

Promoting program evaluation: By building evaluation capacity and contributing to local and academic knowledge through program evaluation thesis and dissertation studies, we promote the discipline. Perhaps it is best said by Michael Morris (1994) in his article on the single course in program evaluation:

Although a little knowledge can be a dangerous thing, program evaluation is a field in which total ignorance is much worse. Evaluation is most likely to achieve its dual goals of demonstrating scientific credibility and bettering the human condition in an environment where it is not just the professional evaluation community that has access to relevant knowledge and skills (p. 57).

Additional Reading…

Framework for Conducting a Program Evaluation Thesis or Dissertation

An article Mike Trevisan and I wrote about conducting evaluability assessment thesis and dissertation studies—although the focus is on evaluability assessment, the implications are relevant to program evaluation theses and dissertations in general.