As I teach a program evaluation course this semester, and as someone who began teaching program evaluation nearly 20 years ago (!), I find myself once again reflecting on how I define program evaluation and distinguish it from related fields and practices. My definitions and distinctions have certainly evolved over the years. Beyond the commonly held definitions and distinctions, I have come to advocate that program evaluation requires big-picture, evaluative questions to guide the work; data collection methods and sources chosen for the purpose of addressing those evaluation questions; and multiple sources of data. If these criteria are not met, it is not program evaluation. My evolving thoughts are captured in a document posted here. I would love your feedback!
Check out the latest addition to my Social Science Resources page: an organizing structure for addressing and assessing the trust and applicability of research findings. This work began years ago as a handout I created for an introductory research methods course, and it has evolved alongside my thinking about what trustworthy and applicable research means. It spans quantitative and qualitative research, bridging the two literatures for a holistic approach. I also advocate that researchers use strategies stereotypically associated with qualitative research to support trust and applicability in quantitative studies. I appreciate any and all feedback that I can incorporate to improve this work. Thank you in advance!