1039 - Learning to Love Your Logic Model: Better Planning, Implementation, and Evaluation through Program Roadmaps.
Stream:
Wednesday, October 23, 2024
11:30 AM - 2:15 PM PST
Location: C125-126
Abstract Information: The bad rap on logic models in some quarters is well-deserved. What should be a flexible, practical tool often deteriorates into overly bureaucratic mandatory templates, mired in terminology that puzzles users and all but the most experienced evaluators. This course aims to recapture the original spirit and utility of logic modelling by emphasizing function over form. While we will cover the "usual suspect" components of the traditional logic model--activities and outcomes, inputs and outputs, mediators and moderators--we'll introduce concepts step by step and, at each point, show how insights from that step contribute (OR NOT) to a more thorough understanding of your program. More importantly, we'll show how logic models--customarily a tool in program evaluation--are even more useful in setting, assessing, and course-correcting strategy and implementation, even before the first iota of data is collected. These "process use" applications, while not denying the importance of logic models in setting an evaluation focus, excite planners and implementers and make the evaluator a welcome participant even at the earliest stages of program formation.
Relevance Statement: In the late 1990s there was a mini-"crisis of confidence" in evaluation, as many experts and thought leaders concluded simultaneously that most evaluations--whether methodologically rigorous or not--were not being used (whether to improve, cancel, or expand the program). Utilization-focused evaluation became the order of the day. Key to effective use is developing clarity and consensus on the description of the PROGRAM (NOT the evaluation). In the past this was often lacking: evaluation focus and questions might be decided while, beneath the surface, stakeholders held competing theories of change. Logic models or their equivalent became a practical tool for guiding these discussions, engaging stakeholders, and ensuring that all players shared the same theory about the program BEFORE jumping into discussion of evaluation focus, data collection methods, and analytic approaches. Relatedly, evaluators constantly complained that they were brought into program discussions too late and needed to be part of the conversation from the start. Because this class pitches logic models as a tool for strategic planning and implementation QUITE APART from their use as an evaluation tool, evaluators are poised to offer added value at the very start of strategizing. This too is essential if we want to ensure that evaluation is a key contributor to continuous program improvement.