Abstract Information: The workshop will consist of five mini-modules introducing participants to methods for integrating generative AI into their evaluation practice. Modules include 1) a primer on generative AI and its use in evaluation; 2) ethical and responsible principles for GenAI-enabled evaluation practice; 3) prompt engineering basics; 4) chatbots for theory-based evaluation; and 5) AI-assisted multi-method analysis. Sessions will include lectures, practical demonstrations, interactive activities, and large-group discussions.
Relevance Statement: Artificial intelligence has reshaped, and will continue to reshape, the landscape of knowledge work—including program evaluation. This workshop equips participants with entry-level knowledge and practical skills to conduct evaluations in the age of AI and to practice AI-enabled evaluation. As AI technology advances, evaluators must develop fundamental AI literacy and expand their evaluation toolbox to remain relevant and competitive in the evaluation marketplace. Further, AI has the potential to translate into efficiency and effectiveness gains in evaluation processes and products—if integrated responsibly and thoughtfully. This workshop will provide participants with basic premises and principles for a baseline level of AI-enabled evaluation capacity. It is designed for evaluation practitioners, managers, commissioners, and other MERL practitioners who have been integrating, or would like to integrate, various AI tools, techniques, and tips into their evaluation practice.