During my graduate studies, I had the chance to learn how coaches can collect data and analyze it effectively. To get hands-on experience, I conducted a program evaluation of SPU’s Digital Education Leadership (DEL) graduate surveys. Students complete two surveys: the End of Program survey and the Completer’s survey. The End of Program survey is sent to students before graduation and asks them to evaluate various aspects of the program. The Completer’s survey is sent six months after graduation and assesses whether the DEL program has equipped students to carry out new roles and responsibilities related to digital education.

The goal of my program evaluation was to answer one question: in what ways, if any, do the surveys need to be amended? I used a multi-informant approach, soliciting feedback from the program chair, the assessment director, and past DEL graduates. I collected quantitative and qualitative data through an electronic survey and a follow-up Zoom discussion. I analyzed the results by coding the Likert-scale responses, checking for outliers, and identifying themes in the written and verbal responses. After evaluating the results, I proposed several changes to the graduate surveys, including:
- Rewording various questions for clarity
- Changing the Likert scales to collect more helpful information
- Updating the ISTE standards to match the new Coaching Standards
- Adding a new section, Program Structure, which evaluates how the program is structured and how content is delivered instead of only asking whether objectives were met. This will give faculty and the program chair more valuable feedback on their teaching methods.
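The analysis steps mentioned above can be illustrated with a minimal sketch. The response data and the outlier threshold here are hypothetical, not from the actual evaluation; it simply shows one common way to code Likert labels as numbers and flag unusual responses:

```python
# Minimal sketch of Likert-scale coding and outlier screening.
# The responses below are made-up illustrative data.
from statistics import mean, stdev

# Hypothetical 5-point Likert coding
SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Agree", "Strongly disagree", "Agree"]
scores = [SCALE[r] for r in responses]

# Flag responses more than 1.5 standard deviations from the mean
# (a loose threshold chosen only for this tiny illustrative sample)
m, s = mean(scores), stdev(scores)
outliers = [x for x in scores if abs(x - m) > 1.5 * s]
print(m, outliers)  # the lone "Strongly disagree" (coded 1) is flagged
```

In practice a flagged score is not discarded automatically; it prompts a closer look at that respondent's written comments to see whether the low rating reflects a real concern.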
This process stretched me and helped me grow as a coach. I had never conducted a program evaluation before, and I learned a great deal through the experience and by reading Colin Robson’s book Small-Scale Evaluation. I now understand how important program evaluations are to a digital learning coach. Instructional technology is rapidly evolving, and many schools are pioneering new ways to leverage technology to improve student learning. We must take the time to evaluate whether our technology and programs are actually working as we hoped they would. Program evaluations give voice to different stakeholders and can help determine the next steps.
To read more about my work with this standard, you can use the drop-down menu above or the buttons below to navigate to a specific performance indicator.
Robson, C. (2017). Small-scale evaluation: Principles and practice. SAGE Publications.
ISTE. (n.d.). ISTE standards for coaches. Retrieved from https://www.iste.org/standards/for-coaches