Evaluating Teachers and Teaching in a Global Pandemic:
Resources for WFU Schools and Departments
These guidelines, prepared by the Center for the Advancement of Teaching and the Office of Online Education, are meant to serve as a resource for units making decisions about how they will evaluate teaching in the fall. Many units have asked for evidence-informed guidance and examples of models they might adopt as they rethink their practices this year. What follows is a response to that request, and only that request. Neither office wishes to dictate processes to units who have the best understanding of their culture, their faculty, and their disciplinary standards.
- Evaluate the Whole Teacher, not a Specific Teaching Moment
In a moment when otherwise excellent teachers are experimenting with new teaching strategies, and teaching in challenging environments beyond their control, it is worth considering whether in-the-moment effectiveness should be weighed as heavily as an instructor's potential and general approach to teaching (e.g., how they prepared, and how they learned from strategies that didn't go as planned).
- Ask Students and Observers to Describe, not Evaluate
Students and peers can provide useful information to be used in an evaluative process, but this information should not be a substitute for that process. By collecting only descriptive information, the evaluator(s) are responsible for determining whether standards have been met, NOT the students or observers.
- The More Sources, the Better
No single source of evidence tells us everything we want to know about a teacher and their teaching. We also know that many traditionally used sources of evidence are subject to small but systematic biases unrelated to teaching effectiveness. To overcome these challenges, evaluators should collect and synthesize multiple sources of evidence, each offering unique insight into the teacher and their teaching.
- Teachers are an Important, if not Essential, Source of Evidence
While there are real challenges with evaluative processes that rely on self-assessment alone, no one is better positioned to provide insight into the details of their teaching than the teacher themself. Structured self-reflection can reveal how they approach their teaching, and they can provide essential context for those interpreting other forms of evidence (e.g., student feedback or peer observations). This is especially important in a semester when context will be particularly determinative of results.
- If Specific Behaviors Matter, Specify Them in Advance
If you decide to evaluate teaching effectiveness, and define teaching effectiveness in terms of specific teacher behaviors (e.g., organization, clarity, rapport), you should specify those behaviors as early in the semester as possible. Although there is rarely agreement about which behaviors matter, there will be even less agreement when instructors are teaching in new modalities (e.g., if synchronous learning experiences are important to your department, make that explicit before the semester begins). Specifying what you care about early on will ensure that instructors have ample opportunities to meet and exceed your expectations before they are evaluated.
- If Student Outcomes Matter, Measure them Directly
If you decide to evaluate teaching effectiveness, and believe student learning is the definition of effectiveness, do not assume that student ratings can serve as a reasonable proxy for student learning. The research on student ratings suggests there may be a correlation between ratings and learning, but even in the best-case scenario the correlation is too small to allow evaluators to make accurate judgments about individual instructors. Direct evidence of student learning (e.g., student work collected as part of a teaching portfolio) is the best way to document this outcome.
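A quick back-of-the-envelope simulation can illustrate why a modest correlation is not enough for individual-level judgments. The correlation value below (r = 0.3) is an assumption chosen for illustration, roughly at the upper range reported in the ratings literature, not a figure from this document:

```python
import random, math

random.seed(0)

# Illustrative assumption: ratings and actual learning are bivariate
# normal with a modest correlation r = 0.3.
r = 0.3
n = 100_000

def draw_instructor():
    """Return (true_learning, rating) correlated at r."""
    learning = random.gauss(0, 1)
    rating = r * learning + math.sqrt(1 - r * r) * random.gauss(0, 1)
    return learning, rating

# Of two instructors compared head-to-head, how often does the one
# with the higher rating also have the higher true learning gain?
hits = 0
for _ in range(n):
    l1, r1 = draw_instructor()
    l2, r2 = draw_instructor()
    if (r1 > r2) == (l1 > l2):
        hits += 1

print(f"Higher-rated instructor actually better: {hits / n:.1%}")
# With r = 0.3 this lands near 60% -- only modestly better than a coin flip.
```

In other words, under this assumed correlation, picking the "better" of two instructors by their ratings would be wrong roughly four times in ten, which is why direct evidence of learning is preferable for individual evaluation.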
- Evidence Matters. Interpretation Matters More.
The success of any evaluative model depends on the ability of the evaluators to correctly understand, interpret, and apply the evidence before them. This means that less-than-ideal evidence can be put to good use if properly contextualized, but also that carefully collected evidence can lead to less-than-ideal outcomes if evaluators do not understand what their evidence is actually communicating. Before you make changes to the evidence you collect, consider whether your time might be better spent first improving the process you use to interpret that evidence.
- Use Separate Processes for Separate Purposes
There are many reasons institutions, departments, and individual instructors collect student feedback. Using the same survey to meet all of these needs can undermine the goals of each. If you want to collect data about student experiences and preferences at the aggregate level, separate that process from the data you collect for the summative evaluation of teachers. Likewise, instructors should be able to collect feedback for their own purposes that is not used in the formal evaluative process.
Potential Standards for Comprehensive Evaluation
Borrowed and Adapted from Follmer Greenhoot, A., Ward, D., & Bernstein, D. (2017). Benchmarks for Teaching Effectiveness.
- Student Learning
Did students achieve the learning outcomes of the course or experience significant growth? Were all students successful, or only particular subgroups?
- Course Design
Were the goals for the course appropriate for the discipline, course, and level of students? Did the assessments provide reasonable evidence of whether those goals were achieved? Did the course activities prepare students for their assessments by helping them master the course goals? Were the design choices evidence-informed?
- Teaching Practices
Did the instructor incorporate evidence-informed strategies that facilitated engagement with the material, other students, and the instructor? Was the instructor enthusiastic, organized, and clear? Did they set high expectations and build rapport with students?
- Course Climate
Does the instructor foster a classroom climate that is respectful and supportive of diverse learners? Do students feel comfortable engaging in class, collaborating with peers, and seeking out support when necessary?
- Reflective and Iterative Growth
Is the instructor curious about student learning, eager to experiment, and willing to adapt in light of what they learn? Do they seek out new knowledge about teaching and regularly participate in teaching development programs?
- Mentoring and Advising
Does the instructor support students and student development beyond their class? Do they initiate and sustain mentorship relationships with multiple students? Do they approach their advising duties as if they were an extension of their teaching?
- Involvement in Teaching Service
Does the instructor contribute to department or university initiatives that advance the teaching mission of the institution? Do they lead professional development initiatives at the department or university level?
- Contribution to the Scholarship of Teaching & Learning
Does the instructor advance our knowledge of teaching and learning by contributing to the scholarship of teaching and learning?
Potential Sources of Evidence
- Self-Assessment/Teaching Narrative
- Teaching Materials
- Peer-Review of Teaching Materials
- Student Learning Outcomes
- Student Feedback
- Teaching Recognition
- Teaching Portfolios
Sample Rubrics, Checklists, and Surveys
Sample Item Bank
The following is a list of sample response items drawn from the instruments above, as well as from the question banks in Kember, D., & Ginns, P. (2012). Evaluating Teaching and Learning: A Practical Handbook for Colleges, Universities and the Scholarship of Teaching. Routledge; and Chism, N. V. N., & Chism, G. W. (2007). Peer Review of Teaching: A Sourcebook (2nd ed.). Anker.
Arreola, R. A. (2000). Developing a Comprehensive Faculty Evaluation System: A Handbook for College Faculty and Administrators on Designing and Operating a Comprehensive Faculty Evaluation System (2nd ed.). Anker.
Centra, J. A. (1993). Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness. Jossey-Bass.
Chism, N. V. N., & Chism, G. W. (2007). Peer Review of Teaching: A Sourcebook (2nd ed.). Anker.
England, J., Hutchings, P., & McKeachie, W. J. (1996). The Professional Evaluation of Teaching. American Council of Learned Societies.
Kember, D., & Ginns, P. (2012). Evaluating Teaching and Learning: A Practical Handbook for Colleges, Universities and the Scholarship of Teaching. Routledge.
Seldin, P., Miller, J. E., & Seldin, C. A. (2010). The Teaching Portfolio: A Practical Guide to Improved Performance and Promotion/Tenure Decisions (4th ed.). Jossey-Bass.
Tobin, T. J., Mandernach, B. J., & Taylor, A. H. (2015). Evaluating Online Teaching: Implementing Best Practices. Jossey-Bass.