Informal Evaluations
In my last post, I talked about the final class of my new course “Between Animals and Gods” and promised to say more about the informal student evaluations I give at the end of courses.
Like students everywhere, mine take official online course evaluations before they access their final grades. At Georgia State, the College of Arts and Sciences administers a standard list of 17 questions with space at the end for typed comments. Students in courses in Religious Studies, English, Chemistry, History, Psychology, and Physics all take the same evaluations. As you might expect, the evaluations focus on the instructor’s activities in the course: “explained the goals of this course clearly,” “gave assignments related to the goals of this course,” “was well prepared,” “responded constructively and thoughtfully to questions and comments,” “assigned grades fairly,” “stimulated my thinking and gave me new insights into the subject,” and “related well to students.” The final question—visually set apart from #1-16 by several blank lines—asks students to assess the course as a whole: “Considering both the limitations and possibilities of the subject matter and course, how would you rate the overall teaching effectiveness of the instructor?” In a perfect world, departments or instructors could customize (a portion of) their end-of-course evaluations so that they more closely measure the course within its field.
In my imperfect world, I supplement the official evaluation with an informal and voluntary evaluation that I hand out and collect in the final class. Most of the questions for this course’s evaluation asked for specific feedback about the ways we used class time, the assignments, and the roles various technologies played in students’ learning. You can see, however, that I started the evaluation with a few questions about the readings. First, I asked students to report what percentage of the assigned reading they had completed.
I ask them about the reading because I want them to take a moment to assess their own investment in their learning. Knowing roughly how much a student has read gives me a better sense of how to read the rest of the evaluation. For example, undergraduates in this course self-reported reading between 25% and 90% of the assigned readings. (The average was 65%.) Reviewing the informal—and even the official—evaluations with this information in mind changes how I interpret them. Their responses to the next two questions—about the hurdles they faced in preparing for class and the appropriateness of the readings—will factor into my planning for a future version of this class.
Informal evaluations also give me an opportunity to ask specific questions about students’ experience of the course. For instance, this course included field trips, guest speakers, a writing consultant, and use of a class set of iPads. Each of these “extras” created additional work for me, so I want to know whether students found them enlightening or pointless. (Most appreciated having access to an iPad with the apps they needed to complete their assignments, and several mentioned how much they learned from guest speakers and our visit to the Carlos Museum.)
In addition to helping me fine-tune the class in the future, these evaluations give me a more detailed sense of the students’ (unmet) expectations. For example, the graduate students wanted more lectures (and fewer seminar-style discussions), and the undergraduates wished they could have chosen their own podcast groups. Some graduate students wanted to know about the final reflection assignment on day one; one said it would have affected how carefully she/he read and took notes. All of these are ideas I can weigh as I consider how to teach the course differently next time.
Finally, these evaluations are a handwritten record of students’ thoughts about the class. They’re part of my course archive: they join my draft syllabi, reading notes, lesson plans, emails, and this blog as a record of my first time teaching “Between Animals and Gods.”