
Like so many of us, I’ve spent the past two years in a paralyzed panic over artificial intelligence’s effects on my classroom. I teach undergraduates, mainly gen ed philosophy courses, and writing has been a key component of all my courses. When ChatGPT hit the mainstream, it became a constantly looming presence, threatening to devour every part of teaching that I care about. I didn’t “wrestle” with it. Nothing so active and dignified. I went on an emotional roller coaster of ignoring it, freaking out, wishing it away, catastrophizing, and then ignoring it again.

It didn’t work. AI was still there. I tried writing about it, but that just made me feel worse. And my writing was awful, page upon page of “Oh my god, the sky is falling.” Depressing, unhelpful – and bad writing. I trashed every single page.

Some of my colleagues argue that we must incorporate this wonderful new tool into our teaching. We should encourage students to use AI for “basic” tasks like summarizing texts and outlining arguments, freeing them up for more advanced work. Others point out that summarizing and outlining are advanced tasks for many of our students since they don’t know how to do either, and that students need to first acquire skills like summarizing in order to later acquire more advanced skills. To make that learning possible, they argue, we need to build protective walls to keep AI out of our classes. Several want our Writing Center to ban Grammarly and its ilk.

I agree with the second group that our students usually don’t summarize or outline well. And I agree that allowing students to outsource tasks they haven’t yet mastered to AI will make it harder for them to learn to read, to write, and, most importantly, to think critically. I’d love to operate in a sheltered space behind protective walls. But I don’t think the walls will hold.

Hence my freaking out. But after two years, I have finally managed a few moments of calm thought, aided by James Lang’s wonderful blog post.
I’ve come to the following key conclusions:

- AI-assisted writing isn’t going away. Damn it.
- We aren’t reliable AI detectors, and we don’t have reliable automated AI detectors (although we can catch blatant and unskilled uses).
- If we continue to assign take-home essays, some of our students will use AI to write them. We won’t know how many or how much they will use it, and we won’t catch many of them.
- Take-home essays are important pedagogical tools, and I don’t as yet have any promising substitutes.

My immediate task is to figure out how to navigate my classroom spaces with all this and my own teaching goals in mind. What do I want to prioritize, and what am I willing to sacrifice?

It is tempting to prioritize not being duped. And making not being duped the priority has the clear advantage of producing simple action steps: No more take-home essays. Switch to lockdown browsers or old-school blue book exams.

Following James Lang, I am not switching, at least not yet. This is because I think there are more important things at stake than minimizing the risk of cheating.

As I listen to colleagues who are switching to in-class exams, I am thinking about why I’ve been avoiding them for my entire teaching career: They do not test what I want to teach.

Switching from essay-writing to in-class exams requires moving from messy and open-ended discussion towards lectures. I don’t want to make that move. My students have enough lecture classes. They don’t need another one from me. But they do need what I am good at teaching. My students need a class that focuses on discussion and self-reflection, inviting them to engage each other and the materials and think through their own lives, actions, and values.
I want to teach those classes, and then I want my assessments to provide opportunities for students to chew over things we’ve talked about and the views they’ve encountered in class, developing arguments, reflecting on their experience, pursuing thoughts and objections, and seeing where it all takes them. Take-home essays do that.

But assigning those essays leaves me wide open to cheating. So what do I do in my classes to reduce the risk?

- I include more low-stakes writing.
- I make the papers worth less and include plenty of scaffolding and in-class work on them.
- I grade a little differently, rewarding bland, generic, but correct writing less and messy and creative writing more.
- I add some quizzes – and I am experimenting with using AI to draft multiple-choice questions.
- I keep an eye out for obvious AI misuse and I use the built-in detection software. But I try not to obsess about it, and I try to be OK with knowing that some students will get away with things they shouldn’t (this part is definitely a work in progress).

Most importantly, I try to connect with my students and I try to convince them that I want to hear what they think, and that their opinions matter to me and to the world. I encourage them to draw on class discussions and their own experiences when they write, and I encourage them to say what AI cannot say because AI is not them.

I’m also looking around for guidance from others. Reading a Chronicle of Higher Education newsletter, I just came across Kimberly Kirner’s writing assessments. She sets out to help her students develop their own voices, and she grades based on the students’ progress towards goals that they develop together. I plan to learn from Kirner and others like her over the summer and experiment with her assignments next semester.

AI is here to stay, and our students have access to it. It’s not the situation I would have chosen, but it is what is in front of us.
It will be on us as educators to guide students so that they can still develop as critical thinkers and writers. That work has many parts, and thankfully we don’t all have to do all of it. Despite the pep talks from the AI-optimists on my campus, I don’t see myself working with students to help them write better AI prompts, and I don’t yet see a good role for AI in my courses. But reading Kirner and Lang reminds me that there is important work here that I am suited for and that I care about: I can help students see that they and their voices matter, and I can help them develop their voices and become better informed so that they can speak and write more effectively.

Notes & Bibliography

Kimberly Kirner is Professor of Anthropology at California State University at Northridge.
James Lang is Professor of Practice at the Kaneb Center for Teaching Excellence at the University of Notre Dame.

Recently I attended the Wabash Center’s Curiosity Roundtable, where we heard from Dr. Iva Carruthers in one session. Her presentation was titled “AI and Ubuntu in the Age of Metanomics.” She had us thinking about what it means to be human, how we talk about humanity in this new age of AI in all its forms, what theology has to offer, and how different sources of knowledge, different intelligences, all contribute to our being. Is being human about knowledge or about wisdom? About thinking or about relationship? It was a rich conversation that didn’t once bring up how we deal with issues of students using ChatGPT in class.

As I thought about our prompt – what do I do with this conversation when I return to my institution? – my initial response was: resist the AI! And then I thought more deeply. The question is really how to ground ourselves more deeply in what it means to be human. The short answer is that we engage more in the world and with each other, but how do I do that? How do I help my students to do that?

Unsurprisingly, my answer is to spend more time outdoors together. So now I have another reason in my backpack to use when evangelizing for outdoor teaching. Hear me out.

The best teaching happens outdoors because it involves a broader sense of teaching than mere lecture content. It’s the things I’ve been talking about in this blog. Students are more likely to play outdoors because they feel a freedom in the wind and the sun and in “getting away with” not being “in class” as they’ve always understood classrooms. Play is a deeply important part of learning to be human. Children play at being adults long before they are adults, and the play, which is about imitation and experimentation in spaces of controlled risk, develops the skills of adulthood in the child. It’s similar for students.
They play with ideas, imitating and experimenting in a low-risk space, and so grow into their understandings.

In addition to the content they play with, students play with each other more readily outdoors. The freedom of movement makes it easier to get into groups and to interact with group members. They sit closer together and find themselves more present to one another when they only have to focus on each other, and the space, with its greens and blues, its warmth and wind, is calmer and less distracting than any video screen. Longer immersive classes do this even more (see my previous posts on the way immersive classes facilitate presence and community), but even shorter classes outside the normal environment will help students see one another as humans and create bonds.

Play is also, as I understand it, an important part of learning to be an animal (see the chapter by Kay Redfield Jamison, “Playing Fields of the Mind,” in her Exuberance: The Passion for Life [New York: Alfred A. Knopf, 2004]). So we learn to be more human and at the same time become more connected to other animals who also play, being reminded that we are part of creation. Along with this, when they are outside, students are more immersed in the material world, and their phones are less attached to them. They are distracted by more interesting and more real things than whatever is on their screens. When people have a greater immersion in the real world, they gain more ability to discern the fake aspects of AI because they know the real thing.

When students have to work together, especially on an immersive trip where they depend on each other physically (like a wilderness trip), they learn what real friendship and connectedness look like and perhaps can distinguish the real from the fake in virtual worlds.
In a good outdoor class, or a good indoor class that requires students to work together to create something, they learn what humanity looks like in all kinds of forms beyond what AI, with its implicit biases, is telling them. They learn empathy and compassion and relationship, the stuff that makes human beings human and which AI can only “know” about, or at best imitate. These are the things that teaching outdoors, and prioritizing interactions with the material world and with real people unmediated by screens, makes possible.

My version is outdoor teaching, and I won’t stop evangelizing for it, but we can just as easily think of this as out-of-the-classroom teaching. Any place where we can encourage (or require) students to engage their worlds and the people in them is a place where we are saying that our AI world is not the final word. Requiring some community engagement as part of the class, or a museum visit, or a technology fast, or a group project that must be done only in person: all of these encourage play and presence and learning to distinguish reality from virtual reality. And if our clergy and theologians were trained this way, what a real world we might have. May it be so.

Just as we are gaining aplomb in maneuvering all of the bells and whistles of Zoom, Facebook Live, and Flipgrid, technology pushes the academy to catch up once again. The world of artificial intelligence and robot technology is at the door, not waiting for anyone to open it, but forcefully dismantling the hinges. As many institutions turn their faces towards another academic year, faculty, staff, and students must also come face-to-face with that which mimics human likeness but lacks flesh and blood. ChatGPT and its kindred models are causing many professors to reboot syllabi, reconstruct lesson plans, and reorient course construction.

ChatGPT, or Chat Generative Pre-Trained Transformer, is a type of artificial intelligence. This AI is in essence a chatbot that communicates with people in a proto-human fashion. It also has the “intelligence” to generate unique texts. ChatGPT answers questions via prompts humans provide, composes essays, offers advice, and even gives wellness tips. This generative AI automatically produces content as if it is merely chatting. Whereas the best-known model is ChatGPT, there are other forms of generative AI tools. Swimming in the AI waters are Microsoft Bing, Google Bard, OpenAssistant, HuggingChat, Trinka, AutoGPT, and RizzGPT, to name a few. So as not to leave Jesus out of the mix, a newly developed Christian ChatGPT, BibleMate, purportedly fosters spiritual growth and development.

Sounds okay, right? It’s another resource for students, yes? Perhaps this tool could carry some of the teaching water? A Bible supplement can’t be bad, can it? Maybe. Maybe not.

There could be some benefits to ChatGPT and its family of AI. Students have another research tool. If anyone needs a quick fix, ChatGPT immediately answers when asked. With so much online learning precipitated by Covid-19, such generative chats could lead towards additional academic access.
Furthermore, the text-to-speech formats may assist with able-bodiedness and neurodiversity accommodations within the classroom. AI as a teaching tool has the potential to assist with grading, creating syllabi, and developing ideas to boost classroom participation.

However, where there is good, there is naturally downfall. Because ChatGPT continues to generate the more it is engaged, a student could use it to yield a complete research paper. However, these AI tools do not reliably craft citations. Thus, any professor will give much academic shade to such non-sourced work. After all, the point of a research paper is to discern how well one has engaged scholars who agree and disagree with a declared thesis. The “P” in ChatGPT could stand for “plagiarism.” Additionally, ChatGPT does not guarantee accuracy, nor explain the source of its information. Thus some models provide anachronistic information or can only refer to events or topics up through a specific period or year. Occasionally, what these AI tools proffer is incomprehensible. There is more. My point here is to start the conversation.

Standing on the cusp of another year in the hallowed halls of academia, the question of whether to AI or not to AI is a critical one. AI has been around in some form or fashion for decades, and it is not going away. Dare I say students probably know more about its use than professors. Yet all is not lost. To lessen any angst or disgust, take a free course. There could be a way to integrate ChatGPT or the like into one’s classes. Professors could use it as a teaching tool to pinpoint improper citation methods and point to inaccurate information, then pivot to sound research methods and personalized class assignments which cannot be “generated.” Again, there is more. Here’s to starting the conversation. Actually, here’s to continuing the dialogue, as the ChatGPT train has already left the artificial intelligence station.

For many of my students, voicing themselves is not the issue. Mostly, they are terrified of revealing their authentic voice. Although I empathize with them, my role as teacher obligates me to keep them faithful to its discovery, even if their authentic voice is not a fixed thing. Hence, I approach this pastoral work with the understanding that their respective voices are dynamic and ever-changing, not along a linear trajectory, but within the messy cycles of their precious lives. In this way, my role as teacher is to inspire them to discover a self-authenticated speech power through which, when activated, they can proudly hear and see themselves.

And yet, despite my good efforts, I inevitably have students who have despairingly succumbed to the notion that their voice is not worth hearing. Achingly, such devaluing of self is often the result of a combination of adverse and even abusive social, political, and economic forces that they have encountered over the course of their lives. For other students, the issue is not worthlessness but often anxiety in hearing those facets of their voices that expose their privilege and cultural bubble.

To cope with either one of these voice-silencing impulses, students usually opt for a masking scheme that I call the plagiarized voice. Here, the goal is to disguise their authentic self, with all its flaws, foibles, and flavors, by appropriating other people’s authentic voices. Beyond simply quoting word for word from another author without giving proper credit, which is a dishonest academic practice resulting in dismissal, students assemble a composite voice using a default collection of words and phrases. There is also the habitual deployment of go-to concepts and argumentation that are indiscriminately plugged into their writing. The entire production points clearly to the concealment of their authentic voice.
Another giveaway to this masking tactic is when students carelessly employ—whether in their writing assignments or class participation—trendy keywords, particularly the kind that they think would trigger a better grade from me. Knowing that my scholarly voice reveals an ethical commitment to the themes of “postcoloniality,” “migration,” “trauma,” “empire,” “violence,” “colonialism,” and “marginality,” students surmise that by sprinkling their writing with these words enough times a higher grade will magically appear. It is not that these themes are off-limits in students’ writing, for I teach that they are critical to understanding the cultural, social, and political instincts of the writers of the Hebrew Bible. Furthermore, these inscribed instincts belong to the sacred otherness of the biblical text, and hence we should resist colonizing their language and cultural points of view.

At issue here, however, is when these themes and concepts do not cross-pollinate with a student’s authentic self in ways that show their close reading, creativity, and courage. Indeed, the lived experience with imperial violence that permeates much of the Hebrew Bible requires attentive and respectful listening, such that the felt pain behind the text is not diluted by sanitized scholarly jargon. If pain and suffering are common to all humans across time and space, then students should be able to hear themselves in the stories and poetry of the Hebrew Bible. Likewise, theorizing on ancient forms of colonization also requires a spectrum of diverse dialogue partners, some with expertise in literary analysis and others who write about imperial violence from first-hand experience. In my view, inviting a mixed range of reading partners (preferably in nonhierarchical fashion) is a vital pedagogical practice that allows my students to fine-tune their authentic voice.
In other words, the aim is not to harvest a new collection of catchy phrases but to hear oneself in relation to a rich variety of other voices. For example, when students encounter a rape scene in the biblical text—troublingly, there are many to choose from—how do they respond to the expert critics who gloss over this egregious form of violence? Or if students hear from authors who write about their first-hand experience with exile, does their testimony strike a chord in ways that could help students better exegete the biblical text?

Notably, teaching future pastors the Hebrew Bible at a Protestant seminary is in many ways about authentic voice, specifically the divine voice. It is this pursuit of authenticity that my students are expecting—and frankly pay—to receive. The divine voice is not something that can be reproduced through generative AI, for the authenticity of God is revealed through the realms of faith. And in the end, they will be practicing this pursuit each time they take the pulpit. For it is here that their congregants gather, expecting to receive from them an authentic word from God. As a teacher, when I think of this sacred moment, liberating students from the plagiarized voice becomes even more urgent.