Writing in the Age of AI
Susanne Hall. Photo courtesy of Susanne Hall.
Teaching professor and Hixon Writing Center director Susanne Hall on how AI tools and policies are impacting writing instruction at Caltech.
By Omar Shamout
Susanne Hall, a teaching professor of writing at Caltech and director of the Institute’s Hixon Writing Center, teaches undergraduate writing courses and helps support and improve Caltech’s writing curriculum. She spearheaded the creation of the Division of the Humanities and Social Sciences’ policy on generative AI, and she advised on the Institute’s guidance on the use of generative AI and large language model tools, which was released in October 2023.
Hall’s interest in publication ethics led her to join, as a co-principal investigator, the first major study of text recycling: a multi-institution, NSF-funded initiative investigating the reuse of one’s own writing in STEM research. Here, she discusses how the HSS policy impacts students and faculty, and how generative AI tools are changing the collegiate learning experience.
How did the HSS AI policy come together, and what was your role in its creation?
After ChatGPT came out in the fall of 2022, I first focused on how this technology would affect teaching and learning in the HSS division, where my expertise is in writing, reading, and critical thinking. I serve on a writing committee with two humanities faculty members, so I started talking with them about this, met with the humanities group, and met with the social scientists. It was clear in those early conversations that people had very different ideas, different comfort levels, different areas of concern, and different things that they might see as opportunities or be excited about.
Like many institutions, we ended up with a policy that empowers faculty to determine the learning goals of their courses and consider how students’ use of the tools might enhance or impede learning. By June 2023, our division had created and approved a policy. I was also part of the group that advised on the campuswide policy.
How would you characterize the HSS guidelines?
Two years later, there are a lot of different contexts for our work. We essentially communicated to students that for any use of these tools, the faculty member running the class needs to say, “This is allowed.” So, instead of assuming you can do whatever you want until someone makes a rule that says you can’t do it, you should assume you can’t use the tools unless told otherwise. This is important, because if we are allowing students to use these tools, we also need to provide instruction on how to use them effectively and ethically, which is often more complex than it seems at first.
There’s also internal guidance attached to the policy that articulates for faculty a few aspects of these tools their course-specific policies should address. For example, if an instructor really doesn’t want students to look at the AI search results in Google as part of a research process, they need to explain that expectation, why it matters, and how students can actually avoid seeing the AI search results if they don’t want to see them.
Have you found students receptive to the guidelines?
My experience with our students is that they’re here to learn. With students who have such tremendous capabilities, curiosity, and creativity, I don’t find that it’s hard to convince them when you take the time to explain it: “Here’s a learning task I want you to do, here’s the way I want you to do it, and here’s why I want you to do it this way.” It’s quite rare that a student rejects that.
A challenge is that our students are tremendously busy, and sometimes they are interested in using generative AI tools because they believe it will be efficient. As a result, there’s a new imperative for faculty to make the case to students for what they get out of an assignment when we ask them to do it in a way that might be more labor intensive. We know that the work is where the learning happens. This situation gets us reflecting actively on, “What is the point of this assignment? Why do I think this is worth 10 hours of this person’s time, and how do I help them see that it will be?”
How does the Caltech Honor Code come into play?
A lot of the discourse around academic integrity and generative AI sort of assumes a hunt for cheaters and for students who are trying to avoid doing the hard work of learning. Again, I don’t think students who want to avoid the hard work of learning choose to come to Caltech very often. You can’t avoid it here.
With the Honor Code, I think there’s a distinction to make. First, what do you do when a student is clearly violating a course policy and it’s an academic integrity issue? We refer those cases to the Board of Control. But if it’s a misuse of the AI tool that’s not an ethical violation, then it’s an opportunity for learning.
I do think faculty are aware that students are juggling many varied expectations for their use of these tools across coursework, research, and extracurricular activities. That can be a lot for students to manage, and we can help them do this. Beyond navigating policies, we have to teach our students to be problem solvers and flexible thinkers, and train them to be ethically engaged in whatever research or careers they pursue beyond their work here. Hopefully, they will learn to recognize that there are good reasons why people have different opinions or different standards for whether, how, and why generative AI tools should be used.
Because faculty members have the ability to set their own course policies surrounding AI, how does that affect the work of the Hixon Writing Center and how staff interact with students?
As a team, we’re talking about it frequently. Because there are no universal policies, writers face different situations, and a student may not fully understand the policy. And because it does have academic integrity implications, it’s really important.
Sometimes, what we’re helping students see is that they haven’t been as aware of course policies as they should be, so then they go back to the professor and check in on what those policies are. Although we were always looking at writers’ individual contexts before, it’s more complex now: How do you help a student write well who is allowed and encouraged to use AI to brainstorm, to organize, to revise, to do library research? That may end up looking really different from how you help a student who isn’t allowed to use those tools. That’s the kind of skill building we’ve been working on for our professional staff and our peer tutors so that they’re equipped to guide students through such scenarios.
What are some of the ways faculty are incorporating AI tools into their courses?
One person who’s doing it in really interesting ways is Adam Blank [teaching professor of computing and mathematical sciences], who teaches some of the introductory computer science courses. You would think that Adam would be confronting a completely different set of considerations from the ones I’m thinking about because they are teaching coding, and I’m teaching academic writing. But Adam and I have found a lot of points of overlap. There’s a foundation of critical thinking, knowledge, and disciplinary awareness that you have to build before you start bringing in these tools as significant collaborators in your work. If you jump into relying on generative AI tools too soon, then really all you’re doing is being an operator of the tools. And if all you can do is run a tool, then you have very little potential value as a researcher or potential employee, since agentic versions of the tools are likely to replicate what you can do. That’s not going to be a viable path. Adam is very thoughtful about how we help students build a foundation of knowledge and critical thinking, while not doing it in a way that pretends we’re still in 2021.
My colleague Frederick Eberhardt [professor of philosophy] teaches a very interesting course on AI ethics. Students in that course are encouraged to use AI in some contexts and discouraged in others. They’re exploring these very questions of how large language models work and how they shape how we think and comprehend the world. So, it makes sense in that context to have students using the tools in certain assignments and then thinking critically about what their use of the tools meant to their learning.
Our Center for Teaching, Learning, and Outreach has done a nice job of offering faculty a number of ways to engage with one another on this topic. The center recently convened a panel of lightning talks, where faculty from across the disciplines presented on varied ways they are using or teaching students about these tools in their courses, and I learned a lot from being a part of that event. They are helping faculty, including those who teach core curriculum courses, look to the future.
Do you allow students to use AI tools in the courses you teach?
Each spring, I teach a class on writing narratives about STEM research. It’s essentially about memoir writing by scientists and engineers. We also look at TED Talks. We ask: How and why might we tell first-person stories about our lives in STEM? In that class, my policy is that if students want to use a generative AI tool, I’m open to that, but they need to ask me about it first, so we can discuss it together. I’ve taught it twice, and no one has come to me and said, “I want to use these tools in my narrative writing.” I’m not seeing evidence in their writing that would indicate they are using them, and I don’t think they want to. So, it’s interesting to see that in a context in which students are writing creatively about their own lives, that’s something they want to do independently.
I anticipate that I’ll eventually incorporate some activities in that course where we experiment with the tools, but I haven’t done that yet. That kind of exploration and discussion would be valuable for them and for me. I want to be open to what students might be able to do with the tools and continually appraise the opportunities the tools create as well as what harms they may cause. This spirit of critical exploration very much characterizes the moment we are in. We have to expect that some of the first answers we reach about whether and when to use these tools will not be our final answers. The models themselves are evolving rapidly, and our understanding of how they affect student learning is expanding. I’m excited to continue to work with our students on exploring what is possible and where the dangers lie, and I take seriously my responsibility to protect their ability to learn and grow during their time in college.