UW professor looks for ways to make the ethical best of AI-enhanced learning

When generative artificial intelligence, or AI, dropped into our lives two and a half years ago, educators around the world went into a panic. Suddenly there was a chatbot, easily accessible to students, that could help them cheat on tests and assignments and likely get away with it. Fast forward, and now it's students who are complaining: their teachers are using AI, and they don't like it. Do they have a point?
Katy Pearce is an associate professor in the University of Washington's Department of Communication. She researches social and political uses of technologies and digital content. KUOW’s Kim Malcolm talked to her about how students and teachers are navigating their use of AI technology.
This interview has been edited for clarity.
Kim Malcolm: You were quoted in a New York Times piece recently about how college professors are using AI in their work. One student complained when she realized her professor had used an AI chatbot to grade her essay assignment and offer some “really nice feedback.” Is that a fair complaint from the student?
Katy Pearce: Well, I think it is a fair complaint, because students and their parents are paying a lot of money for their education. When this instructor used AI to generate feedback on an assignment, it violated what the student expected to happen, so I understand why she's upset.
I'm curious what the rules are for students using AI at the University of Washington.
At the University of Washington, and I think this is true for most universities and colleges, there is no university-wide policy. I think that's a good thing, because instructors should be allowed to decide what sort of policies they want within their own classroom environment. With that said, there is a wide variety of policies, and this can sometimes be confusing for students.
This was more of a problem a couple of years ago than it is now. In my own courses, I have pretty liberal AI use policies compared to some of my colleagues, and when I first adopted them, students couldn't understand, because the message they were receiving from many of their other instructors was: absolutely not, none of this is allowed, AI is bad, don't use it. I think now there's more of a range of use, so varied policies aren't as challenging for students to understand, but certainly a lot of instructors have a lot of different rules.
How are you using AI as an instructor and a professor?
I use AI a lot in my classroom environment, and I am transparent about that with students. One example: for a lot of my larger assignments, I've created chatbots. It's not just "go on to ChatGPT and ask for feedback." Instead, I train the chatbot on years of my past feedback on that exact assignment, or a variation of it. Now students can go to it at two in the morning, which is when many of them are working on their projects anyway, and ask for feedback that really approximates the feedback I would give.
The other thing that's really nice is that the chatbot is never tired. It's never exhausted by what it's been asked, so students can ask again and again and again. I've found that students who might not be inclined to come to office hours or ask for help will use that chatbot quite a bit. Students have also told me that they appreciate that what they're turning in has already, in a sense, been checked by me. To be sure, I would never use this to replace my own feedback; it's a step in between a student's draft and what they hand in to me. So that's one way I use it.
We do constantly hear how quickly things are evolving and developing in the world of AI. I'm wondering if you and your colleagues are seeing a clear path for using AI in a positive way that's going to benefit these students, and society, when they graduate.
I think, to be fair, not all instructors are on the same page about this, which I understand. There's a lot to be apprehensive about, but as educators, we want to prepare students for the world they're going to enter. And the fact is, the working world today is very different from the one that any of us who are teaching entered.
For me, I look out at my students, and I think back to when I was in college, "back in the late 1900s," as my students like to say. The jobs that my friends and I were applying for, or were excited about taking, those sorts of entry-level jobs, much of that work is done entirely by AI now, and I am worried for my students that a lot of jobs at that level just don't exist. So I tell my students very openly that in every single class session I want us to be working on skills that give them things they can present to potential employers, things they are good at that AI, at least for the foreseeable future, cannot do well.
Ethical decision-making, collaboration, creative problem-solving, being conscious of differences among people coming from different cultures and different backgrounds: AI can't do that. So every single class period we're working on those skills, because I want them to go into the workforce able to compete with AI.
I tell them I am really worried for them, because we don't know what things are going to look like, but I think that the kids are all right. They are learning to use these tools in creative ways. And I think that as educators, giving them good ethical frameworks about what is and what is not appropriate use will serve them well, no matter what things look like 5, 10, 15 years from now.