AI versus the universities

    How can you test students if their work is being churned out by artificial intelligence?

    Lots of us think we can spot AI prose, with its perky, helpful, trying-hard tone. University tutors used to be able to spot it quite easily. But these days only the most naive students simply cut and paste whatever ChatGPT outputs into a document. Many will mess around with the text, asking for a different register, changing adjectives and verbs, reordering points and generally trying to confuse AI-detection software. It makes the job of a marker very difficult.

    Even so, university staff can often tell. “I send back work all the time now, purely because it’s clearly AI,” says one lecturer in an FE college in the south of England. How do the students react? “[They] range from irritated to embarrassed.” She doesn’t entirely blame them. “They’ve seldom seen actual books… AI access is merely an extension of their experience of learning, especially from the Covid period.” In any case, when a lecturer returns work, it still counts as completed, so there is little disincentive to cheat. The problem comes when the same students fail their oral assessments because they did not do the reading or the groundwork.

    Universities generally take a stricter approach than colleges, and may refer suspected AI work for further investigation. But such investigations are time-consuming and the cases hard to prove. Only a second or third offence would normally lead to suspension or expulsion.

    So many students now use AI – whether within the rules, to cheat, or somewhere in between – that universities are scrambling to work out how to respond. Moving to in-person exams and vivas (oral tests) would be a solution. But these are expensive, unpopular with students and would lead to some of them failing. That might be fair. But it is not what students pay for. Nor will it help struggling universities to bring in the tuition fees they need. 

    One top university recently toyed with bringing in invigilated exams but decided against it, on the grounds that the degrees were advertised as coursework-only. Besides, “the skills you’re testing are memory and how fast you can write”, says Cathy Elliott, a vice-dean at University College London. Neither of these, she points out, is going to be crucial in the next few decades. And wearable tech may soon make it hard to invigilate even traditional exams.

    Most institutions have attempted to be pragmatic about AI, recognising that students will probably try it and setting out what constitutes acceptable use (for example, giving it a title and research and asking for an essay structure, or asking it to explain an article that’s too difficult to understand), what is unacceptable, and what is acceptable but not best practice. “This is a generational tech change; you’re going to have to use it,” says Elliott. “We’re trying to think about how to use AI well and about what it’s going to mean for our subjects.”

    Some teachers are deliberately setting assignments that are difficult to complete using AI. They might ask students to fill in a grid by hand or write a very short critical review of several different texts.

    But if you raise the issue with students, “there’s so many differing opinions and no clear rules that you can come across as a Luddite”, says the FE lecturer. Many academics are nonetheless aghast at what is happening. In social sciences and humanities, they see AI as an existential threat to the value of a degree. What is the point of doing the reading or putting the effort into an essay when AI can take the strain? How will employers know who put in the work and who didn’t? “I’m actually quite sad and angry about all this,” the lecturer says. “Imagine a generation of wonderful young people being failed in such a monstrous way … the system will ultimately pass mediocre, AI-generated, inaccurate assignments, and then a job market simply won’t accommodate them with their lack of skills.”

    What do the students themselves think? “There is nothing I’ve submitted that even has an AI thought, but it must be tempting,” says one part-time postgrad at a London university. Two people on his course have been suspected of using it for a marked assignment. “One person who has additional needs used AI to come up with a very good idea, and I can’t tell if I’m annoyed or not. It is a bit frustrating: I’m not resentful, but I imagine lots of students who are really putting in the effort might feel very resentful.”

    And unfortunately, for some international students with very little English, AI or essay mills are the only way to complete assignments to the standard required. A different postgrad spoke of her frustration that Chinese students could not participate in seminars but were seemingly able to submit fluent essays.

    Ultimately, teachers are trying to appeal to students’ consciences and sense of honesty. Elliott teaches a politics of nature course and urges her students to avoid AI because of the large amounts of carbon and water it requires. “But if you do, I ask you to write for me how you’ve used it, and how it helped you or got in the way. The majority say, we’ve decided not to use it. We’re excited to learn, and we think we do better learning without it.” The London postgrad is also holding out, though he finds it quite interesting to see how Claude, ChatGPT and other LLMs come up with different responses. “I feel like I want to be testing myself. This is a lower-stakes situation for me” (he has a full-time job). “If it wasn’t, I’d probably be using it more.”

    You can be sure that not everyone shares these scruples, especially if they are working a part-time job or have other responsibilities that keep them away from the library. After all, who’s harmed when ChatGPT writes your essay? Only your conscience and your ability to synthesise complex ideas. At 2am with an essay due the next day, none of that seems very pressing. But at what point does the credibility of a university degree disappear? Discuss. Try not to ask Claude.
