A few weeks ago, student columnist Jenna Zaagsma threw a stone into the pond with a column entitled #FreeTheExamBoard. Marleen Groenier, chair of the Technical Medicine examination board and of the assembly of all examination boards, responded to that column straight away. 'When we talk about generative AI, much of the discussion is about how we expect students to work with it, or how scientists can polish their publications or make their grant proposals sharper and more promising. But the perspective of the examination board is missing from that whole story. Because "save the examination boards from generative AI" – that is exactly what it comes down to.'
With the arrival of ChatGPT, Gemini, Claude, Copilot and, more recently, the Chinese DeepSeek, generative AI has one way or another become part of many people's daily lives. Groenier outlines how problematic that is for examination boards. 'In my own examination board, it has been a standing item on the agenda for a long time, which says a lot. There are guidelines at the front end, and we see that teachers are working out what is and is not allowed when it comes to generative AI. But I don't think generative AI has found a proper place in our education yet. Of course, we are a learning organisation, but it strikes me that we have hardly made any progress over the past year and a half or so.'
'Every time another case comes in, we are at our wits' end' - Marleen Groenier
Distrust of every written text
In practice, if fraud is suspected, a lecturer reports it to the examination board. But the examination board can hardly do anything with such a report, says Groenier. 'There are plagiarism and AI checkers, but the detection accuracy of this software is rubbish. And then there is an arms race between that software and the generative AI software itself, which is always one step ahead. On top of that, a text that a student has written very well is sometimes incorrectly flagged as AI – or vice versa. At the end of the day, we can hardly judge it as an examination board. Every time another case comes in, we are at our wits' end. It is such a complex struggle that we can hardly declare a report admissible.'
She is familiar with a striking case in which a teacher – wrongly, as it later turned out – accused students of fraud with generative AI. 'That was a bachelor's assignment, so you can imagine that such a discussion gets very heated. The teacher eventually apologised. That case says something about the feeling that prevails at the moment: a kind of distrust that accompanies every written text that is submitted.'
Academic integrity
According to Groenier, there is no way to check this. 'Checking for abuse of generative AI after the fact is simply not doable. I think we need to look more at the front end and provide tools for designing assessment differently. Yes, I think there is definitely a discrepancy between what a teacher usually thinks they can offer and how we as an examination board are then supposed to assess whether or not fraud has been committed.'
With that problem in mind, what kind of 'policy' does the UT actually have when it comes to generative AI? Formally, the unauthorised use of generative AI is fraud; Article 6.7 of the student charter makes no bones about that. Moreover, if fraud can be conclusively proven, you move up the so-called 'sanction ladder'. As an academic institution, the UT attaches particular importance to academic integrity, and repeat offenders are punished more severely.
But when is the use unauthorised? Does that start with 'sparring' with ChatGPT, or with letting Microsoft Word automatically complete your sentences with a word suggestion? A year and a half ago, a working group 'AI in education at the UT' issued an advisory report. In a nutshell: teachers, indicate whether your students may use artificial intelligence in their assignments; and students, state it if you did not use AI for an assignment. But, as Groenier points out, the struggle has by no means disappeared. The advice is there, but there are no specific AI rules or guidelines. Apart from the UT-wide rule on academic integrity – which boils down to 'thou shalt not commit fraud' – it is up to the programmes and lecturers to set their own boundaries.
'It is extremely difficult for an examination board to fully assess the content, the assignments are often too topic-specific for that' - Francesca Frittella
AI hub and oral inspections
To help them with this, CELT (the Centre of Expertise in Learning and Teaching) launched an 'AI in Education' hub a few weeks ago. This is an environment in Canvas for teachers and programme directors, covering the fundamentals of AI in education, AI as a teaching assistant, AI literacy and AI in the context of assessment. 'This hub should help them respond proactively to developments in the field of AI,' says Francesca Frittella, one of the initiators on behalf of CELT.
She says she is familiar with the problems surrounding AI and assessment. 'I am also an external member of the Management Sciences examination board at the BMS faculty. It is becoming increasingly difficult to detect whether generative AI has been used, I can certainly confirm that. Especially in the case of take-home assignments.' Teachers are often surprised too, Frittella says. 'We have tried having certain assignments they had drawn up completed entirely by ChatGPT. Impossible, they thought at first, but the conclusion was that the end result would have earned a passing grade. That's a bit of a shock for them.'
One of the recommendations her examination board makes to teachers in cases of well-founded doubt: hold a so-called oral inspection to collect evidence before reporting the case to the examination board. 'It is extremely difficult for an examination board to fully assess the content; the assignments are often too topic-specific for that. The devil is in the details. I am familiar with a case in which the lecturer saw an inconsistency between the argumentation and the source that was put forward. The student used the source to support the argument, but it turned out to contradict it. The teacher realised that – precisely because of his in-depth professional knowledge. And the student eventually confessed during that oral inspection.'
Park assist
When does the end justify the means? In the context of generative AI: when should you let students use this 'lifeline', and when not? 'The context of the course and the assignment is very important,' says Groenier. 'Having to write a report or paper is very different from calculating bridge constructions or programming. Especially in the latter case, allowing the use of AI is very logical, I understand from the Biomedical Engineering programme.'
'It ultimately comes down to the question of which skill a student needs to master' - Pieter Roos
Pieter Roos, last year's senior teaching fellow in the field of digitisation and a former winner of the Central Education Award, agrees with Groenier's reasoning. His own subjects, such as fluid mechanics, mathematical physics and water systems, have no direct conflicts with AI, he says. But within his civil engineering programme, AI is certainly a recurring topic of discussion. 'It ultimately comes down to the question of which skill a student needs to master. You can say: I'm going to sideline AI completely. Or you opt for a more open attitude: we know that AI exists and, with that in mind, I want to test a student's skills in a specific field. You do this by looking very consciously at what AI can do and at the standards by which you assess a student. In any case, the development of AI should have implications for the learning objectives of a course and the end goals of a study programme.'
Roos sees an analogy with driving tests. 'Modern cars all have a park assist function. Are you going to tell the person taking their driving test that that button must not be pressed? Or are you going to say: we are not going to judge you on this specific skill, because that is what the park assist button is for? It is the same with AI in education. You have to make these kinds of decisions as a study programme.'
'Theft of valuable labour'
Or, as a study programme – and as a university – you could say: there is no place for artificial intelligence here at all. That is how associate professor of applied philosophy Nolen Gertz looks at the topic. 'The best defence is to teach students the added value of not using AI,' he says. 'I'm not so much concerned with policing plagiarism; it's about the quality of education. You can't guarantee that if you encourage the use of AI. We must do everything we can to prevent the use of AI and do everything we can not to promote it.'
'Generative AI is nothing more than automated theft of valuable labour. If you were to call it that, would you still use it?' - Nolen Gertz
Gertz puts forward numerous arguments against the use of AI. 'In a way, it is hypocritical if one programme encourages it and another does not. That sends mixed signals. And if we, as a university, say that we want to be sustainable and don't want to promote air travel, why should we promote the use of energy-guzzling AI? We have to be more consistent in our values.' Above all, Gertz thinks the term artificial intelligence is inappropriate. 'It's not artificial, it's not intelligent. ChatGPT is nothing but a word generator on steroids. Because it draws on all the work that people have done, it is actually nothing more than automated theft of valuable labour. If you were to call it that, would you still use it?'
Laziness
According to Gertz, universities go along too easily with the narrative of the big companies that say AI is here to stay. 'In this way we make ourselves redundant. I am concerned about deskilling. That touches on a university's very reason for existing: we are not here to teach students shortcuts to finish an assignment, get a grade – and ultimately obtain their diploma. We are here to actually teach them something, so that they can fathom, master and reflect on a topic. There are no shortcuts for that.'
It is the laziness that comes with AI that Gertz particularly resents. 'And the loss of the human aspect. If you used to ask a colleague for advice but now type a prompt into ChatGPT, what do you gain from it? And if you say it saves you time, then you might as well quit your job. Because you didn't become a teacher to work as efficiently as possible, did you? Just as you didn't become a journalist to let AI come up with the questions and write this article. Anything that is seen as a challenge is framed as something AI should save us from. But if you use it, you are doing yourself a disservice. Both as an employee and as a student.'
Responsible use
Frittella does not agree with Gertz that the UT should ban the use of AI. 'That would be at odds with the Executive Board's statement on this. AI is here to stay and we should teach students to use it responsibly,' says Frittella, who points to the term AI literacy. 'If you look at the European Union's digital competence framework, you can already see that the use of AI tools is regarded as a key component of information literacy. So if we want our students to be information literate, we should include AI in the equation. And considering the EU AI Act, which requires all employees of public organisations to have a fundamental level of AI literacy, I don't rule out the possibility that AI literacy will become a kind of compulsory course here at some point, just like the cybersecurity course.'
And there is still work to be done, she concludes on the basis of a recent survey. 'Of the 53 study programmes at the UT, only seven indicated that there was no AI in the curriculum, or that they were not sure. But when asked whether AI was applied systematically within the curriculum, 68 percent of the programmes said no. At the moment it is mainly individual teachers who make the choices. So there is certainly still a gap there.'
'Students want their piece of paper to be worth something; they attach great importance to the reputation of their study programme and diploma' - Pieter Roos
Frittella believes that artificial intelligence belongs at this technical university. 'Not so much to teach students how to use it, but to activate a critical mindset. When we say we are a people-first university of technology, you want the human perspective to be leading. Think of it this way: when someone graduates here, later becomes a hospital administrator, and someone within that organisation proposes introducing a new AI-powered tool for staff, you want that alumnus to be able to point out the possible challenges and make the right choices.'
Encourage, discourage, use responsibly... The stone thrown into the pond by U-Today's student columnist is still sending ripples across the water: who saves the examination boards? After all, wherever people study, academic fraud lurks as well. Frittella and Groenier say that the aforementioned AI literacy should go hand in hand with regulation. 'In the end, it remains a balancing act between trust and control,' says Groenier.
Sense of justice
And a UT-wide policy for AI? According to Frittella, teachers and programme directors feel a need for one because of the prevailing lack of clarity. Roos adds that it is desirable to create and apply rules and guidelines as 'locally' as possible. 'If you want to organise something somewhere, do it at the study programmes themselves,' he says. 'They understand the context, the field. The learning objectives and attainment targets also differ per programme. You absolutely have to take that into account.'
'At the moment, the attitude is: better something than nothing. The problem with generative AI is there, without a doubt,' Groenier repeats. 'But I don't have a solution at all. I would therefore like to call on readers: anyone who has an idea for a solution, please share it. Write an open letter, whatever.' Roos does not have the solution at hand either, but he does think students can play a role in coming up with one. 'Students want their piece of paper to be worth something; they attach great importance to the reputation of their study programme and diploma. Underneath that is also a sense of fairness and justice: they don't want everyone to be able to simply rush through parts of their programme. The vast majority of students are here with the best intentions, of course. So I certainly see opportunities to find a solution together with students.'
Perhaps one thing does remain clear: the Gordian knot that generative AI has tied the university into is anything but unravelled.