Advice on AI needs to bring clarity: ‘Transparency most important’

| Rense Kuipers

Teacher, indicate whether your students are allowed to use artificial intelligence in their assignments. And student, state it if you have not used AI in an assignment. That is, in a nutshell, the advice of the UT’s ‘AI in Education’ workgroup, prompted by the breakthrough of ChatGPT and similar tools. Workgroup chairpersons Robin van Emmerloot and Kim Schildkamp explain the advice.

Your workgroup’s advice is intended to take effect in the new academic year. What exactly does it entail?

Van Emmerloot: ‘It can serve as a kind of guideline for our education, in response to the emergence of large language models such as ChatGPT. In the meantime, the first examination boards are already dealing with cases about whether or not students have committed fraud with the help of artificial intelligence. Our starting point was therefore to create clarity for both students and lecturers, starting in the coming academic year: in which cases is it permitted to use AI, and how should you account for that use? The most important concept here is transparency.’

The advice also says that students must state it if they have not used AI in an assignment...

Schildkamp: ‘That is correct. It is in line with a question a lecturer already raised: can the use of artificial intelligence be prevented at all in this day and age, even in a very mild or innocent form? For students, it should create an extra barrier: if you write down that you have not used generative AI, you should ask yourself whether that is actually the case. On the one hand, you can say that it should be common sense not to let artificial intelligence do your assignments for you. On the other hand, a student can also say: it is nothing but a tool for completing an assignment. What matters is that this is done transparently and with integrity in any case.’

And for the teachers?

Van Emmerloot: ‘We have set up a kind of traffic-light system for teachers and module teams. For each assignment or group assignment, it must be clear whether students are allowed to use artificial intelligence, whether only certain forms of AI may be used, or whether AI may be used only for certain purposes.’

Schildkamp: ‘Ultimately, that is the most important question a teacher should ask themselves: what are the learning goals for my students, and does the use of artificial intelligence fit with that? For example, if your learning goal is to improve a student’s writing skills, it makes perfect sense not to allow the use of AI. If you want a student to reason and reflect critically, you could, for example, ask ChatGPT to assist with this. However, it is important that the student clearly indicates how ChatGPT has been used for this.’

So a ban does not seem to be the intention at the UT?

Schildkamp: ‘No. Coincidentally, I have just come from a meeting in a UNL context where we also discussed this. What you see is that some universities deal with the subject fairly conservatively: if a student uses AI, it is seen as fraud or plagiarism. But you also have universities like the UT that say: it is a fairly new technology, and there is no point in trying to keep it out. So how can we ensure that both teacher and student use it properly? You can conclude that at the UT we aim for a human-centred approach to AI. The lecturer and student must be in the lead, not the technology.’

The interviewees

Kim Schildkamp is a professor of ‘Data-Informed Decision Making for Learning and Development’ at the ELAN teacher programme. Robin van Emmerloot is an educational advisor at CELT (the Centre for Expertise in Learning and Teaching) and TELT (Technology Enhanced Learning & Teaching).

What are the next steps to properly integrate artificial intelligence into education at the UT?

Schildkamp: ‘It is important that we work on the AI literacy of teachers and students. That is why we are now creating a magazine for teachers that brings together information about AI in education, from ethical considerations to inspiring examples. Moreover, we can do even more research into the applications of AI in higher education.’

Van Emmerloot: ‘CELT and TELT can support teachers. In addition, we must take equality of opportunity at the UT into account. You see that providers use certain revenue models, which will not be equally affordable for everyone. If the UT were to work with a provider, we would have to ensure that the technology is accessible to everyone.’

And what about the further future? Is there no way around artificial intelligence tools like ChatGPT?

Schildkamp: ‘No, I do not think so. And the arrival of ChatGPT did not completely take us by surprise either, I dare say. At the UT, we have been working on the subject for some time. Even before ChatGPT broke through, we already had an AI and Data in Education network under the banner of the Digital Society Institute, with almost sixty affiliated employees. There are still plenty of opportunities to properly apply AI in our education, for example in the automated marking of exams, as a feedback system, or as an extra teacher or student in the lecture hall. Digging in your heels does not seem to make much sense.’

Van Emmerloot: ‘In addition, it seems useful to me that study programmes ask questions that go a little further than ‘what do we do about possible plagiarism or fraud issues?’. Malicious students have always found ways to take advantage of situations and will continue to do so. The deeper question that must also be asked within a study programme is: in which direction is the professional field developing with the help of artificial intelligence, and how can the educational offering be adapted accordingly? That question goes beyond what we can do as a workgroup, but such a fundamental question will have to be asked within study programmes.’
