Despite its myriad flaws, AI is on the rise, both in society and in academia. With it come new ways for students to defraud their way to a degree. And thus, the university thought, some advice to staff might be in order.
The only problem is that the advice they came up with is a complete mess. Almost right off the bat, it asserts that potentially all software may contain AI — which is preposterous. They also copied verbatim a subtle, nuanced, and narrowly tailored suggestion from a publisher and turned it into a blunt and overbroad disclosure requirement. Do I now have to disclose the use of my hard disk storage controller’s firmware? It could, after all, contain AI, right?! So much for clarity. Anyone with even a vague idea of how computers work knows that what this document recommends is absurd. If you put this in a policy recommendation with a straight face, and don’t even put your name on it, I cannot do anything but disregard it out of hand. I just can’t take this seriously.
But what is perhaps more troubling to me is the lack of critical, legal, and philosophical thought. Fraud is something of every age and every generation. But because AI is new, people immediately reach for drastic measures, even though there are many legitimate ways in which it could be used. The by-now clichéd spell-checking and editing assistance come to mind, but so do classification and modeling. These are either incidental uses, or they are disclosed naturally in the course of reporting one’s work.
So let’s do that thinking now. A misdeed often consists of two parts: an action and an intent. It is not the tool that matters, then, but in what way and with what intent it is used. What we actually want to prevent is not the use of AI per se, but its fraudulent use. During an exam, we want a student to lay bare their thoughts for evaluation of their knowledge, understanding, and skills. To defraud is then to knowingly and intentionally subvert an accurate assessment thereof. What we should ask is whether a student subverted their examination, and whether they did so with the intent to defraud. If so, then it is fraud. This need not be hard.
You may have noticed that I lifted some of the wording from the students’ charter — fraudulent use of AI is thus already covered! I understand that, given that it’s a new technology, answering those questions can be hard — and we still need to thwart those ‘smart students’ trying to ‘unintentionally’ commit fraud. But what we don’t need is yet another set of additions to the list of vaguely worded, shoddy, and arbitrary fact patterns considered fraudulent. That way lies madness. Instead, we need guidance on how to understand fraud, both in general and especially in the context of these new technologies.
Let’s not pretend this will be easy. But only if we, students and staff alike, learn to think critically about what fraud is can we stay true to our pursuit of knowledge. Putting in the work will keep us all honest.