Using ChatGPT as a training buddy to defend a PhD thesis

| Rense Kuipers

Stefano Nicoletti, who defended his PhD thesis on Tuesday, prepared for his defence in a special way: having ChatGPT pose as the committee. The researcher at the EEMCS Formal Methods and Tools group talks about the preparations, benefits and concerns. ‘It can help you manage your anxiety.’

Photo by: Frans Nikkels
Stefano Nicoletti during his PhD defence.

How did you come up with the idea of using ChatGPT to prepare for your thesis defence?

‘A lot of the credit goes to my colleague Lisandro Jimenez Roa. We were talking about our defences, since his is coming up in February. He’s a big proponent of ChatGPT and we were discussing its advanced voice mode, which is available in the paid version of the app. So we were talking about how you could actually discuss the contents of our theses with an AI-trained voice asking you questions. Admittedly, that started as a joke. But later I turned the idea into reality to rehearse for my defence.’

How did you go about it?

‘I gave instructions about the moderation style, so ChatGPT knew about the setting and the expected behaviour of a committee. Then I uploaded the introduction of my thesis and background information about the committee members. I also added comments the committee members had already given on the thesis. With all that information, I asked ChatGPT to play the role of the committee and ask questions for 45 minutes.’
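For readers who want to try something similar, the steps Nicoletti describes (moderation-style instructions, the thesis introduction, committee biographies and earlier comments) can be sketched as a small prompt-assembly helper. This is a hypothetical illustration, not his actual setup: he used the ChatGPT app's advanced voice mode, which is not scriptable, and the function name, file handling and wording below are all assumptions.

```python
def build_committee_messages(thesis_intro: str, committee_bios: str,
                             prior_comments: str) -> list[dict]:
    """Assemble chat messages for a mock PhD-defence session.

    The resulting list follows the common chat-message format
    (role/content dictionaries) and could be pasted into ChatGPT
    or sent through an API client of your choice.
    """
    # Moderation-style instructions plus the background material,
    # mirroring the steps described in the interview.
    system_prompt = (
        "You are the examination committee at a PhD thesis defence. "
        "Adopt a formal but fair moderation style: ask one question at a "
        "time, follow up on weak answers, and keep the session to "
        "45 minutes.\n\n"
        f"Committee member backgrounds:\n{committee_bios}\n\n"
        f"Comments the committee already gave on the thesis:\n{prior_comments}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": (
            "Here is the introduction of my thesis. Please start the mock "
            f"defence by asking your first question.\n\n{thesis_intro}"
        )},
    ]

if __name__ == "__main__":
    # Hypothetical file names; only publicly available material should
    # be used here, as Nicoletti notes later in the interview.
    messages = build_committee_messages(
        thesis_intro="(paste thesis introduction here)",
        committee_bios="(paste public committee biographies here)",
        prior_comments="(paste earlier committee comments here)",
    )
    for message in messages:
        print(message["role"].upper(), "-", message["content"][:60], "...")
```

The helper only assembles the prompt; how you deliver it (pasting into the app or calling a chat API) is up to you.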

'Some questions ChatGPT posed were literally repeated by the committee'

Since you successfully defended your PhD thesis last Tuesday, what conclusions can you draw?

‘It was a very interesting experience, to say the least. First and foremost because some questions ChatGPT posed were literally repeated by the committee – at least in the case of two questions. The other thing that struck me was that this way of practising helped me to manage anxiety. You never really know what to expect, but I felt more prepared, even for the unexpected questions.’

AI is also subject to controversy. Did this ‘experiment’ raise any concerns for you?

‘I think one important concern is what happens with the data you provide as input: where will it end up? That’s why I made sure to only use publicly available data, also with regard to the biographies of the committee members, for instance. I also asked ChatGPT to forget the input after the rehearsal. You have to be very cautious about that with AI and consider the motives of the companies that offer these kinds of tools. I’m a big proponent of privacy.’

Besides that, I can imagine these kinds of ChatGPT rehearsals can be beneficial to a lot of colleagues and students?

‘Besides being cautious about data, I’d say it can be very beneficial as a kind of tutor. You shouldn’t have ChatGPT do your work for you, of course. Take writing code: I don’t ask it to write my code for me, I ask it to help me figure out the right code. And in the case of presentations: yes, rehearsing using the advanced voice mode can help with overcoming some nerves. Though I also rehearsed in front of colleagues. We’re a high tech, human touch university for a reason, right?’

'What you now see are the big what-if questions about where AI is going. I think it’s maybe more prudent to ask the questions about the impact it already has on society today'

You conducted research into the safety and security of new technologies like self-driving cars and drones. Is there any overlap between that theme and AI?

‘It wasn’t necessarily part of my research, but I think it will be on our group’s agenda. For instance, we also focus on safety-critical systems, like power plants. How artificial intelligence will be used in those kinds of systems – or is already being used – is an interesting question.’

So if you were to look into a crystal ball, in what direction do you see the use of AI going?

‘I also have a philosophy background, so I’d always say that these kinds of instruments need to be developed with the advancement of humankind in mind. It has its advantages at the moment, but it was developed for a purpose – by private companies. So you have to keep that in mind when using it. It’s always best to include the people who it affects in the process.

What you now see are the big what-if questions about where AI is going. I think it’s maybe more prudent to ask the questions about the impact it already has on society today. Take the childcare benefits scandal (toeslagenaffaire in Dutch, ed.) for instance… These are complex systems already being put to use – with major impact.’
