Educators looking to weed out plagiarism among students face an impossibly difficult new technological challenge.
Originally published by the Sydney Morning Herald.
Our online feeds are abuzz with the astonishing capabilities of a new AI program from OpenAI called ChatGPT. What it can do is remarkable! It can write essays, debates, and answers to short-answer questions. For the lazy university student, it can be deployed to complete essay assignments. They can simply enter the topic, state that they want academic references included, nominate the referencing style required by their university, and in around a minute their assignment is done and ready to submit to their lecturer for marking.
Content created by this program also evades most current plagiarism-detection software, on which universities rely to weed out cheats.
One student told me in a research interview how happy he was to be at university at this exact point in time, because he could use this AI program to complete his assignments, knowing that it usually takes universities two to three years to catch up with new technologies. He said that by that time, he would have graduated.
There are so many issues at hand here. This student was studying to be a teacher. If he doesn't complete his assignments himself, will he have the knowledge needed to teach? What if he had been doing a medical, law or finance degree? The issues are similar. There are obvious negative implications for the industry, clients, and workforce.
Large-scale "someone doing your assignment for you" problems no doubt impact Australia's economic competitiveness. But what if the world's students were doing this? What are the global implications? This issue isn't restricted to universities; it's relevant to school learning, and to activity across almost every kind of industry and workplace.
As someone who studies our relationship with technology, I'm in awe of the capability of this new technology. But as a university lecturer, it's got me asking: "What is important knowledge?" And, moving forward, what can (and should) we get AI to do for us, and what will (and should) remain human thinking and output?
We cannot hold back the tsunami of AI that’s heading our way. While we may think of students as cheating if they use AI to do the assignments for them, can we also think of them as strategic, as making effective use of the resources they now have at hand? Which person would you employ? The crafty one who understands the capability of AI? Or the one who relies on old methods?
As universities gather in the first semester to discuss this turn of events, and how to reshape students' assessments so we know the work is their own, this can only be a momentary pause before the real issue: what is important knowledge today?
Misinformation, disinformation, fake news, and content dressed up as real and factual have been eating away at us for a while now. All of us have, at some point, believed something online that was not true. AI is not perfect either. When I proofread ChatGPT's reply to an assignment question I fed it, what I read was good and would earn a pass mark. But it had limitations. I would have marked it 60/100. The content was written in a readable and coherent way, but because I knew the subject matter well, I noticed some errors. It was not all correct.
Against this backdrop, and across the research I have been doing for many years on our relationship with technology, the knowledge that is valuable today is the ability and skill to distinguish the real and factual from the rest.
It's not about denying the fact that we live in the world that we do. It's about learning how to work with the not-quite-right, unreal, fabricated, and downright wrong information that we come across multiple times a day, every day. It doesn't mean life is doomed. It just means that, because of the onslaught of ever-innovative technology, life is changing, and the knowledge we need to develop also needs to change.
From this time onwards, university students, school students and every user online cannot be sure who or what created the content they engage with. Work reports, assignments, social media content, news stories, doctors' reports. The list goes on. We should not be afraid of AI, because it is not going away. Rather, we should seek to shape and advance the knowledge we need to prosper in this environment.
As for the students who think now is the prime time to be at university, if we are slow to change and respond, then they’re right.