Should lecturers use AI to grade papers?
JULY 13 — More and more educators are saying yes (albeit perhaps not out loud). I'm going to try to explain why it's not a problem ethically.
But first of all, some clarification is in order.
Setting aside the question of lecturers 'cheating' if they use AI to grade, not every form of assessment can be practically handled with AI.
If your assignments or exams are hand-written you very likely aren't going to be using ChatGPT.
Not unless you scan the papers one by one and upload them all to the system, which is not impossible, but if you're expected to write marks or notes in the margins it isn't going to be feasible.
Also, if the assessment is multiple-choice (amazingly, this format still exists in undergraduate studies), you'd probably be better off getting your assistant, a family member or an optical mark recognition (OMR) system to do the grading for you.
More reliable and less hassle on your part.
What about soft-copy assignments? Sure, in that case Copilot may help. But what about the grading rubrics? You're going to have to tell the AI what counts as a Distinction, a Pass and so on.
And are you confident of the nuances the system picks up? You know, often the difference between one assignment getting two or three marks more than another is the way the student explained something.
Would AI share your appreciation of these subtleties? This also assumes the AI grades accurately and makes none of those nasty mistakes we all know occur more than occasionally.
Another issue arises if students later ask you why they received only this or that (usually low) grade.
How are you going to explain the grade variations if AI read their papers on your behalf?
So, am I suggesting there's no value in using AI to grade papers? Not at all.
I think AI can be very helpful if you're required to read and give feedback on a 100-page post-graduate dissertation.
I honestly cannot see the harm in running the paper through Grok and getting some instant criticism and commendation before running your own human eyes over it.
If you've attended the average Proposal Defence or Viva Voce session, you'll notice that most of the issues raised sound the same anyway: insufficient evidence when making an assertion, faulty research methodology, lack of citations on theory or history or what-not, glaring biases, numerous spelling and grammar mistakes, and so on.
These are all valid issues and can all be picked up by the likes of Gemini and, frankly, I can't see an ethical problem if an examiner is helped along the way by AI, especially if he or she is required to read and review half a dozen 100-pagers within a month.
One could argue that by 'freeing' the professor from having to note down all these nitty-gritty errors and shortcomings, AI could thus enable him or her to provide 'deeper' and more profound theoretical criticisms and/or feedback.
So, to go back to the original question, my answer is 'Yes, sorta'. Lecturers should be able to use AI to help them grade (some!) papers the way graphic designers use Adobe Illustrator to beautify their work or how Grab drivers use Waze to help get them to a destination faster.
It gets less practical if your class has a hundred students but it can't hurt if your students are fewer and the work submitted is 'bigger'. Just gotta get the right balance, I suppose?
* This is the personal opinion of the columnist.