Universities and colleges use tools to detect ChatGPT usage, but research shows these tools fall far short of doing their job well enough.
Artificial Intelligence (AI) and ChatGPT made a serious entrance into higher education in the fall of 2022.
Since then, the debate has largely centered on how universities and colleges should handle the new tools, especially the risk of cheating on exams and other submitted work.
In response to these problems and challenges, so-called AI content detectors have been developed, which claim to detect text generated by, for example, the chatbot ChatGPT.
Not accurate or reliable
Now, a research group made up of experts from several major European universities has concluded that AI detectors perform poorly, writes Times Higher Education.
“The available tools are neither accurate nor reliable, and they more often classify text written by ChatGPT as human-made than the other way around,” the research article states, among other things.
A related article from the project also concluded that students with language impairments or a limited vocabulary are disproportionately often wrongly accused by AI detectors of using ChatGPT when they have not done so.
Another research study, conducted by researchers at the University of Maryland in the United States, found that AI detectors can easily be circumvented by students with the right knowledge.
The research also shows that some AI detectors perform better than others, but the differences are relatively small overall.
According to law professor Michael Draper, who was involved in the research, higher education institutions should consider robust measures in light of the study's findings.
One suggested measure is that students should always account for how they worked and reasoned during the writing of the paper they eventually submit.
He stresses that AI detectors should continue to be used, even though they do not work well. He also suggests that technology could be developed to embed an invisible watermark in AI-generated text itself, which new and better detectors could then pick up.