Corporate Author. (Date). Program Title (Version) [Description of tool]. Web address.
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
"Description of the description given the AI Tool" prompt. Program Title, version, Corporate Author, Date, Web address.
“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
AI Tool, response to "prompt inserted here," Corporate Author, Date, [optional URL].
ChatGPT, response to “Explain how to make pizza dough from common household ingredients,” OpenAI, March 7, 2023.
As with other forms of academic misconduct, it is important to consider ways to mitigate misuse of AI tools during all stages of curriculum design and delivery. How can assignment design, course structure, assessment, and tests operate in a way that keeps things fair for students doing legitimate work?
Certain tools offer some capability to detect AI-written work. None is fully accurate: each may at times misidentify human text as AI-generated and AI-generated text as human.
"First and foremost, DO NOT rely solely on AI-text detection software to catch student usage. These tools are notoriously unreliable, providing large numbers of false positives and false negatives. The absolute best these detectors have performed is to correctly identify AI-generated-text 80% of the time. That means it’s wrong on one paper out of every five it looks at!" (source) Be aware that false positives can affect non-native speakers of English especially potently.
You may find this article helpful in identifying other ways to detect AI-generated text: Detecting AI-Generated Text: Things to Watch For
All AI detection tools perform imperfectly and may make mistakes. It is often a good idea to test a suspicious submission on multiple platforms to see whether there is consensus, though even this can sometimes be misleading. Bring in context and other evidence when making such judgements.
As AI tools like ChatGPT have gained widespread attention and utility, it is wise for professors to directly address how they should be used in their courses. Should they never be consulted? Used only for research and not for writing? Used only for light editing? Used in any way the student deems useful? Different classes will require distinct policies. Professors can reduce uncertainty and level the playing field by making their course's policy toward AI tools explicit, addressing it directly in the syllabus.
If you teach a course where AI tools have distinct applications beyond aiding essay writing, such as Visual Arts or Computing Science, it may be worthwhile to discuss how the policy relates to your type of coursework specifically.
For sample syllabus language on ChatGPT and AI tools, feel free to consult the examples from other universities below: