APA : Corporate Author. (Date). Program Title (Version) [Bracketed description of tool]. Web address.
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
"Description of the description given the AI Tool" prompt. Program Title, version, Corporate Author, Date, Web address.
“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
AI Tool, response to " prompt inserted here," Coporate Author, Date, [optional URL].
ChatGPT, response to “Explain how to make pizza dough from common household ingredients,” OpenAI, March 7, 2023.
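If you need to produce several of these citations, a short script can help keep the three templates straight. The sketch below is purely illustrative: the function and field names (author, title, version, date, url, prompt) are assumptions for demonstration, not part of any style guide, and you should always check the output against the style guide itself.

    # Minimal sketch: formatting an AI-tool citation in the three styles
    # above from one set of metadata. Field names are illustrative only.

    def apa(author, year, title, version, url):
        # APA : Corporate Author. (Date). Program Title (Version) [Description]. URL
        return f"{author}. ({year}). {title} ({version}) [Large language model]. {url}"

    def mla(prompt, title, version, author, date, url):
        # MLA : "Prompt" prompt. Program Title, Version, Corporate Author, Date, URL.
        return f'"{prompt}" prompt. {title}, {version}, {author}, {date}, {url}.'

    def chicago(tool, prompt, author, date):
        # Chicago : AI Tool, response to "prompt," Corporate Author, Date.
        return f'{tool}, response to "{prompt}," {author}, {date}.'

    print(apa("OpenAI", 2023, "ChatGPT", "Mar 14 version", "https://chat.openai.com/chat"))
    # -> OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat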
Generative AI tools like ChatGPT can be extremely useful in the research process. Consider using them as a...
1) Research Assistant : One of the most useful ways to think of ChatGPT and similar tools is as a research assistant. You can direct them to sort through large volumes of documents and information and bring the most relevant parts back to you. They will sometimes misinterpret you, but that lets you clarify your directions and send them right back out again. Sometimes they will miss things or reach entirely incorrect conclusions. Like a research assistant, they will do most of the work, but you're their supervisor: verifying the information they bring you and spotting obvious gaps are your responsibilities. You're in charge; they're just your assistant.
2) Sounding Board : We've all had an experience where we were stuck: not sure what keywords we needed to search next, trying to figure out a new angle of analysis, or uncertain about how to organize a set of ideas. One of the most common things to do in such situations is to explain it to a friend or colleague. Sometimes just the act of explaining it will trigger new ideas. Sometimes the person you're talking to will offer insightful questions, tips, or directions on what to explore next. ChatGPT and other generative AI tools with chat functions can be extremely helpful in this role, acting as a sounding board for your ideas. Again, the responsibility for judging their ideas is yours.
These tools also have significant limitations that you should keep in mind.

Made-up Information : Many AI Tools, such as ChatGPT, occasionally generate entirely fake information that they confidently assert is true. Programmers call these "hallucinations": they can be citations or facts that do not actually exist. If you are going to rely on a piece of information found in an AI Tool, it's generally a good idea to confirm that it is attested elsewhere. This simple process of double-checking facts against external sources can assure you that AI Tools have not led you astray, especially on critical matters.
Data Bias : Like any tool dependent on data, most AI Tools inherit the biases of the data they were trained on, much of which is Western in perspective and English in language. For instance, if you ask ChatGPT "What is the highest-grossing movie about the Korean War?" it might respond with M*A*S*H instead of The Battle at Lake Changjin (a far more profitable Chinese film), because it has consumed considerably more English-language sources than Chinese ones and assumes its audience is more interested in Hollywood films.
Lack of New Information : Many AI Tools like ChatGPT are based on large language models, which tend to have limited knowledge of recent events or of any information they were not trained on during their development. Asking questions about news or recent events is likely to be unfruitful, and the information provided may also be out of date.
Data Privacy : Many AI Tools use the information users provide to further their own development. Accordingly, it can be risky from a data-privacy perspective to give them sensitive or private data. This is especially true when you have been entrusted with the safekeeping of other people's private data. Most generative AI models also keep their data on foreign servers, which can create regulatory risks in some contexts.
Moral Constraints : Many AI Tools limit the capabilities of their applications to prevent uses the company believes are immoral or sensitive. ChatGPT, for instance, will refuse to help if it thinks a user request involves hateful stereotypes, criminal activity, academic misconduct, advice on suicide, sexual content, medical prescriptions, or stock shorting. While many of these are admirable goals, ordinary users with good intentions may find that they disagree with the values the AI Tool applies, or that its guardrails prevent uncontroversial work as well.