
Responsible Use of AI Tools

ChatGPT and other AI Tools can convincingly write like a human. They can help you brainstorm essay outlines, reword phrases, and write filler text. However, most writing assignments at UFV are meant to demonstrate your own writing skill. Always check with your professors about what role they see for AI Tools in their classroom. Some will allow more liberal usage, others might ban consulting them altogether, and still others may take a more nuanced position. It is always important to understand each professor's expectations so you do not inadvertently commit academic misconduct.


Risks of AI Tools

Made-up Information: Many AI Tools, such as ChatGPT, occasionally generate entirely fake information that they will confidently assert is true. Programmers call these "hallucinations". They can be citations or facts that do not actually exist. If you are going to rely on a piece of information from an AI Tool, confirm that it is attested elsewhere. This simple habit of double-checking facts against external sources ensures that AI Tools have not led you astray, especially on critical matters.

Data Bias: Like any tool dependent on data, most AI Tools inherit the biases of the data they were trained on, much of which is Western in perspective and English in language. For instance, if you ask ChatGPT "What is the highest grossing movie about the Korean War?" it might respond with M*A*S*H instead of The Battle at Lake Changjin (a far more profitable Chinese film), because it has consumed considerably more English-language sources than Chinese ones and assumes its audience is more interested in Hollywood films.

Lack of New Information: Many AI Tools like ChatGPT are based on large language models, which have limited ability to comprehend recent events or any information they were not trained on during their development. Asking questions about news or recent events is likely to be unfruitful, and the tool may supply out-of-date information instead.

Data Privacy: Many AI Tools use the information provided to them by users to further their own development. Accordingly, it can be risky from a data privacy perspective to give them sensitive or private data. This is especially true when you have been entrusted with safekeeping other people's private data. Most generative AI models also keep their data on foreign servers, which carries regulatory risks in some contexts.

Moral Constraints: Many AI Tools limit their capabilities to prevent uses the company believes are immoral or sensitive. ChatGPT, for instance, will refuse to help if it thinks a request promotes hateful stereotypes, criminal activity, academic misconduct, suicide, sexual content, medical prescription, or stock shorting. While many of these are admirable goals, ordinary users with good intentions may disagree with the values the AI Tool applies, or find that its guardrails prevent them from doing uncontroversial work.
