Speak with your instructor before using generative AI tools, such as ChatGPT, to help complete assignments.
If your instructor has not specified that you may use ChatGPT or other generative AI technology, or has specifically stated that you cannot use these tools, then using them to complete any portion of your assignment will be considered academic misconduct.
If your instructor has permitted the use of generative AI tools, make sure you understand exactly what is permitted for each specific assignment.
GenAI tools, like ChatGPT, are very good at producing new content, from quick answers to short stories or even complete essays. Unfortunately, not all of the information these tools provide is correct or accurate. It's important to remember that GenAI does not understand the content it creates; it only generates text based on its training material and your input.
This page highlights some common areas of concern for GenAI tools. By understanding these limitations, you'll be better equipped to recognize when the content might be misleading, incomplete, or inappropriate for academic use.
GenAI tools do not always include citations with their answers. When they do, they have been known to create citations to sources that do not exist (often the researchers are real people but the article does not exist, or vice versa). This is referred to as AI hallucination. AI may also combine real-sounding facts with unrelated contexts. All of this is incredibly risky in academic writing, where sources must be verifiable.
Not crediting the sources of information used and creating fake citations are both forms of plagiarism, and therefore breaches of academic integrity.
It's also important to remember that most generative AI tools do not have access to the full text of articles behind paywalls. This means they often have access only to abstracts and citation information for academic articles, including ones found through the JIBC Library and Google Scholar.
While the content generated by AI tools may sound accurate, it can be factually wrong; this, too, is a form of AI hallucination. Generative AI can also create fake images and videos so convincing that they are increasingly difficult to detect, so be careful which images and videos you trust, as they may have been created to spread disinformation.
GenAI is often trained on data scraped from the internet. This means it may have learned from biased sources of information and may generate content that reinforces stereotypes, omits diverse perspectives, or presents skewed information. Potential biases include gender, racial, cultural, political, and religious biases, among others.
AI content may be selective because it depends on its training materials to generate responses. Although GenAI often has access to huge amounts of information, it may not be able to access subscription-based sources. Content may lack depth and be vague, full of clichés, repetitions, and even contradictions. It may also oversimplify complex topics or provide incomplete overviews, especially in academic fields where depth and precision matter. This creates a risk of misunderstanding and shallow analysis.
GenAI creates content based on data sets. The information a tool has access to can only be as current as the most recent dataset it was trained on, not the dataset most currently available. Content it generates may therefore be limited to a specific point in time, outdated, or simply wrong.

Unless otherwise noted, this guide is licensed under a CC BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0 International License).
The information on this page was adapted with permission from KPU's Artificial Intelligence LibGuide, created by Ulrike Kestler.