
Artificial Intelligence FAQs

The increased availability of AI writing and design tools has raised a number of questions for instructors and students alike. Recognizing the pace at which these technologies are developing, as well as the infancy of established best practices for AI in postsecondary education, the university’s current recommendations are provided below in the form of questions and answers.

Instructors who'd like to learn more about AI in higher ed are encouraged to review the Centre for Excellence in Learning & Teaching's new Teaching Resources page, join our community of practice (CoP), and/or explore the faculty-led video presentations that have taken place during our CoP Lunch and Learns.

Unless the instructor explicitly states that it is permitted, students should assume that using AI to complete assessments is prohibited. If students use AI to complete assessments, instructors may consider it 1) misrepresentation of personal identity or performance, 2) plagiarism, and/or 3) cheating.

For more detailed information, please see our in-depth resource: FAQs: Academic Integrity and AI Use at TMU.

Policy 60 (Section 3.1) defines academic misconduct as follows: "Any behaviour that undermines the university’s ability to evaluate fairly students’ academic achievements, or any behaviour that a student knew, or reasonably ought to have known, could gain them or others unearned academic advantage or benefit, counts as academic misconduct."

Additionally, Senate has approved the following changes to Appendix A of Policy 60, specifically under the category of "Misrepresentation of Personal Identity or Performance":

5.5. submitting work created in whole or in part by artificial intelligence tools unless expressly permitted by the Faculty/Contract Lecturer;

5.6. submitting work that does not reasonably demonstrate your own knowledge, understanding and performance.

The submission of false or fabricated material, such as data generated by artificial intelligence tools, may also be considered cheating under Policy 60, Appendix A, Section 3.7, "presenting falsified or fabricated material, including research results".

In either case, yes, but be mindful of a few things:

Tool (version). Prompt: “[prompt text].” Retrieved from [URL] on [date retrieved].

For example,

OpenAI (version 4.0). Prompt: “How should I cite text generated by OpenAI?” Generated at https://beta.openai.com/playground on December 6, 2022.

Not unless the student receives permission from the instructor.

Grammarly, Quillbot, ChatGPT, ParaphraserAI, DeepL Translator, Google Translate, OpenAI Playground

Follow the formal process for academic misconduct concerns. The heart of this process is a non-adversarial conversation between the student and the instructor, in which the instructor can assess the student’s understanding of their work and educate the student on appropriate versus inappropriate use of AI.

No. A number of detection tools are available, and a significant amount of research has been dedicated to watermarking AI-generated text, but the tools are unreliable and can raise privacy concerns.

For more on AI detection, see:

Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023). A Watermark for Large Language Models. arXiv:2301.10226.

Kirchner, J., Ahmad, L., Aaronson, S., & Leike, J. (2023, January 31). New AI classifier for indicating AI-written text. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text

Mitchell, E., Lee, Y., Khazatsky, A., Manning, C., & Finn, C. (2023). DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. arXiv:2301.11305.

AI can be leveraged to improve the quality of your feedback on student assignments. However, it's important to be aware of the data privacy policies of any external tools you intend to use and to make sure you're not entering students' sensitive information or personal identifiers.

The Centre for Excellence in Learning and Teaching will provide updates and resources for faculty through its new Generative AI page, and maintains a curated list of ChatGPT/AI resources.

The AIO provides presentations and workshops on AI Literacy, Leveraging AI in the Classroom, and Mitigating the Impact of AI on Student Assessment. If you'd like to request a presentation, please complete this form.

Additionally, a community of practice has been formed to share resources and discuss best practices. If you'd like to join the community of practice, you can do so here.

The AIO also encourages faculty to explore these tools, test them, and learn how they work and what their limitations are, so that they're better equipped to address these matters in the classroom.

Sample syllabus statement suggestions:

  • “If you submit work that doesn’t reasonably reflect your knowledge of the material and/or the skills being assessed, that work will be considered to be in breach of Policy 60: Academic Integrity.”
  • “The unauthorized use of Generative AI (e.g., ChatGPT, Quillbot, Grammarly, Google Translate) is prohibited and will be considered to be in breach of Policy 60: Academic Integrity.”
  • “Generative AI may only be used for idea generation or as a study aid, but not for the creation of submitted work.”
  • “Falsified citations or misrepresentation of source material will be considered a breach of Policy 60. You are responsible for the accuracy of the work you submit.”