YSGPS Guidance on the Use of Generative Artificial Intelligence (GAI) in Graduate Studies
This guidance addresses common questions about the use of generative artificial intelligence (GAI) in graduate student scholarship, research, and creative activities, in accordance with TMU Senate policies. It aims to support graduate students, supervisors, committees, program directors, and faculty involved in graduate education.
GAI tools use predictive technology to generate text, images, audio, or other synthetic data. They have become more accessible and can produce high-quality results quickly. In April 2025, TMU launched Google Gemini, a licensed GAI tool for all faculty, staff, and students. Unlike free tools like ChatGPT, Gemini does not collect prompts for training and keeps interactions within TMU’s Google Cloud.
Most discussions about GAI focus on teaching, learning, and assessment in classrooms. For undergraduates, these debates are vital because they shape how curricula across disciplines integrate GAI. TMU has published Principles and Guidelines on Generative Artificial Intelligence in Learning and Teaching, along with resources on Generative AI and a 2025 guide on Gemini. At the graduate level, GAI has unique implications for research, scholarship, and creative work. In 2025, TMU struck a GAI Leadership Task Force to guide the university in “navigating the opportunities and challenges presented by rapidly evolving AI technologies” across all university functions, including teaching and learning, SRC activities, student experience, and administration and operations. Many Canadian universities have policies on the use of GAI tools, some of which are specific to graduate studies.
As GAI’s opportunities and challenges evolve, consistent guidance and principled approaches are essential. Using GAI tools in graduate research and related activities presents both promise and risk.
Disciplinary norms for the use of GAI in research are expected to continue to evolve. In the meantime, graduate programs and supervisors are expected to provide clear guidance to their students regarding the acceptable extent of engagement with GAI in writing and in scholarly, research, and creative activities. This guidance is crucial for helping students navigate the ethical and academic considerations of using AI in their work. The information in the sections below will guide programs in what to consider when developing discipline-specific guidelines or policies for students and supervisors on the use of GAI tools in their research. In addition, program faculty may find the resources provided below and the statements from other Canadian universities helpful.
Students should be informed about which uses of GAI are acceptable, and which are not, within their respective programs. Additionally, supervisors who are unsure about the suitability of a specific GAI application in research should seek clarification from program directors or Chairs/Directors to ensure compliance with program guidelines.
The Ontario Council of Graduate Studies’ Considerations for Graduate Research document includes helpful “Conversation Starters” for supervisors and graduate students on the potential use of GAI in graduate research, writing, and scholarship. Conversations between supervisors and graduate students on the use of GAI should be ongoing and should ensure a clear mutual understanding of authorized use and any limits on that use. The YSGPS Student-Supervisor Checklist will also be updated to include discussions about the use of GAI as part of the initial student-supervisor agreement and as part of regular check-ins.
While some graduate programs or supervisors may encourage the integration of GAI tools into research methods, they may also impose limitations on their application in other aspects of writing, such as papers and theses.
Graduate students considering the use of GAI in any aspect of their studies should understand its benefits and risks; some of the potential risks are highlighted below. Students must actively seek and document approval from their academic supervisors and programs before incorporating GAI into any aspect of their SRC and writing processes. This not only ensures academic integrity but also promotes transparency in AI use. Finally, full transparency about the use of GAI tools is required in published work, including MRP reports, theses, and dissertations.
Students must also bear full responsibility for any AI-generated content they choose to include in their thesis or major research paper (MRP). Additionally, the evaluation of their final oral examination is not based solely on the submitted written document; their performance during the oral examination is equally important. During this examination, students must explain and defend their use of GAI and the content within their thesis or MRP.
Moreover, students must demonstrate mastery of all program-level learning outcomes, which typically involve communicating ideas, issues, and conclusions effectively. These learning outcomes are designed to equip graduates with the skills needed for successful employment and clear communication. Overreliance on GAI tools can hinder the mastery of some learning outcomes.
The use of GAI tools by graduate students must be conducted with complete transparency while upholding the fundamental values of academic integrity: honesty, trust, fairness, respect, responsibility, and courage (Policy 60: Academic Integrity). Full transparency in the use of GAI tools in graduate student research involves responsibilities shared among graduate students, supervisors, committee members, and others who access or review graduate student research and scholarship. These shared responsibilities include:
- open and explicit discussion of GAI use;
- explicit, unambiguous approval and advance agreement on the specific applications of GAI tools;
- clear, detailed descriptions of GAI tool use integrated into graduate student research, creative activities, theses, and/or other scholarly writing or activities;
- appropriate acknowledgement and citation of GAI tools used in research, creative processes, or writing; and
- alignment with disciplinary or program norms.
Furthermore, programs or disciplines may have specific local norms or guidelines for using GAI tools in any aspect of scholarship, research, or creative activities, and graduate students and supervisors should be aware of any relevant discipline- or program-specific additional guidance or requirements.
Using and describing the use of GAI tools in published work
If a graduate program permits the use of GAI tools in research, it should specify how students should describe and reference their use. Considerations include whether students should provide excerpts of prompts and responses, or the full text of their interactions with the GAI tool, in an appendix.
While many citation style guidelines recommend treating GAI tools as personal communication, some are beginning to include specific guidance on how to cite them. For example, see the American Psychological Association guidance and the MLA Guidance on Citing Generative AI.
Most major journals and scholarly publishers now have policies regarding the use of GAI in publication (see below for a selection). These policies vary widely, and researchers must ensure they adhere to the specific policies of the preprint server, journal, or publisher to which they are submitting. For example, some publishers allow GAI in the research process, with appropriate descriptions, references, and supplementary material showing the interaction with the AI tool, but do not permit AI-generated text in the submitted manuscript. Others allow the inclusion of AI-generated text but not AI-generated images.
The emerging consensus is that GAI tools do not meet the criteria for authorship of scholarly works because they cannot take responsibility or be held accountable for the submitted work. These issues are discussed in more detail in the statements on authorship from the Committee on Publication Ethics and the World Association of Medical Editors, for example.
Graduate programs, supervisors, and students must be familiar with and adhere to the field-specific requirements regarding authorship and the use of AI in works submitted to preprint servers or for publication.
Lastly, the principles governing the use of GAI tools for text production or editing also extend to creating or modifying figures, images, graphs, sound files, videos, or other audio-visual content. It is worth noting that specific publication policies, such as Nature's editorial policy on AI authorship and use of AI-generated images, may impose stricter criteria or prohibit the use of AI-generated content in certain contexts.
Current TMU policies and guidelines related to the use of GAI tools
TMU has published Principles and Guidelines on Generative Artificial Intelligence in Learning and Teaching and offers Generative AI (GenAI) Resources. With the implementation of Google Gemini, the university also provides a guidance document on the use of TMU’s licensed GAI tool, Google Gemini. While TMU’s Gemini is considered a safer alternative to other GAI tools because its Terms of Service state that activities and data from TMU users will not be used to train Gemini AI models, the principles of ethical, responsible, and transparent use remain relevant.
Graduate students who include AI-generated content in their own academic writing risk including plagiarized material or someone else’s intellectual property. Since students are responsible for the content of their academic work and their scholarly, research, and creative activities, including unapproved or unauthorized AI-generated content may violate TMU policies such as Senate Policy 60: Academic Integrity, Senate Policy 118: SRC Integrity, or other University policies. More information on the unauthorized or unapproved use of GAI tools and potential links to academic misconduct is available in the Academic Integrity Office's Artificial Intelligence FAQs.
Use of GAI detection tools
Many tools are emerging to detect the use of generative AI. However, attempting to control AI-generated content through surveillance or detection technology is not recommended: AI models will continue to learn and, if asked, will help users avoid the very features that detection tools rely on to identify AI-generated content.
TMU currently does not endorse any tools for detecting AI use. While tools and research are available to identify AI-generated text, they are not yet reliable and may raise privacy concerns.
While there are opportunities for using authorized GAI tools in graduate studies, a critical awareness of their limitations and potential biases is essential to ensuring the integrity of academic research. Graduate students and faculty members should be aware of potential privacy issues, inaccuracies, and biases, and concerns about the novelty of work that uses this technology.
Concerns about privacy and confidentiality
Users of GAI tools should be aware of potential privacy issues, inaccuracies, and biases. These tools may also fail to produce the original or novel content needed for research. Privacy concerns have been raised about the data processing used to train new GAI tools and the (mis)information they provide about individuals or groups.
For graduate student researchers working with certain types of data, using third-party GAI tools to process it may entail additional privacy and security risks. For example, students working with data from human research participants must not submit any personal or identifying participant information, nor any information that could be used to re-identify an individual or group of participants, to third-party GAI tools, as these data may then become available to others, constituting a major breach of research participant privacy. Similarly, students working with other types of confidential information, such as information disclosed as part of an industry partnership, must not submit these data to third-party GAI tools, as this could breach non-disclosure terms in an agreement.
Students wishing to use GAI tools to process such data must have documented appropriate permissions, such as explicit approval from a Research Ethics Board or an industry partner.
Concerns about bias, accuracy and novelty
GAI tools may produce content that is incorrect or biased. In addition, GAI tools can reproduce existing biases in the content they are trained on, include outdated information, and present false statements as fact. Students remain responsible for the content of their SRC output, regardless of the sources used.
Because GAI tools are predictive, they may not generate the type of novel content expected of graduate students in specific programs, nor synthesize existing knowledge in a way that demonstrates the need for the original contribution underlying a graduate student's thesis.
Cultivating disciplinary scholarly writing practices is a fundamental aspect of graduate education. While GAI tools may assist with writing tasks, over-reliance on them to ease the burden of writing may, in the long run, undermine the development of essential writing skills that require extensive practice, potentially impacting the academic growth of graduate students.
Potential copyright and intellectual property infringement
Researchers, including graduate students, must exercise caution when using GAI tools, as some uses may infringe on copyright or other intellectual property rights. Similarly, providing data to an AI tool may complicate future attempts to enforce intellectual property protections. GAI tools may also produce content that plagiarizes others’ work or fails to cite sources and provide appropriate attribution.
It is important to note that regulations and data laws differ across jurisdictions, and that liability and ownership of input or generated output data may not always be clear as legal systems respond to the changing landscape of generative AI use.
Additional TMU resources can be used to learn more about GAI tools, including the Principles and Guidelines on Generative Artificial Intelligence in Learning and Teaching and the Generative AI (GenAI) Resources, both developed by the Centre for Excellence in Learning and Teaching (CELT) at TMU.
In addition to TMU resources, many external resources and guidance documents are available on the use of GAI tools in postsecondary education and graduate studies in Canada. Other helpful examples include the University of Massachusetts Resource Guide on Artificial Intelligence, the University of Washington guides to AI use in healthcare studies and across other disciplines, and the McGill University Library Guide to Artificial Intelligence, which includes guidance on using AI tools in research, developing AI literacy, and citing AI.
Another essential resource is the HESA Observatory on AI Policies in Canadian Post-Secondary Education, which tracks policies and guidelines on the use of GAI at Canadian and international universities.
Publisher Resources on AI Use
- ACM: Policy on Authorship
- Elsevier: Generative AI Policies for Journals
- Sage: Artificial Intelligence Policy
- Springer Nature: Artificial Intelligence
- Taylor & Francis: AI Policy
- Wiley: Best Practice Guidelines on Research Integrity and Publishing Ethics
- IEEE Submission Policies
This guidance was prepared and revised using several resources, with a focus on the Guidance on the Appropriate Use of Generative Artificial Intelligence in Graduate Theses produced by the School of Graduate Studies at the University of Toronto, the Generative AI FAQ published by the University of Waterloo, and the University of Alberta's Generative AI and Graduate Education, Supervision and Mentoring.
Revised 09.02.2026