Generative AI for graduate students

Quick guide to the use of AI in graduate studies

Graduate student AI use decision tree (Note: An accessible version of this document is currently being developed)

Generative AI (genAI) is rapidly being adopted and recognised as a powerful tool in scholarship, research, and creative activity, but it is critical that its use is appropriate and aligned with our institutional principles on the responsible use of genAI and with the norms and traditions of your field. Establishing comprehensive policies for the use of genAI in particular settings is challenging given how rapidly the tools, environment, and use cases are changing. When used ethically and appropriately, these technologies offer significant potential to augment the work of graduate students, and they have become commonplace in many workplaces. Inappropriate use may impair learning, critical reasoning, and skill development, and can also be damaging or dangerous: for example, producing inaccurate or biased outputs, or violating intellectual property, privacy rights, and data security requirements.

The University has provided some initial guidance on the use of generative AI in teaching and learning, but the use of these systems in graduate studies has some unique implications that require additional guidance. Considering the potential benefits of responsible and effective use of genAI in research activities, and the expectation from future employers that our graduates will have the skills and knowledge to use genAI tools effectively and ethically, banning such technologies in graduate studies today is impractical, unenforceable, and not recommended.

The University’s Graduate Degree Level Expectations require that graduate students engage in original application and creation of scholarship, research, and creative activities. These activities may be augmented by genAI, but should not be replaced by it. Overreliance on genAI in graduate studies may lead to superficial engagement with the domain of knowledge, reduced capabilities in critical thinking and writing, and poor development of the domain expertise that is the hallmark of graduate work.

Additionally, the outputs of graduate research (theses/dissertations, major papers, creative works) are typically publicly available and are traditionally expected to represent the best work of individual graduate students, upholding the values and standards of academic and research integrity, research ethics, and the protection of legal rights such as privacy and intellectual property. Developing AI literacy in students is emerging as a necessity in higher education at all levels, so that graduates from our programs have a sound basis on which to make responsible decisions about the effective and appropriate use of AI in their field.

Generative AI in graduate course work

Graduate level course work provides students with advanced learning in specialised fields of study. It is expected that graduates of these programs will be able to apply advanced knowledge to complex challenges in their field. Overreliance on genAI can hinder the development of deep knowledge and mastery of some learning outcomes, while for others, it may enhance learning.

Both students and instructors are encouraged to review and discuss how the ‘Key considerations to keep in mind when contemplating using genAI in graduate study’ applies to their own work. Students should also work through the ‘Graduate student AI use decision tree’ to guide their plans for using AI in their research and study. Faculties and programs should ensure that their graduate course syllabi include a statement on the use of genAI, as required in Senate by-law 55, which provides for a spectrum of responsible AI uses.

Students are responsible for any AI-generated content submitted for assessment, and use of genAI must be in accordance with the guidance provided in the course syllabus. Students who are unsure should discuss their plans with their instructor or supervisory committee.

Where genAI is approved for use in graduate courses, students should carefully and accurately describe the use and contribution of genAI tools in their work and consult with their supervisor (or course instructor) about the emerging disciplinary norms for these tools. These may include, but are not limited to:

  • searching and summarizing literature
  • brainstorming or outlining
  • rephrasing or reframing
  • drafting, editing, and checking structure
  • improving readability and accessibility
  • producing audio or visual content
  • analysing and visualising data
  • transcribing interviews
  • coding
  • designing

Many more emerging uses may or may not be considered appropriate as the disciplinary norms continue to develop.

Use of generative AI in research, thesis/dissertation writing, and creative work

Consensus on the appropriate academic uses of genAI is evolving and varies from discipline to discipline, as well as by scholarly or creative task, which is why it is critical to discuss your ideas with your supervisor and committee. In some disciplines, AI is regularly used to support and enhance high quality scholarly activity. In others, applications of AI in academic work may be less clear or considered inappropriate. It is expected that graduate students and their supervisory teams will strive to achieve and uphold the highest standards of academic excellence, research integrity, and ethical conduct aligned with the norms of their disciplines. Supervisors should proactively discuss those norms with their graduate students.

Faculties and programs may wish to develop additional guidance for their graduate students that helps them understand the discipline-specific nuances and appropriate uses of genAI in research, thesis and dissertation writing, and creative work.

Considerations for graduate students completing course, research, or creative work

Use of genAI in graduate research and scholarly writing must be transparent and authorised by your supervisory team. Graduate students should ensure that they have unambiguous agreement in writing from their supervisory committee before using genAI tools that may impact their research and its outputs (see examples in FAQs). Failure to seek or receive this approval may be considered academic misconduct and may result in sanctions under the appropriate by-laws and Student Code of Conduct. Instructors must also ensure that their graduate course syllabi include a statement on the use of genAI, as required in Senate by-law 55 (which provides for a spectrum of responsible AI uses), so that there is no ambiguity and students are aware of any restrictions on AI use specific to the course.

Where genAI is approved for use in graduate research and writing, students should ensure that they carefully and accurately describe the use and contribution of genAI tools in their research and scholarly work, and consult with their supervisory team on the emerging disciplinary norms for these tools. For example, searching and summarizing literature; brainstorming; outlining; rephrasing or reframing; drafting, editing, and checking structure; improving readability and accessibility; producing audio or visual content; analysing and visualising data; transcribing interviews; coding; designing; and many more emerging uses may or may not be considered appropriate at this point in time, as the disciplinary norms continue to develop. It is the student's responsibility to ensure that they meet the expectations of their supervisory team and the University when using genAI. Ultimately, graduate students are responsible for any AI-generated content they choose to include in their thesis, dissertation, creative, or other academic work.

Considerations for graduate student supervisory teams

Members of a graduate student's supervisory team are not expected to be experts in genAI, and familiarity and comfort with these tools vary widely across academia. Supervisors are encouraged to explore the emerging norms of their discipline and discuss those with their graduate students, recognising that this is a shifting landscape. It is expected that you will encounter use cases and issues for which there is no clear answer on the appropriateness of using genAI. Modelling a critical and curious response to uncertainty, and admitting that we don’t yet have all the answers, is critical.

Some key considerations for supervisors include:

  1. Start with and maintain open dialogue with your students about AI. Help them think and talk through ways they are considering using it, when it might be beneficial and when it may not. You don’t need to be an expert in AI, but your deep disciplinary expertise will be invaluable to your students as they try to navigate the AI landscape.
  2. Learn as much as you can about how your discipline is approaching genAI, the emerging norms, remaining gaps and questions, current consensus on acceptable and unacceptable use and so on.
  3. Encourage your students to document any use of AI in their work. This will assist them in explaining and justifying that use (or non-use) later, and will help them identify patterns: where AI is currently strong, where it has weaknesses or biases, and how user input shapes outputs, as with any other technology. A minimal example of such a log appears after this list.
  4. Discuss the risks of AI use as we currently understand them, for example, bias, inaccurate outputs, intellectual property, privacy, environmental concerns, and labour concerns. Try to develop a shared understanding of ethical and responsible use in your student’s area of research.
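
One way to approach item 3 is a simple running log kept alongside the research records. The format below is illustrative only, not an institutional requirement, and the entries are hypothetical; the fields should be adapted to the discipline and the supervisory team's expectations:

  Date: 2025-03-12
  Tool and version: [name and version of the genAI tool used]
  Task: rephrasing a draft abstract for clarity
  Input provided: the student's own draft text; no confidential or personal data
  Output used: two rephrased sentences, adopted after review and editing
  Verification: checked against the original results; discussed with supervisor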

Potential benefits of using AI in graduate studies

Artificial Intelligence systems are becoming ever more deeply embedded in our technologies and workplaces. Many believe that AI literacies should now be considered critical skills for graduates to develop in preparation for the workforce and modern society. AI offers many potential benefits for graduate students and researchers when used responsibly and ethically. The key is to focus on uses of AI that augment human capabilities rather than replace or diminish them. Some possibilities include:

  • Efficiency: AI tools can support common tasks such as literature reviews, data organisation and transformation, transcript generation, data analysis, code writing, and even data collection in some circumstances.
  • Enhancing data analysis: One of the advantages of AI is that it can process vast data sets quickly and may uncover subtle patterns, trends, and relationships that human researchers might miss. AI assistants are increasingly being embedded in common data analysis platforms (both quantitative and qualitative) and can assist with tasks like statistical analysis, recommending analytical approaches, data visualisation, interpretation of data, and a wide range of qualitative analyses.
  • Innovation and discovery: AI tools are accelerating the pace of development and discovery in many disciplines by identifying novel patterns and relationships in data or the extant literature. With the vast amount of data published daily, AI can help capture summaries and emerging trends across domains that individual researchers may struggle to keep up with, providing starting points or leads for human researchers to follow.
  • Disseminating academic work: One thing AI is good at is transforming information for different audiences and use cases. For example, creating a lay version of a scientific paper, a media brief, social media posts, an infographic, poster elements, or a presentation are tasks that researchers often struggle with and where AI can assist. It can also help enhance the clarity of writing, format writing to the requirements of a publication, provide feedback against a rubric or set of criteria, improve the accessibility of files, and more. In all cases, students should ensure they follow the specific expectations of the publisher, conference organisers, and supervisory team.
  • Personalised research support: AI models can be adapted to the specific needs of a researcher, whether for particular research methodologies or data types, or to support accessibility in the research and learning process. AI systems can also act as a critical friend, tutor, brainstorming muse, and assistant, augmenting the support students receive from their research colleagues and supervisory team. As agentic AI becomes more common in the near future, the complexity of the support tasks AI can successfully complete will likely change the way research is conducted in many disciplines.

Risks and concerns of AI in graduate studies

Despite significant improvements and advances since ChatGPT was publicly released in November 2022, there remain a number of potential risks and considerations for graduate students using AI in their work. These include:

  • Data privacy and security: Students should never provide private, confidential, or sensitive information to AI tools without explicit consent. AI tools that have not been vetted by the University have unknown risks, including the risk of data breaches and misuse of information provided to them. Different types of data have different levels of risk when considering using AI tools. Students should discuss their data analysis strategies with their supervisory team to minimise and mitigate any potential risks.
  • Ethical concerns: Powerful technologies, such as AI, have the potential to cause harm when used inappropriately. There are legitimate ethical concerns about the potential impact of AI on society, labour, and human learning for example. There are also concerns about potentially unethical behaviour of AI companies that may be extractive or harmful to marginalised or vulnerable people, and their use of intellectual property to train or improve their models without explicit consent.
  • Accuracy, bias, and fairness: Despite rapid improvements, it is still possible for AI tools to provide inaccurate, biased, or hallucinated outputs based on the data they are trained on, the model choice, or the prompting strategy used. This can lead to unfair or discriminatory outcomes. Being aware of this potential and checking all outputs is an important part of using these tools responsibly. It is also a critical step when using increasingly common research methods such as generating synthetic data.
  • Environmental impact: The training and operation of commercial genAI models require considerable energy, and the rapid expansion of these services means that overuse of AI will increase demand for electricity, as well as for other resources such as critical minerals for components and water for cooling. Using AI responsibly means choosing judiciously when AI will genuinely help your research. For example, using a chatbot in place of a traditional search engine consumes much more energy and may not provide better results.
  • Transparency and explainability: While the reliability of outputs from genAI systems is generally improving, and some systems are moving towards explainable AI (XAI), it should be understood that these systems are not intended to be deterministic: by design they will produce variable results, and the processes they use to arrive at an output are often unclear. This can make it difficult to interpret and trust results, particularly from generalist AI systems. Transparency in the use of AI is also important for maintaining trust: being able to explain how, when, and why AI was used, and how the outcomes were verified, is an increasingly important element of research.
  • Accountability: Accountability for AI use is the responsibility of the user. When AI systems make mistakes or provide inaccurate information, the user is responsible for the outcomes; checking and verifying AI outputs remains an important part of responsible use.
  • Data pollution: Students should be aware that malicious use of AI to create undetectable deepfakes and disinformation in digital sources may impact not only the legitimacy of information sourced online, including academic publishing, but also the behaviour of AI models trained on those sources. Relatedly, the reliability of anonymous online surveys is increasingly being challenged by AI bots that can autonomously complete surveys and contaminate the data.

Acknowledgements

These guidelines were adapted or informed by guidance provided by York University, The University of Toronto, The University of Calgary, Toronto Metropolitan University, The University of Washington, The Ontario Council on Graduate Studies, The University of Western Ontario, The University of Guelph, and the University of Waterloo. The guidelines were also reviewed and improved by members of the Faculty of Graduate Studies' Graduate Council and the APC Subcommittee on Generative AI. 

FAQs

Can I use genAI to help write my thesis or major paper?

Any use of AI in the development of a thesis or major paper should be agreed upon in writing with your supervisor and committee. Theses are intended to be original work contributing to the knowledge of your broader discipline. Students should always take care to ensure that the work they submit is their own and not the work of others, including AI. This guidance applies to all theses and major papers, whether at the graduate or undergraduate level.
AI is embedded in many of the common tools used to write theses and will often automatically offer suggestions for editing and improving text. While use of these common tools is generally acceptable, use of more advanced and specialised tools designed to write long-form academic text with minimal input from the student is generally not. The same applies to other media such as figures, graphs, infographics, video, audio, or code. Ultimately, students are responsible for anything they submit, whether assisted by AI or not. Any editing done with AI should be declared in the written thesis.

Can genAI be used in preparing grant applications?

Granting agencies are increasingly developing guidelines on the appropriate use of AI in the development and evaluation of grants. For example, the Canadian Tri-Council Guidance on the use of Artificial Intelligence in the development and review of research grant proposals recognizes that “generative AI may be a valuable tool to applicants in the preparation of grant applications, including the potential to improve efficiency, assist non-native English and French speakers, and streamline the proposal writing process.” The guidance permits genAI in the development of grant proposals, so long as its use is acknowledged and meets the specific requirements of the grant.

Can genAI be used to review grant applications?

The same Tri-Council guidance strictly prohibits the use of publicly available genAI tools for the review of grant applications, treating such use as a breach of the Conflict of Interest and Confidentiality Agreement for Review Committee Members, External Reviewers and Observers.

Can AI be used to provide feedback on graduate student writing?

Students should be cautious about the intellectual property and data security implications of providing parts of their academic writing to third parties, including AI companies, and supervisors should never do so without the student's consent. Peer review and feedback are usually most effective when there is a level of trust and integrity applied to the process. Use of AI tools to generate feedback may erode that trust, especially when used without transparency. However, students may choose to use AI to provide them with feedback as part of the writing process which, when coupled with human feedback, may help to develop critical writing and analytical skills situated within the discipline.

Will using AI affect my learning and skill development?

Some research shows that overreliance on AI may hinder the development of critical skills and knowledge by effectively offloading cognition to the AI tool. It may also affect the ability to undertake tasks independently. Learning new and advanced skills requires practice and feedback from the disciplinary community, but may also involve AI tools as they become more advanced, specialised, and disciplinary in their focus. There are now numerous AI tools that perform at or above the level of graduate students on many tasks, so harnessing these tools to support learning and potentially enhance student outcomes is another way to consider AI in graduate education. Graduate students may have a reasonable expectation that they will learn how to appropriately use AI in their field as part of their graduate education.

Can I use AI tools for literature reviews and data analysis in my research?

Many common analytics platforms now incorporate AI into their systems, so it is increasingly becoming unavoidable to use AI in some fields and research methodologies. For these fields, learning how to do so responsibly and ethically should be a goal of graduate education. Students should also be cautious at this point and carefully check all outputs from AI tools used in research to ensure their accuracy. Using specialist research and data analysis tools, rather than free or general chatbots, will generally provide more reliable and safer outputs. Data privacy and security considerations are critical when deciding whether to use an AI tool for research purposes.

There are many tools designed to support literature reviews, which may help to speed up the process, find literature that a traditional search may not, or help make novel connections between disparate domains of knowledge. However, these tools are still in their early days and may not have access to the full range of academic materials that a scholar at a university typically has, so their outputs may include biases or information that does not make sense. Norms will vary by discipline, and some may allow or encourage the use of AI in the research process, so it is important for students to ensure they understand what is expected of them. Supervisors and supervisory committees should articulate to their students any tasks or methods for which AI assistance is appropriate, and any for which it is currently not, according to their disciplinary norms.

If I have approval to use genAI in my research, can I also use it in my writing?

Not necessarily. Even supervisors and supervisory committees who embrace the use of generative AI tools in research methods or other areas of their academic life may still wish to restrict their use in other aspects, such as writing or editing papers or theses. There must be clear guidance for graduate students on the degree (if any) to which engagement with genAI in writing is acceptable. If there are specific tools that are authorised or unauthorised, these should be identified. Graduate students must seek and document approval from their supervisors and committees for the use of generative AI in writing, even if they already have approval to use generative AI tools in their research.

Can genAI be credited as an author, or used in writing for publication?

Most journals and publishers have guidelines in place, or in development, for the use of AI in their publications. The emerging consensus is that generative AI tools do not meet the criteria for authorship because they cannot take responsibility or be held accountable for submitted work, but they may be useful in enhancing writing, creating visualisations of data, and so on. These issues are discussed in more detail in the statements on authorship from the Committee on Publication Ethics and the World Association of Medical Editors, for example. When writing for publication, ensure that you understand and follow the guidelines of the publication with respect to the use of genAI.

What should I consider when using AI features embedded in data analysis software?

Many common qualitative and quantitative data analysis tools now embed AI in their systems, so its use may soon be unavoidable in many cases. Students should make an effort to understand how AI is contributing to analytical outputs and be aware of potential risks such as biased or inaccurate results, incorrect application of analytical methods, or incorrect recommendations for analyses. These limitations should be acknowledged in academic work.

Do I need approval before using genAI in my research and scholarly work?

Yes. Graduate students must seek explicit written approval from their supervisor and supervisory committee before using AI in their research and scholarly work. If the student or supervisor is unsure about an application, they should contact their Chair, Associate Dean (Graduate), or the Faculty of Graduate Studies for guidance.

How should I acknowledge the use of genAI in my work?

Acknowledging the use of digital assistance is best practice in any scholarly work. Use of genAI tools should be considered part of the methodology or analytical toolkit and described in the research. Students should be transparent about how and when they use genAI in their research and scholarly activities, and be prepared to discuss and defend the choice to use these tools. For guidance on how to cite AI in scholarly work, see the Leddy Library guide to AI for students.
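
As an illustration only, a disclosure in a methods or acknowledgements section might read as follows; the wording is hypothetical, and the placement and format should follow the citation style and disciplinary norms your committee expects:

  “Portions of this chapter were edited for clarity using [tool name, version, and date of use]. The tool was given only the author's own draft text; all suggestions were reviewed, verified against the underlying sources and data, and accepted or rejected by the author. No confidential or participant data were entered into the tool.”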

What are the intellectual property implications of using genAI?

There are ongoing copyright challenges for genAI companies related to their use of copyrighted materials for training their models. Most commercial AI companies allow user data to be retained and used in developing their products, so students should exercise caution when considering sharing their intellectual property with an AI company. Confidential, private, or commercially sensitive data should not be shared with third-party AI tools that have not undergone a privacy, security, and accessibility review.

Does the University provide AI tools for research?

The University does not currently license any AI tools specifically designed for research purposes. Microsoft Copilot for the Web is included in the University’s Microsoft 365 licence and available to graduate students, but it is a generalist tool and may not be useful for many research activities.

Who is responsible for AI-generated content in research outputs?

Researchers, including students, are responsible for any AI-generated content included in or influencing their research output.

Is using genAI considered plagiarism?

The University does not consider the use of genAI in the creation of content to fit the definition of plagiarism, as plagiarism involves representing the work of another person as your own. However, representing AI-generated content as your own intellectual work may be considered academic or research misconduct.

What is genAI currently good at?

As genAI becomes more powerful and embedded in everyday tools, the range of tasks it can assist with is growing rapidly. At present, genAI is good at interacting with humans in a human-like, conversational manner; summarising large amounts of text; finding subtle patterns in large datasets; creating drafts or outlines of topics; computer coding; co-generating ideas; creating or drafting human-like text in any style required; translating text to other formats or languages; generating images, audio, and video; suggesting novel connections between knowledge domains; and even solving complex math problems. The possibilities are changing rapidly, so there is no definitive list of best practice.

Can genAI be used as an assistive technology?

AI tools can be considered assistive technology in many cases. Students with disabilities should work with their advisor in Student Accessibility Services and their supervisor to ensure that any planned use of genAI for an accommodation is appropriate.