Principles for the responsible use of Generative Artificial Intelligence in academic practice
Generative Artificial Intelligence (generative AI) systems are powerful digital tools that are rapidly changing the landscape of research, teaching, and learning as they become increasingly embedded in all aspects of work. These systems have the potential to revolutionize many areas of higher education, but they also carry risks and ethical considerations that must be carefully weighed against the benefits of use. Choosing if and how to use these tools requires careful, critical consideration that will vary by use case and discipline.
These guidelines generally seek to identify acceptable uses of generative AI, rather than specifying what is prohibited. The following general principles apply to members of the University community as they consider the use of generative AI technologies in their work:
Users must take responsibility for verifying the accuracy of the inputs to and outputs from generative AI tools, for any decisions informed by the generated content, and for the appropriateness of the purpose for which the tool is used.
Consider: Is it necessary or advantageous to use generative AI for this task? Is use of generative AI acceptable for this task? What are the risks and potential benefits, and how do they balance out?
Users should actively engage in learning about the strengths and limitations of generative AI, and commit to continuous learning about the field, which is rapidly developing and changing. Users should seek to develop literacy in these tools, including understanding responsible, ethical, and effective use of the tools.
Consider: Do you know enough about the topic to identify errors and inaccuracies in responses? Do you know enough about the AI tool you’re intending to use to explain how you used it to a colleague or student?
The use of AI tools and the content they generate should be informed by ethical practice, applied both to the choice of tool and to the application for which it is used. The integrity of the academic mission of the University depends on the ethical use of technology. Users should preferentially seek out and use AI tools whose developers demonstrate a commitment to harmless, ethical, and safe applications, to transparency, and to advancing positive outcomes for humanity and the planet.
Consider: Do you know enough about the company and their practices for training their AI models to make an informed decision about whether they are acting ethically? What is their score on the Foundation Model Transparency Index?
Users should be aware of, and take steps to minimize, the impact of biases present in generative AI tools and their outputs, and the potential for these tools to amplify existing biases. Additionally, equity of access to these technologies should be considered, along with how the tool uses its users' data.
Consider: Does the AI tool you plan to use have known examples of bias? Does the company actively try to reduce or minimize harmful biases in their product before releasing it? Have you tried asking questions related to stereotypes to see what responses you get? What does the end-user licence agreement say about how the company can use your data? Do you get better results with a paid subscription than with a free version, meaning that those who can afford to pay have an advantage over those who cannot? How might use of this tool exclude or discriminate against groups of people?
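One practical way to explore the bias questions above is to send the same prompt with demographic details swapped and compare the responses. The Python sketch below illustrates the pattern only; query_model is a hypothetical placeholder for whatever interface your chosen tool actually provides, and a handful of probes is no substitute for a systematic bias evaluation.

# Informal bias probe: issue paired prompts that differ only in a
# demographic cue and compare the outputs by eye.
# NOTE: query_model is a hypothetical stand-in, not a real library call.

TEMPLATE = "Write a one-sentence reference letter for a {role} named {name}."

PAIRS = [
    {"role": "software engineer", "name": "Emily"},
    {"role": "software engineer", "name": "Darnell"},
    {"role": "nurse", "name": "Emily"},
    {"role": "nurse", "name": "Darnell"},
]

def probe(query_model):
    """Print each prompt and its response side by side for review."""
    for fields in PAIRS:
        prompt = TEMPLATE.format(**fields)
        print("PROMPT:  ", prompt)
        print("RESPONSE:", query_model(prompt))
        print("-" * 60)

Differences that track only with the swapped cue (tone, competence language, caveats) are a signal worth investigating before adopting the tool.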
Where used in the pursuit of our academic mission, generative AI tools and their outputs must comply with the Accessibility for Ontarians with Disabilities Act (AODA).
Consider: Does the company have an accessibility statement or review available? Does it work in an equivalent way on mobile devices? Have you checked the user interface with a free web accessibility checker such as WAVE?
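Where the tool has a public web interface, part of this check can be automated. WebAIM's WAVE service, for example, offers an API in addition to the free checker. The Python sketch below assumes you have a WAVE API key; the response fields shown are illustrative, so consult the current WAVE API documentation (https://wave.webaim.org/api/) before relying on them.

# Minimal sketch: fetch a WAVE accessibility report for a page and
# summarize the per-category issue counts. Field names are assumptions
# based on the WAVE API's published JSON format; verify against the
# current documentation before use.
import requests

def wave_summary(page_url: str, api_key: str) -> dict:
    resp = requests.get(
        "https://wave.webaim.org/api/request",
        params={"key": api_key, "url": page_url, "reporttype": 1},
        timeout=30,
    )
    resp.raise_for_status()
    report = resp.json()
    # Categories typically include "error", "contrast", and "alert".
    return {name: category.get("count", 0)
            for name, category in report.get("categories", {}).items()}

An automated scan catches only a subset of accessibility barriers; manual testing with assistive technologies remains necessary.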
Wherever generative AI is used, users must clearly acknowledge the extent and type of use. Wherever synthetically generated content is presented, it must be apparent that generative AI was used in the process of developing that content. Additionally, users should preferentially choose generative AI tools that demonstrate a commitment to transparency in their own practices, including transparency in the development and training of the foundation models that underlie them.
Consider: Would you be comfortable describing to colleagues how and why you used AI to support this work? Have you reviewed the relevant AI-related policies for the academic work you're doing (e.g. the journal/publisher policies, the granting agency policy, your syllabus, etc.)? Where AI is considered acceptable to use, do you know how to appropriately acknowledge its use? Does the company that created the AI tool have a commitment to transparency? What is their score on the Foundation Model Transparency Index?
The use of generative AI has the potential to significantly affect environmental sustainability and the University's carbon footprint, as a result of the energy, water, and other resources needed to train and run these tools. As with any use of finite resources, users should educate themselves about the sustainability impact of these technologies and should use the services judiciously. Wherever possible, users should preferentially select tools and models that have demonstrated a commitment to sustainability.
Consider: As each interaction with an AI tool uses energy, water, and mineral resources, have you considered whether you are using the AI tool in the most efficient way, with the fewest prompts, and at the point in your work where it will be most helpful? Does the company that created the AI tool publicly share its energy usage for model training? Does the company share how it is attempting to minimize its environmental impact? Do you understand the potential environmental impact of using this technology?
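At the level of individual practice, one simple efficiency measure is to avoid requesting the same output twice. The Python sketch below shows a minimal caching pattern for scripted or batch workflows; query_model is a hypothetical placeholder for your tool's actual interface.

# Minimal caching sketch: each distinct prompt is sent to the model at
# most once, and repeated questions are answered from the local cache.
from functools import lru_cache

def query_model(prompt: str) -> str:
    # Hypothetical placeholder: replace with your tool's real API call.
    raise NotImplementedError("connect this to your generative AI tool")

@lru_cache(maxsize=256)
def cached_query(prompt: str) -> str:
    return query_model(prompt)

Caching matters most in automated pipelines, where identical prompts can otherwise be re-issued many times without anyone noticing the redundant compute.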
Use of generative AI must comply with all applicable laws and regulations (e.g. FIPPA, the Ontario Human Rights Code, the Copyright Act, the federal Policy on Sensitive Technology Research and Affiliations of Concern, etc.). Users should make reasonable efforts to ensure that the tools they adopt do not breach any laws, and should consider the liability that they or the University may incur as a result.
Consider: What do the company’s Privacy Policy and End User Agreement say (have you read them, or sought help to interpret them)? How is the company able to use your data? Is it possible that use of the tool would impact the rights of others?
Employees and students should not upload or share confidential, personal, private, health, or proprietary information with a generative AI tool unless a privacy and data security risk assessment has been completed for the specific tool. Do not upload work or data that is not your own to a generative AI tool.
Consider: Is the information you are sharing with the AI tool yours to share (e.g. do you own the rights to the information)? Is there any information in the content you intend to share that could be considered private, personal, or confidential?
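A simple automated check can complement this judgement, though it cannot replace it. The Python sketch below flags a few obvious categories of personal identifier before text is shared with an AI tool; the patterns are illustrative and incomplete, and an empty result does not mean the content is safe to share.

# Pre-flight scan for obvious personal identifiers in text destined for
# a generative AI tool. Pattern matching like this is illustrative only
# and is not a substitute for a privacy and data security risk assessment.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "sin":   re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def flag_pii(text: str) -> dict:
    """Return suspected identifiers found in the text, grouped by type."""
    return {kind: pattern.findall(text)
            for kind, pattern in PII_PATTERNS.items()
            if pattern.search(text)}

# Example:
# flag_pii("Contact Jane at jane.doe@example.ca or 416-555-0199")
# -> {'email': ['jane.doe@example.ca'], 'phone': ['416-555-0199']}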