Principles for the responsible use of Generative Artificial Intelligence in administrative work
Generative Artificial Intelligence (AI) is transforming administrative operations by automating tasks, enhancing decision-making, and improving efficiency. However, its integration raises significant ethical, legal, and practical considerations. This guide provides comprehensive guidance on the responsible use of generative AI within our administrative offices, processes, and procedures. It is designed to be a living document, continuously updated to reflect new developments and insights.
Our principles are essential for ensuring that the adoption of AI tools aligns with our commitment to ethical practices, transparency, and equity. By following these guidelines, we can harness the benefits of AI while mitigating its risks.
The following principles apply to all members of the University community as they integrate generative AI into their administrative tasks. These guidelines generally seek to identify acceptable uses of generative AI, rather than specifying what is prohibited.
Users must take responsibility for verifying the accuracy of the inputs they provide to generative AI tools and of the outputs those tools generate, for any decisions informed by the generated content, and for the appropriateness of the purpose for which a tool is used.
Consider: Is it necessary or advantageous to use generative AI for this task? Is the use of generative AI acceptable for this task? What are the risks and potential benefits, and how do they balance out?
Users should actively learn about the strengths and limitations of generative AI and commit to continuous learning in this rapidly developing and changing field. Users should seek to develop literacy in these tools, including an understanding of their responsible, ethical, and effective use.
Consider: Do you know enough about the topic to identify errors and inaccuracies in responses? Have you considered what your quality control testing procedures will be before you rely on the response? Do you know enough about the AI tool you’re intending to use to explain how you used it to a colleague or your manager?
The use of AI tools, and of the content they generate, should be informed by ethical practice in both the choice of tool and the application for which it is used. Users should preferentially seek out and utilise AI tools whose providers have a demonstrated commitment to harmless, ethical, and safe applications, to transparency, and to advancing positive outcomes.
Consider: Do you know enough about the company offering the AI tool and their practices for training their AI models to make an informed decision about whether they are acting ethically? How does the tool handle user data?
Use of generative AI must comply with all applicable laws and regulations (e.g. FIPPA, the Ontario Human Rights Code, international labour law, copyright law, and the federal Policy on Sensitive Technology Research and Affiliations of Concern). Do not upload or share confidential, personal, private, health, or proprietary information with a generative AI tool unless a privacy and data security risk assessment has been completed for the specific tool and sharing has been approved. Wherever possible, anonymize data before uploading, and use University-endorsed generative AI tools.
Consider: Is the information you are sharing with the AI tool yours to share (e.g. do you own the rights to the information)? Is there any information in the content you intend to share that could be considered private, personal, or confidential? If you are not sure, verify.
The use of generative AI can significantly affect environmental sustainability and the University's carbon footprint (scope 3 emissions) as a result of the energy, water, and other resources needed to train and run these tools. Wherever possible, users should preferentially select tools and models whose providers have demonstrated a commitment to sustainability, in line with the UN Sustainable Development Goals.
Consider: Each interaction with an AI tool uses energy, water, and mineral resources. Are you using the tool in the most efficient way, with the fewest prompts, and at the point in your work where it will be most helpful? Does the company that created the AI tool publicly share its energy usage for model training? Does it share how it is attempting to minimise its environmental impact? Do you understand the potential environmental impact of using this technology?
Transparency of use is important for building trust. Where required, it must be made apparent that generative AI was used in the process of developing the content.
Consider: Would you be comfortable describing to colleagues how and why you used AI to support this work? Where AI use is considered acceptable, do you know how to appropriately acknowledge it? Have you spoken to anyone in IT about the risks?
Employees and students should not upload or share confidential, personal, private, health, or proprietary information with a generative AI tool unless a privacy and data security risk assessment has been completed for the specific tool. Do not upload work or data that is not your own to a generative AI tool.
Consider: Do you have the rights to share the information you intend to provide to the AI tool? Could any of that content be considered private, personal, or confidential?