Guidelines for Using Generative AI Tools at Trinity University
Sent to the Trinity Community by the Vice President for Finance and Administration, the Provost and Vice President for Academic Affairs, and the Chief Information Officer in August 2023:
The University supports responsible experimentation with generative AI tools, such as ChatGPT and Google Bard. However, there are important considerations to keep in mind when using these tools, including information security, data privacy, compliance, copyright, and academic integrity. Generative AI is a rapidly evolving technology, and Trinity will continue to monitor developments and update these guidelines as needed.
- Protect confidential data: Do not enter personally identifiable information, financial data, or non-public research data into public AI tools. See Trinity’s FERPA, Information Security Policy, and Policy on Ethical Use of Data.
- You are responsible for your output: AI-generated content may be inaccurate or misleading, or it may infringe on copyright. Always review and fact-check AI-generated content before sharing or publishing it.
- Respect academic integrity: Faculty set policies on AI use in courses. Students should ask instructors directly if they’re unsure what’s permitted.
- Be alert for AI-enabled phishing: Generative AI makes scams harder to detect. Follow IT security best practices and report suspicious emails to ITSupport@trinity.edu.
- Consult ITS before procuring AI tools: Ensure vendor tools meet Trinity’s privacy, security, and risk management standards.
  - Read vendor terms and conditions carefully; ITS can assist in reviewing them.
  - All vendor AI tools must be assessed for risk through Trinity’s Risk Management Office.
These guidelines are not new University policy; they underscore and support existing policies.