Guidelines for the Use of AI Tools at DePauw University

Artificial Intelligence (AI) refers to technology that uses machine learning and other computational methods to perform tasks that typically require human intelligence. This includes AI tools that create new content (text, images, music, videos, code) based on user prompts, as well as AI systems that analyze, organize, or assist with various tasks such as calendar scheduling, data analysis, research assistance, and decision support.

DePauw University supports responsible experimentation with AI tools while recognizing important considerations around information security, data privacy, compliance, copyright, and academic integrity. These guidelines leverage and extend existing University policies to address the unique aspects of AI use across campus. Users are encouraged to stay informed and exercise responsible judgment when utilizing these tools.

These guidelines are updated periodically to reflect evolving technology and best practices.

Alignment with Existing University Policies

The use of AI tools at DePauw University must adhere to the University's Electronic Communications and Acceptable Use Policy and other relevant University policies.

The use of these tools should also respect any third-party contracts or agreements.

Data Classification and Confidentiality

Institutional data usage is subject to the University’s Data Classification policy and any other applicable policies, regulations, or existing contractual agreements.

In line with DePauw’s Data Classification policy, any data classified as Private or Restricted should not be input into AI tools. This includes, but is not limited to:

  • Student records (FERPA) and personally identifiable information (PII)
  • Employee personal information (PII) and HR records, including faculty review files
  • Financial data and payment information
  • Research data subject to confidentiality agreements or IRB restrictions
  • Medical or health information (HIPAA)
  • Legal documents, contracts, and privileged communications
  • Donor information and advancement records

Many AI tools operate within systems that already have access to University data (such as calendars, email, or learning management systems). When using AI features in these platforms, verify what data the AI component can access and how it processes that information, and ensure the AI functionality complies with existing data use agreements.

Content Responsibility

Users are responsible for any AI-generated content they produce or publish. AI-generated content can be inaccurate or may contain copyrighted material, so it is essential to review and verify such content before publication or use.

AI systems can reflect biases from their training data. Consider whether AI recommendations could unfairly impact certain groups and seek diverse perspectives for important decisions.

Clearly identify when content has been AI-generated or when AI significantly assisted your work.

Academic Integrity

Adherence to policies on academic integrity is mandatory. Faculty and students should refer to the Academic Handbook for guidelines on using AI tools in academic work and research.

Faculty should clearly communicate their policies on the use of AI tools, and students should seek clarification as needed.

Awareness of AI-Enabled Security Risks

Be aware that AI tools create new security challenges, including the following:

  • AI-powered phishing and social engineering attacks that are highly convincing
  • Risk of accidentally sharing sensitive information through AI interactions
  • AI-generated deepfakes and impersonation attempts
  • Malicious AI tools designed to harvest data

Follow security best practices, verify unusual requests even if they seem legitimate, and report suspicious activities to Information Services.

Procurement and Risk Assessment

Consult with Information Services if you have questions about a new AI tool or are considering procuring one. Information Services aims to help you accomplish your goals while keeping DePauw's data safe.

Be aware that many existing tools and platforms are integrating AI functionality as new features. What appears to be a simple search or help tool within a product may actually be an AI component, and all of the above considerations apply.

If You Have Questions

If you are unsure how to engage with AI tools or other emerging technologies, reach out to Information Services, FITS, or the Library for assistance.

Related Policies and Resources

DePauw University Policies and Guidelines

Other References and Resources

These guidelines regarding generative artificial intelligence are informed by and based upon Harvard University's Guidelines for Using ChatGPT and Other Generative AI Tools, Hamilton College's Generative AI Guidelines, and Williams College's Guidance for Working with Generative AI Tools. Portions of this document were produced with the assistance of Claude.AI. These guidelines have been customized to align with DePauw University's specific policies and standards.

Last update: September 10, 2025