Ethical Use of AI at CSS
Created Fall 2025
This statement articulates The College of St. Scholastica’s values and philosophies around the use of artificial intelligence (AI).
This statement is not intended to dictate whether AI is used in a particular situation; rather, it is meant to inform all members of our campus community—students, staff, and faculty—about our shared understanding of an ethical approach to AI.
An ethical approach to AI in teaching, learning, and research at The College of St. Scholastica is:
- Respectful of academic and intellectual freedom: We honor the freedom to inquire, teach, and publish with or without AI. Individuals maintain autonomy over, and responsibility for, their use of AI in teaching, learning, research, and institutional work.
- Human-centered and meaningful: AI tools are used only to facilitate our work, not to displace the learner, colleague, teacher, or researcher, and we remain mindful of the possible cognitive costs for users themselves. AI may be used as an assistant to meet clearly stated outcomes, and human users remain accountable for judgment, context, critical thinking, and meaning-making.
- Transparent: AI use should be disclosed and/or cited when appropriate, with enough detail to understand what the tool did and what the human decided.
- Responsive to environmental and social impacts: We encourage the prioritization of smaller, efficient models or non-AI methods that achieve the same outcome with less resource use. AI should be used judiciously, reflecting our commitment to environmental stewardship. We discuss, disclose, share, or otherwise acknowledge these impacts wherever appropriate.
- Aware of and prepared to mitigate errors, bias, and inconsistency: AI outputs should not automatically be read as truths, because AI predicts likely next words from imperfect, biased training data. We discuss, disclose, share, or otherwise acknowledge, and try to mitigate, these biases and errors wherever appropriate, and we are vigilant of the biases we may bring to our prompts.
- Committed to academic integrity: When AI use is permitted, attribution, verifiable sources, and outcome-appropriate assessment are required.
- Secure and attentive to data privacy: We default to vetted tools when sensitive data is involved and never input confidential, personally identifying, or third-party proprietary material into public AI systems. We are mindful of all content we share with AI tools and always remain aware of how each tool might use those inputs.
- Equitable: All students, faculty, and staff have access to institutionally vetted or no-cost AI tools.
- Accessible: We strive to adopt accessible AI tools whenever possible, and AI may also serve as an approved accessibility or learning support tool for some members of the community.
- Evolving and committed to continuously enhancing our AI literacy: As a community committed to a love of learning, we remain willing to learn, question, and adapt as AI tools and their uses change over time, given our different needs and roles on campus.
In our teaching, we value professional judgment that allows faculty to make informed decisions about when, where, and how to use AI in their courses.
In our learning, we value rigorous, human-led learning where AI can be an assistant to, not a replacement for, critical thinking; users are transparent about its use; and privacy and equity are protected.
In our research, we value human-led inquiry where AI may assist, methods are transparent, data are protected, and every claim is verified.
In all of our work at CSS, we value thoughtful, human-led decision-making.
These values and philosophies were developed by the Academic Working Group of the larger AI Governance Advisory Group in the Fall of 2025. This statement was written without the assistance of AI. We acknowledge that, given the ever-evolving AI landscape, this statement will likely need to be revised and edited over time to ensure that it always reflects our institutional priorities.