Context

Artificial Intelligence (AI) tools (such as ChatGPT) are widely available and can simulate human-generated text, video, audio, and images in a variety of contexts and with increasing sophistication. These tools are powerful and have the potential to enhance or degrade a learning experience depending on pedagogical context. Ideally, learning experiences should be designed so that AI supplements, rather than substitutes for, student learning. Instructors are empowered to assess learning in a way that makes sense for their class – including considerations of whether and how AI can or should be used in the course. As this can be a complex task, please consider the following recommendations.

Recommendations

Understand the Tools

Instructors should familiarize themselves with common generative AI (genAI) tools and their capabilities (e.g.: ChatGPT, Microsoft Copilot, Claude, or Google Gemini) so that they can develop an informed approach to AI within their course. WashU has developed a FERPA- and HIPAA-compliant version of ChatGPT, making it an ideal tool for experimentation. You can access the tool here: WashU ChatGPT. Understanding how WashU ChatGPT responds to your assignment prompts can position you to design more effective tasks in the era of artificial intelligence.

Consider Course Outcomes

Instructors should re-evaluate course learning outcomes and associated assessments in the context of the current availability and rapidly increasing sophistication of AI tools. The Center for Teaching & Learning has resources to support instructors in all areas of teaching, including in the context of generative AI.

  • Where use of AI tools would undermine or shortcut the learning goals of coursework, instructors could consider modifying assessments to minimize the effectiveness of AI in constructing responses to assignment prompts or questions.
  • If AI tools can enhance student learning and/or prepare students for appropriate use of these tools in their future careers, instructors could consider incorporating them into coursework.

Develop and Communicate Clear Guidelines

Instructors should clearly outline and communicate course AI guidelines to students within the course syllabus and reiterate these often, such as within individual assignment guidelines and during class discussions.
 
At minimum, course AI use guidelines should include whether the use of AI tools for class work is permitted or not permitted. The following statements can be adapted to your needs:

Generative AI Use is Allowed with Citation
In this course you are free to use generative artificial intelligence (GenAI) tools, but keep in mind that instructors expect to be evaluating your own independent thinking. Remember that AI-produced content is essentially crowd-sourced information and may or may not be accurate or appropriate; it is your responsibility to assess its credibility and accuracy. Ultimately, you are the final author of any products of this class and are thus accountable for their accuracy, credibility, and rhetorical use. In addition, it is ethically necessary to provide attribution for original thought by citing sources. Thus, whenever you use an AI tool to assist you, you should cite it, including the prompts used. Please follow the appropriate citation guidelines for citing GenAI tools in MLA, APA, Chicago, and Turabian Style.
 
Generative AI Use is Allowed with Limitations
Instructors expect to be evaluating your own independent thinking. In this course, generative artificial intelligence (GenAI) may be used with some limitations. Remember that AI-produced content is essentially crowd-sourced information and may or may not be accurate or appropriate; it is your responsibility to assess its credibility and accuracy. Ultimately, you are the final author of any products of this class and are thus accountable for their accuracy, credibility, and rhetorical use. In addition, it is ethically necessary to provide attribution for original thought by citing sources. Thus, whenever you use an AI tool to assist you, you should cite it, including the prompts used. Please follow the appropriate citation guidelines for citing GenAI tools in MLA, APA, Chicago, and Turabian Style.

Note to instructor: if this option is chosen, it will be necessary to specify how AI can or cannot be used within the course or on individual assignments. Please see Additional Considerations for guidance.

Generative AI Use is Restricted Completely
It is vital to learn the fundamental concepts and develop the critical skills of our field without using outside aids. Part of that learning process involves thinking through and processing challenging material without any help from generative artificial intelligence (GenAI). Work produced by GenAI typically reuses information and language from external sources without citation, and thus the submission of any work created in whole or in part by GenAI tools cannot be considered original in our course setting. Keep in mind that coursework produced with GenAI could violate ethical standards as well. I therefore ask that you do not make use of any GenAI to help complete any work in this course. The use of GenAI in this course will be considered an academic integrity violation and will be referred to the university academic integrity process. Violations could potentially result in grade penalties, such as a zero on the assignment, and/or other university-level sanctions.
 
Additional Considerations
In addition to the baseline verbiage above, offering further guidance can benefit students by increasing transparency and clearly communicating classroom expectations. This extra support might include:

  • The appropriate contexts and/or examples for AI use within the course (e.g.: creating outlines, summarizing concepts, responding to discussion posts, revising titles, generating code, or individual tutoring).
  • Expectations for use of grammar checkers, translation tools, and paraphrasing tools (such as Grammarly, QuillBot, etc.).
  • Additional information regarding how the use of AI tools should be cited or described within student work.
  • Whether AI detection tools will be used to identify work that may be in violation of course policies. Please note that use of AI detection tools is not recommended due to their unreliability (see additional information under “Artificial Intelligence & Academic Integrity”).
  • Whether and how the instructor might request evidence of student workflow if inappropriate use of AI is suspected (such as a request for process notes or earlier versions of final work).
  • The way in which inappropriate use of AI will be approached within the course (e.g.: “first violation results in a 0, subsequent violation results in a 0 + report to the Academic Integrity Officer”).

The Center for Teaching and Learning has many resources that can guide your consideration of generative AI and teaching, including recommended language for a variety of course AI policies.

Artificial Intelligence and Academic Integrity

If you suspect a student of using artificial intelligence without authorization in your course, you are welcome to connect with an Academic Integrity Coordinator for a consultation. Please reach out via email at academicintegrity@wustl.edu.

Considerations

Instructors should not base accusations of academic misconduct involving AI solely upon the results of any AI detection tool. Tools that claim to discern AI-generated text or images (such as Turnitin’s AI detector) are unreliable, as they are subject to:

  • false positives, particularly if human-written text is first corrected using digital grammar checkers or contains often-flagged components. 
  • false negatives, such as when AI-written text is altered or if AI tools are told to write text that does not appear to be written by AI tools.
  • bias, often leading to disproportionate flagging of writing by non-native English speakers.

Additionally, AI detection tools cannot explain the algorithmic logic behind a determination that a sample of text was written by AI. This is in stark contrast to plagiarism detection tools (such as Turnitin), which can match the text of a student assignment to sources available on the internet.

It is important to note that many technologies now automatically incorporate AI into their workflows (e.g.: Microsoft Office’s integration of Copilot and Google’s integration of Gemini). Thus, students may not be aware of their utilization of AI.

Falsely accusing a student of an academic integrity violation can be damaging to the student’s mental health as well as to the relationship between an instructor and the students of the class. The necessity of upholding the integrity of our degrees and the learning process must therefore be balanced against the risk of a false accusation.

Gathering Evidence

Due to the current deficiencies in AI detection technology, if an AI-related violation of academic integrity is suspected, additional lines of evidence should be collected. Some supporting evidence could include (but is not limited to):

  • Strong similarity of the student’s submitted work to the output of AI tools when the assignment prompt is entered by the instructor.
  • Inclusion of fake/non-existent references, quotes, or other details in submitted work.
  • Terms referenced that have no connection to the subject/object of analysis at hand.
  • Topic sentences that are reused in several paragraphs.
  • Sentences whose logic is circuitous or never makes a definitive point.
  • Awkward repetition of specific phrases or words throughout the text.

Note, however, that many of these features are not unique to AI-generated text and are becoming less common in AI-generated text as the technology improves. More definitive evidence could include (but is not limited to):

  • Comparison of student work throughout the course; if the work drastically deviates, this may be a concern.
  • Results of a discussion with the student in which they were unable to reiterate the theme or argument of the work or identify the source of information contained in the work.

Questions to Ask the Student

Transparency is key. If you are willing and able to discuss your concerns with the student, consider asking the following questions:

  • When did you start working on this assignment? On approximately which days/dates did you spend time working on this assignment?
  • I would like to review your work leading up to the submitted assignment. Would you be able to share your notes, outlines, drafts, and/or version history with me?
  • What resources did you access? Which ones did you use? Which ones did you decide not to use? Why?
  • Did you access any writing or translation tools?
  • Did you encounter any problems or obstacles in working on this assignment? What were they?
  • What part of the assignment did you find most interesting?  Did you learn anything new from completing this assignment (and what was it)?
  • Can you explain this concept (or this specific part of the assignment) in your own words?
  • If you had to write 5 more pages, where do you think you would’ve taken your paper? (Or, if you had to complete the STEM assignment in a different way, how would you have approached it?)
  • When you reflect on your work, is there anything you would have done differently?

If the student’s responses are not satisfactory, it is possible that they incorporated AI output for part or all of the assignment.

If you have any questions about academic integrity concerns, including those pertaining to AI, please reach out to an Academic Integrity Coordinator for a consultation at academicintegrity@wustl.edu.

Additional Resources

The Center for Teaching and Learning (CTL) is maintaining and updating resources to support teaching and learning in the context of AI, including several workshops, which can be found on the Events & Workshops page. Additionally, you may consider reaching out directly to the CTL’s Assistant Director for Teaching Innovation for a one-on-one consultation regarding approaches to teaching in the age of AI.