Guidance for Instructors

Last modified: September 6, 2023


This page focuses primarily on the AI writing assistant ChatGPT, but note that AI models are being used in other ways as well, for example, to generate videos, images, and conversations with customers.

The field of AI, and opinions about how academia should respond to it, are evolving rapidly. Learning Innovation is working to stay up to date on this issue in order to provide robust advice for faculty. Watch this page, as well as our events and blog, to learn more. We are developing additional materials to guide you through course design, class activities and assignments in light of AI.

Looking for an AI policy? We have written an expanded guide to writing artificial intelligence policies.

Ready to incorporate AI in your class? Our resource on the design of AI assignments also has examples to get you started.

Quick Links

Impact of AI on Education

Shortcomings of AI

Opportunities AI Might Provide

Speaking to Students About AI

Recommendations for Course Policies

AI Detection Software

Artificial Intelligence Tools

Artificial Intelligence (AI) is any technology that attempts to solve problems and complete tasks that would usually require human intelligence. AI software and hardware have advanced steadily for decades. The most visible AI tool at the moment is ChatGPT, a writing assistant that has been a topic of interest across academia and beyond in recent months. ChatGPT can generate plausible written responses to a wide variety of user prompts and questions. Its power comes from a machine learning model that predicts responses to prompts based on the large amount of data it was trained on, combined with a natural language model that makes those responses read like human writing.1

To try ChatGPT you can visit OpenAI’s website and read the company’s FAQ for further information. Watch our workshop on how to use ChatGPT and Midjourney (image generator).

Some tasks that ChatGPT excels at are summarizing information, writing fiction, generating computer code, translating text from one language to another, generating text that summarizes data, and “talking” in a conversational manner. Examples of prompts might be “Write a three-page paper on the effects of climate change,” “I am looking to buy a camera. Do you have suggestions?” or “Review my writing for errors.” The results can be quite impressive, but it is important to note that ChatGPT is not actually thinking or emoting, and it lacks important human skills such as critical thinking and fact-checking. You should not rely solely on AI to produce factual information.

Currently, Duke does not support any specific AI tools such as ChatGPT, Bard or Midjourney. This means we have not vetted these tools for important concerns like accessibility, data security and privacy. If you plan to incorporate any AI tools into your course, we have tips for conducting due diligence for unsupported tools. We also suggest that you review the terms of service to see how your and your students’ data might be used; some services note that user data will be harvested to train AI.

Impact of AI on Education

AI models will continue to grow in capabilities and be incorporated into word processing and search engines. The challenge for instructors is to discover how to incorporate Artificial Intelligence content generators as a tool in their teaching rather than view them solely as a threat. In the past, other technology tools such as multifunction calculators, spelling and grammar checkers, and statistical analysis software shifted the ways we learn and teach. As in those cases, educators will need to help students differentiate when AI can help with learning versus when it is a shortcut around learning.

It must be acknowledged that faculty bandwidth for addressing the emergence of AI is limited. Learning about AI and how it will impact teaching will take time, especially because the technology is so new. This is likely a multi-year shift in education and as such does not need to be tackled all at once. Here are several steps instructors can take:

  • Try AI generators and understand their capabilities. You may also want to take time to understand what happens on the backend of any AI tool and what data the tool is pulling from.
  • Update course policies to include considerations for using AI content. While you may not choose to integrate AI into your course, it is important not to ignore these technologies. Here are a few questions to ask yourself: First, do the values of the technologies align with your course values? Second, what do you want to communicate about these technologies with your students? Finally, what are the intellectual gains and guidelines if you do allow students to use AI? Review our further considerations for developing AI policies for more.
  • Learn about the ethical and legal questions surrounding the use of AI by exploring current conversations, plus reviewing the work of experts in AI and ethics. What activities or discussions might encourage students to understand their own values in relation to this technology? How might you handle any ethical objections students may have to using AI, if you implement activities in your course?
  • Review assignments to see if quick changes can be made to address AI concerns. Consider adding assignments that educate students about the strengths and limitations of AI and how it relates to your discipline.

Shortcomings of AI

ChatGPT creates text based on predictions, not critical thought. The text it generates is based on a fixed data set that cannot update itself. The design of the AI model leads to important limitations.


Bias

Its output is only as good as its input. AI retains all of the biases of the information it takes in, including the stereotypes and misinformation present in human writing on the internet.


Equity of Access

Depending on the future funding models for AI assistants, there may be a gap between who does and does not have access to them.


Inaccurate Content

AI-generated content may contain factual errors, incomplete quotes and erroneous findings. The old adage may need an update: don’t believe everything you read on the internet, or everything an AI bot generates based on the internet. Furthermore, an AI technology may produce what is considered “good” content at one point, but that does not mean the technology will answer the same prompt consistently or as well in the future.

Intellectual Property

It is not clear who owns AI generated content or the prompts created by users. This ongoing conversation may impact the use of AI now and in the future. Some AI technologies have been shown to plagiarize from other sources when creating “original” content.


Ethical Concerns

Training AI models can have negative impacts on the environment. AI models have been used to replace workers unethically, and there are concerns that unethical labor practices were used to develop and maintain these tools.

Opportunities AI Might Provide

In a recent open discussion, several themes emerged among Duke instructors as opportunities posed by AI:


Saving Time

AI can help draft emails, blog posts, cover letters and article summaries, allowing for time savings in everyday tasks and, potentially, in teaching as well.

Stimulate Thinking

Students could annotate an AI-generated text, use it to search for counterarguments during a group discussion, or brainstorm ideas for a new project.


Improving Writing and Code

Used judiciously, AI can improve writing and debug code, which can benefit students and instructors alike.


Tutoring

Students might benefit from using AI as a tutoring aid. For example, it could help neurodiverse students who struggle to initiate work.

Reimagining Learning

As AI generated text becomes more commonplace, this may shift some of the goals of education. What does learning look like in the age of AI? What AI skills will be needed in careers? What will collaboration between AI and humans look like?

Rethinking Assignments

In the age of AI, instructors may change what they define as acceptable evidence of learning. The design of assignments might shift to center on personalized learning, collaborative work, self-reflection and the real-world application of content. 

Speaking to Students about AI

The subject of cheating and AI should be explicitly addressed in the syllabus, and potentially in class as well. When speaking to students about AI, move the conversation beyond any penalties for using ChatGPT to explain why genuine engagement with texts and ideas is important.

Open Dialogue

Initiating a direct conversation about the use of AI is an opportunity to explain the impact of the course on students’ intellectual development. What will they lose intellectually if they use AI to complete their assignments? Students should understand that learning is difficult and challenging, but that is the point of education.

Original Thought

Instructors can emphasize why original writing (or coding, or creativity) matters and what it means to develop one’s own voice and ideas. Understanding how these skills will help them in careers and further study in your discipline can motivate students to avoid unwanted uses of AI.

Limitations of AI

Students should be made aware of the limitations of AI content to help them understand why passing it off as original content is not advisable. If students are allowed to use AI, they need to understand how AI text must be reviewed on many levels before incorporating it into their own writing.

Recommendations for Course Policies

We suggest that faculty make their expectations regarding the use of AI clear to students at the outset of the course. If AI is banned outright, you must explain how using AI can constitute cheating or plagiarism. If you allow AI, be sure to differentiate between acceptable and unacceptable use and to explain proper citation. For example, you might allow AI for generating early ideas and drafts, and explain to students how to track changes as they edit the original AI text, while expressly forbidding the use of AI to develop other written assignments.

Ultimately, instructors have discretion to set specific AI policies that fit their course and individual assignments; there is no one-size-fits-all policy. Below are some general principles, and we have written an expanded guide to writing artificial intelligence policies.


Plagiarism

Instructors should choose policy language that makes it clear that students should not copy, quote, paraphrase or summarize any source without adequate documentation. As a baseline, faculty can emphasize that, along with uncredited ideas and content created by other people, AI-generated content falls under the definition of plagiarism at Duke. Sample language might be: “All work submitted in this course must be your own. Contributions from anyone or anything else—including AI sources—must be properly quoted and cited every time they are used.”2


Original Work

For any assignments that are meant to be original work, instructors can require that students not use AI. Sample language might be: “If an assignment requires you to use your own critical thinking, solve problems, or practice concepts or skills, do not use ChatGPT.” Duke students can also be reminded of Duke’s community standard and its core values of honesty, fairness, respect, and accountability, which are important not only for academic work but also for personal integrity.


Citing AI

Students should understand how to cite or give credit to AI generators. They can list ChatGPT as a reference work or quote it within their work; MLA, APA, and other style guides have developed specific guidelines. Instructors can also ask students to retain their original conversations and prompts and append them to writing assignments.

Acceptable Use

There may be cases where instructors either allow or encourage students to use AI for tutoring or help with drafts. It is important to give students guidelines for what is and is not acceptable. Sample language might be: “Students are allowed to use AI to help revise this writing assignment; however, when submitting work, students must clearly identify any writing, text, or media generated by AI. This can be done in a variety of ways, e.g., by highlighting the text in a different-colored font or explaining which parts have been AI-generated in a cover letter.”3

AI Detection Software

Learning Innovation does not endorse any software or programs that claim to determine if any work was produced by AI. While instructors can use detection software, it should not be considered a definitive measure of cheating. Instead, we’d like to offer some considerations on how to design assessments that make using AI content ineffective.

False Positives and Negatives

These are early days for detection software, and these tools are by no means foolproof at detecting AI-generated text. In the long term, relying on them will not be an effective strategy, given the pace of advances in AI. The software can also be biased: a recent study showed that non-native English speakers’ writing was flagged as AI-generated more often than native speakers’ writing.

Interventions Confuse Software

Changes such as replacing repeated sentences, asking the AI to reword its content, or copying text from one AI model to another can increase the likelihood of evading detection.

Student Privacy

The use of plagiarism detection can signal to students that they are not trusted, and these tools invade student privacy by collecting their work.

What’s Next?

Interested in exploring the impact of AI on teaching further? Here are a few other topics to consider.


  1.  For an introduction to AI and its programming models, refer to this piece written specifically for educators.
  2.  This policy language, plus many other examples, is being collected as part of a collaborative project.
  3.  This policy wording, plus other student guidelines, comes from Joel Gladd, College of Western Idaho, under Creative Commons licensing.