
Center for Teaching Excellence

  • Generative Artificial Intelligence in the Classroom

Applying Guiding Principles for AI Use in Teaching

The Guiding Principles for AI Use in Teaching provide a framework for responsible and effective use of AI. This page offers practical guidance to help faculty apply those principles, including reflection prompts, strategies, examples, and use cases. These expanded materials are designed to support intentional, discipline-specific decision-making about when and how AI can enhance teaching and learning.

Effective teaching still depends on faculty presence, expertise, and interaction with students. While AI tools can streamline and support certain instructional tasks, faculty should apply their knowledge and judgment to determine whether, when, and how AI can best enrich students' course experiences, critical thinking, and learning.

Reflection Prompt  

Consider how you might use AI to streamline preparation while ensuring your teaching presence, expertise, and course learning outcomes remain central. 

Context 

AI can assist with instructional preparation tasks such as drafting rubrics, generating sample explanations, creating first drafts of instructional materials, or proposing assessments. These uses can free up time and support instructional planning. 

At the same time, faculty judgment remains essential—especially in grading and feedback, where AI may assist with drafts or surface-level patterns but cannot replace personalized, contextual responses that support deeper learning. Thoughtful integration of AI means deciding when its use enhances learning and when direct faculty engagement is most important. 

Strategies 

  • Use AI to save time on preparation tasks (e.g., rubrics, sample explanations) so you can focus on direct student engagement. 
  • Design assignments and activities that include AI in ways that directly support your course learning outcomes. 
  • Use AI as a support tool but rely on your expertise for grading and feedback that shape deeper learning. 

Examples  

  • Example 1: Use AI to draft a rubric for a graded assignment, then revise it to highlight disciplinary skills and course-level expectations before sharing it with students. 
  • Example 2: Generate possible assessment prompts with AI, then refine them to ensure they align with your learning outcomes and instructional goals. 
  • Example 3: Use AI to suggest a basic explanation of a core concept, then adapt it with your expertise to add depth, context, and connections important to your discipline. 

Use Case: Scenario 

A professor teaching a persuasive writing course uses AI to generate an initial draft of a rubric for a major assignment. After reviewing it, she revises the criteria to emphasize rhetorical strategies, audience awareness, and critical thinking aligned with her course learning outcomes. She then shares the revised rubric with students to guide their work. 

Takeaway 

  • AI can support instructional efficiency, but it does not replace faculty expertise. 
  • Faculty are responsible for determining when and how AI meaningfully enhances learning. 
  • Grading and feedback are practices where faculty presence and personalized responses remain essential.  

At a minimum, faculty must convey in their course syllabi what is allowed or prohibited, as well as any responsibilities students have for documenting their AI use. Given the varied types and ever-changing applications of AI, faculty should also encourage ongoing, context-specific, and frank conversations with students to provide necessary guidance and prevent confusion or misunderstanding. 

Reflection Prompt 

Consider how you will communicate expectations for AI use in ways that are clear, consistent, and appropriate for your course context. 

Context 

Clear expectations help prevent confusion and promote fairness. Because AI tools and their uses vary widely across disciplines and assignments, students benefit from explicit guidance about what is permitted, what is restricted, and why. Communicating expectations through both written policies and ongoing conversations helps students understand how AI fits into their learning and reduces misunderstandings about acceptable use. 

Strategies 

  • Clearly state expectations for AI use in your syllabus, including what is allowed, prohibited, or conditional. 
  • Reinforce expectations in assignment instructions and class discussions, especially when introducing new types of work. 
  • Provide guidance on how students should document or acknowledge their AI use when it is permitted. 

Examples  

  • Example 1: Include a syllabus statement that allows AI for brainstorming ideas but prohibits submitting AI-generated text as final work. 
  • Example 2: Explain in class that AI may be used to practice coding or problem-solving, but not for graded assignments. 
  • Example 3: Require students to include a brief process note describing how, if at all, AI was used in completing an assignment. 

Use Case: Scenario 

An instructor teaching an upper-level sociology course outlines in the syllabus that AI tools may be used for brainstorming research questions but not for writing final papers. Before the first major assignment, the instructor reviews these expectations in class and explains the rationale. Students are asked to include a brief process note with each submission describing whether and how AI was used. 

Takeaway 

  • Clear communication about AI use supports fairness and transparency. 
  • Expectations are most effective when reinforced through both written guidance and conversation. 
  • Helping students understand when and how AI may be used reduces confusion and supports responsible learning.  

Faculty can model self-reflective, responsible, and ethical AI practices for their students as well as their colleagues.  If AI was used to develop course materials, create assessments, or generate feedback, faculty should be forthcoming and prepared to discuss the rationale, scope, and implications of its use.  Transparency of this sort fosters trust and helps reinforce community norms about AI use.  

Reflection Prompt 

Consider when and how being transparent about your own use of AI can support trust, clarity, and responsible use in your course. 

Context 

Faculty may use AI in ways that differ from student use, but transparency helps students understand how professional judgment and instructional intent shape those decisions. Being open about AI use provides opportunities to discuss why AI was used, how it was evaluated, and where human expertise remains central. Transparency does not require disclosing every instance of AI use; rather, it involves making thoughtful decisions about what information will help students understand expectations and community norms.

Strategies 

  • Be forthcoming about AI use in course design, assessments, or feedback when it supports trust and understanding. 
  • Explain the rationale and scope of AI use, including how you reviewed or revised AI-generated content. 
  • Use transparency as an opportunity to model ethical judgment and responsible decision-making. 

Examples  

  • Example 1: Explain to students that AI was used to draft an initial rubric or assignment outline, and describe how you revised it to align with course goals. 
  • Example 2: Include a short statement in your syllabus describing how AI may be used in course preparation or instructional materials. 
  • Example 3: Share an example in class where you assessed the strengths and limitations of an AI-generated output before using it. 

Use Case: Scenario 

A lecturer uses AI to generate an initial set of discussion questions for a seminar. Before sharing them with students, she revises the questions to reflect course readings and learning outcomes. She explains to students that AI was used as a starting point and describes how she evaluated and refined the questions, using the example to model responsible and reflective AI use. 

Takeaway 

  • Transparency about faculty AI use supports trust and shared understanding. 
  • Discussing the rationale and scope of AI use helps reinforce ethical and responsible practices. 
  • Modeling reflective AI use helps establish clear community norms for teaching and learning.  

AI tools are powerful but imperfect.  They can introduce or be premised on errors, bias, and misinformation.  Faculty should always review and edit AI outputs before sharing them with students or relying on them in teaching to ensure their accuracy, quality, and appropriateness.  

Reflection Prompt 

Consider how you will review and verify AI-generated content before using it in your teaching. 

Context 

Generative AI can produce content quickly and convincingly, but it does not reliably distinguish between accurate and inaccurate information or account for disciplinary context and nuance. When AI is used in course materials, assessments, or instructional support, faculty remain responsible for ensuring that content is accurate, inclusive, and aligned with course goals. Careful review and verification help prevent the spread of errors or bias and reinforce academic standards. 

Strategies 

  • Review all AI-generated content for accuracy, relevance, and alignment with course learning outcomes before using it. 
  • Evaluate AI outputs through a disciplinary lens, checking claims against course materials, scholarly sources, or professional standards. 
  • Edit and adapt AI-generated content to reflect appropriate tone, context, and instructional intent. 

Examples  

  • Example 1: Use AI to draft an explanation of a complex concept, then verify key claims against course readings and revise for clarity and accuracy. 
  • Example 2: Generate sample assessment questions with AI and edit them to ensure they accurately assess the skills and knowledge emphasized in the course. 
  • Example 3: Review an AI-generated case study to identify assumptions or biases before sharing it with students. 

Use Case: Scenario 

A graduate teaching assistant supporting an introductory course uses AI to draft a brief overview of a complex concept. Before sharing it with students, the GTA checks the content against course materials, revises unclear explanations, and adds examples discussed in class. The final version reflects course expectations rather than the AI’s original output. 

Takeaway 

  • AI-generated content should never be used without faculty review. 
  • Verification and editing are essential to maintaining accuracy and academic quality. 
  • Faculty judgment ensures AI outputs are appropriate for the course context and learning goals.   

Student work, as well as personal and institutional data, should never be uploaded into public AI tools, which may store or reuse this information. AI use also raises a number of social, legal, and ethical questions for education, research, and the workplace that merit consideration. University-approved AI tools satisfy privacy and security standards for faculty use, and the Garnet AI Foundry website maintains a current list of approved tools and integrations.

Reflection Prompt 

Consider how your use of AI tools protects student privacy and aligns with institutional expectations for data security. 

Context 

Many public AI tools retain or reuse information in ways that are not transparent to users. When student work or personal data is entered into these systems, it may be stored or repurposed beyond the instructional context. Protecting privacy therefore requires faculty to make careful decisions about which tools they use and what information they share, while also considering the broader ethical and legal implications of AI use in education and beyond. 

Strategies 

  • Avoid uploading student work, personal information, or institutional data into public AI tools. 
  • Use university-approved AI platforms that meet privacy and security standards for instructional purposes. 
  • Design AI-related activities that rely on anonymized, hypothetical, or instructor-generated content rather than real student data. 

Examples  

  • Example 1: Use a university-approved AI tool to generate sample discussion prompts instead of uploading student submissions into a public platform. 
  • Example 2: Ask students to analyze AI-generated scenarios or fictional datasets that do not include real student information. 
  • Example 3: Create an activity where students evaluate AI outputs using provided examples rather than their own coursework. 

Use Case: Scenario 

An instructor considers using AI to provide feedback examples during class. Rather than uploading student assignments into an AI tool, the instructor uses anonymized prompts and fictional examples generated with a university-approved AI platform. This allows the instructor to demonstrate feedback strategies while protecting student privacy and avoiding the use of identifiable student work. 

Takeaway 

  • Protecting student and institutional data is essential when using AI in teaching. 
  • Public AI tools may store or reuse information in ways that compromise privacy. 
  • University-approved tools and thoughtful activity design help safeguard data while supporting instruction.  

Rather than police and penalize AI use, help students develop AI literacy so they are equipped to scrutinize AI outputs, recognize potential errors or bias, and consider the appropriateness of AI within their disciplines and lives. AI detection tools are notoriously unreliable, and the insights and skills associated with AI literacy will help students use AI responsibly in their academic and future professional contexts. 

Reflection Prompt 

Consider how you can help students build the skills to critically evaluate and responsibly use AI rather than relying on detection or enforcement. 

Context 

Students are increasingly encountering AI tools in academic, professional, and personal settings. Simply restricting or policing AI use does not help students develop the judgment needed to use these tools responsibly. By emphasizing AI literacy (understanding how AI works, where it can fail, and when its use is appropriate), instructors can support deeper learning and prepare students to make informed decisions about AI in their disciplines and beyond.

Strategies 

  • Design activities that ask students to evaluate the accuracy, bias, or limitations of AI-generated content. 
  • Discuss the appropriate and inappropriate uses of AI within your discipline and course context. 
  • Focus on building student understanding and ethical judgment rather than relying on AI detection tools. 

Examples  

  • Example 1: Ask students to compare an AI-generated explanation of a concept with course readings and identify inaccuracies or gaps. 
  • Example 2: Facilitate a class discussion about when AI use supports learning and when it may undermine skill development. 
  • Example 3: Include a short reflective prompt in an assignment asking students to explain how they evaluated any AI output they used. 

Use Case: Scenario 

An instructor teaching an upper-level course notices students experimenting with AI tools. Instead of prohibiting their use, the instructor designs an activity where students analyze an AI-generated response, identify its strengths and weaknesses, and discuss how disciplinary knowledge informs their evaluation. The activity helps students practice critical judgment and understand the limits of AI-generated content. 

Takeaway 

  • Building AI literacy equips students to use AI responsibly and critically. 
  • Detection and policing are less effective than teaching evaluation and judgment. 
  • Helping students scrutinize AI outputs supports learning beyond the classroom. 

By continuing to develop their own understanding of AI, faculty can better adapt their teaching and support their students.  The Center for Teaching Excellence offers professional development resources and sponsors communities of practice for AI users across campus.  Faculty should also explore opportunities for engagement through their professional associations as the possibilities, value, and implications of AI use vary across disciplines. The Garnet AI Foundry serves as USC’s central hub for AI innovation, collaboration, and support for faculty, staff, and students. 

Reflection Prompt 

Consider how strengthening your own AI literacy can help you adapt your teaching and better support student learning. 

Context 

AI technologies and their uses in higher education continue to evolve. As tools, expectations, and practices change, faculty benefit from staying informed about both the possibilities and limitations of AI in teaching and learning. Ongoing engagement with professional development and disciplinary communities helps faculty make informed decisions about how AI fits within their courses and academic fields. 

Strategies 

  • Engage in professional development opportunities focused on AI in teaching and learning. 
  • Participate in communities of practice to explore ideas, ask questions, and learn from colleagues. 
  • Seek out discipline-specific guidance through professional associations to understand how AI intersects with field-specific practices and standards. 

Examples  

  • Example 1: A faculty member attends a CTE workshop on generative AI to explore emerging teaching practices and considerations. 
  • Example 2: A faculty member participates in a Generative AI community of practice to share experiences and learn from colleagues across disciplines. 
  • Example 3: A faculty member reviews guidance from a professional association to better understand AI-related issues specific to their field. 

Use Case: Scenario 

A faculty member teaching in a rapidly evolving discipline recognizes that AI tools are becoming more common in both academic and professional contexts. To stay current, the faculty member participates in CTE-sponsored professional development and joins a campus community of practice focused on AI. Insights from these experiences inform course updates and help the faculty member guide students in thoughtful, responsible AI use. 

Takeaway 

  • Building AI literacy is an ongoing process that supports effective teaching. 
  • Professional development and peer learning help faculty adapt to emerging AI practices. 
  • Discipline-specific engagement ensures AI use aligns with academic and professional standards.  

 

