AI in My Mathematical Teaching

What Actually Worked (and What Didn't)

Leo Petrov
University of Virginia
CMU/Online - June 25, 2025

The Core Principle

Expert use requires expertise. Non-reflective use does not lead to learning

(Think automated translation vs learning a language)

This understanding should shape our use of AI in teaching.

1. What Actually Worked: Lecture Notes

Generated complete lecture notes, with AI assistance, for Calc 3 (Fall 2024) and Topics in Random Matrices (Spring 2025)

My reality: Almost the same time investment (and a more focused one!). Much better artifacts to share with students. Student feedback:

"Typed lecture notes were amazing companion to in–person classes, allowing to actually think during the class rather than trying to copy notes from blackboard"

"...the lectures ended up a bit rushed... Previewing content via posted notes before the lecture did provide some help with this."

"Additionally, the typed lecture notes on the website were extremely helpful."

Examples (see also https://lpetrov.cc/rmt25/)

2. The Failed Experiment: AI-Encouraged Harder Problems in Calculus

What I tried: Gave computationally intensive problems

  • "Compute the volume of the intersection of 2 (or 3) cylinders" (a worked sketch for the two-cylinder case follows this list)
  • "Optimize the torus: for given surface area, find maximal volume"
  • Show that a given identity holds for any three three-dimensional vectors
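
For the two-cylinder case, assuming the classical setup of two perpendicular cylinders of equal radius r (the Steinmetz solid), the slicing computation is short: take the cylinders $x^2 + z^2 \le r^2$ and $y^2 + z^2 \le r^2$; the slice at height $z$ is a square of side $2\sqrt{r^2 - z^2}$, so

\[
  V = \int_{-r}^{r} 4\,(r^2 - z^2)\, dz = \frac{16\, r^3}{3}.
\]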

My thinking: students would use AI and learn the concepts instead of getting bogged down in arithmetic

Reality: Some students were not comfortable with AI, so they had to struggle through these problems without help.

What Students Said (And... No Positive Comments)

"I did not really like the AI based questions on the written homework. I found that they actually made it more difficult for me to learn the information as these problems were typically so arithmetically challenging so that AI could assist but the AI would complete the problem incorrectly and teach you the wrong methods of computation."

"I think that in the future, less focus on using AI tools in written homework and more of a focus on providing practice would be helpful in teaching students the content."

"While the AI allowed problems were very interesting, I felt like sometimes they were too complex for the scope of the course"

"The webassigns and the practice problems provided in class helped, but I would say the homework with the AI function did not help me out as much to understanding the concepts being learned"

"... [I would suggest] also get rid of the homework question that uses AI as a form of help"

"While the AI questions can be interesting, they're not useful for the quizzes/exams and end up being pretty frustrating and time–consuming"

My Failures with AI Integration in Homework

  1. Assumed students wanted AI help (some of them didn't)
  2. Underestimated comfort level variation
  3. Focused on outsourcing computational mastery to AI (missed pedagogical impact)
What would you do differently?

Hint Scaffolding

I did not forbid AI use on calculus homework problems (most of the grade came from in-class quizzes and exams), but I directed students to use AI for hints rather than for complete solutions, and supplied a prompt they could use to ask for a hint on a problem. However, I am not sure how widely this hint prompt was actually used.
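
As an illustration (not the exact wording given to students), a hint-only prompt might read:

    "I will paste a calculus problem. Do not solve it and do not carry out any computation. Give me a single hint: name the relevant concept or the first step to try, in at most two sentences. If I ask again, reveal only the next small step."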

3. Tool That Works Well: Copilot for Writing

Creating homework solutions (since Spring 2023): Copilot dramatically outperforms chat-based LLMs because it allows real-time course correction.

Example workflow (expert use while leveraging AI speed):

  • I type or paste the problem statement
  • Copilot suggests: "Using Poisson distribution..."
  • I immediately delete and type: "Using Central Limit Theorem..."
  • Copilot pivots and continues with the correct approach

LaTeX ↔ Mathematica integration. Write solutions in LaTeX with Copilot. Convert expressions to Mathematica for numerical verification (I have an AI prompt for that, too). Catch computational errors.
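
As a small illustration of the verification step (the integral here is just an example, not taken from an actual assignment): if a drafted solution claims that the integral of x e^(-x) from 0 to 1 equals 1 - 2/e, the claim is easy to spot-check numerically in Mathematica:

    NIntegrate[x Exp[-x], {x, 0, 1}]   (* 0.264241 *)
    N[1 - 2/E]                         (* 0.264241, matches the claimed closed form *)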

LLMs cannot compute reliably, but they can write about computations when properly guided

Example

(video sped up)

4. What Works Outside of Teaching (Example 1/2)

Email efficiency prompt: one-line stub → professional quick response (I hope it doesn't sound AI-generated). To use it, I select the email and my one-line stub and send both to an LLM. The response lands in my clipboard; I paste it into the reply, compare, and verify before sending.
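
An illustrative version of such a prompt (not my exact wording) could be:

    "Below is an email I received, followed by a one-line stub of my reply. Expand the stub into a brief, professional response in my voice. Do not add commitments or details beyond the stub, and return only the reply text."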

(video sped up)

Academic Advising AI Assistant (Example 2/2)

At UVA, faculty advise students on course selection, career paths, and academic requirements. I created a custom GPT loaded with department documents: course catalogs and prerequisites, degree requirements, and common advising scenarios.

Access was given to advisors, not students, since we want only vetted advice passed on to students.

Cost: a single $20/month ChatGPT Plus subscription; the bot is created once and remains shareable and accessible to anyone, permanently.

5. Research Visuals (https://lpetrov.cc/domino/)

6. What Doesn't Seem to Work Yet

  • AI for grading: I have not seriously tried using LLMs to grade assignments
    • Have you experimented with this? What were your results?

Pressing questions from my colleagues, where LLMs are mostly inadequate so far:

  • OCR for handwritten notes: Mass-converting handwritten math drafts into LaTeX with good precision

  • Hand-drawn figure sketches to TikZ: Transforming rough diagrams into complete TikZ figures with an LLM yields abysmal results. It works much better if you describe the figure in words.

What tools or approaches have you tried for these challenges?

The Uncomfortable Questions

Primary Fear: Are We Creating Dependent Learners?

  • LLM users (for essay writing) showed weakest brain connectivity (https://arxiv.org/abs/2506.08872)
  • When LLM users switched to "brain-only" writing, they exhibited under-engagement and reduced neural activity
  • Students report feeling "lost" when AI tools aren't available
  • Self-reported ownership of essays was lowest in the LLM group
  • LLM users struggled to accurately quote their own work

Are we widening the gap between AI-savvy and traditional students?
What happens when the AI is wrong and students lack expertise to know?
Is efficiency worth the pedagogical cost?

The Policy Discussion We Need Around Teaching

It helps to have an honest discussion with students (on the first day) about the AI policy in the class: What are students comfortable with? What should we allow, and what should we not?
Having students invested in the decision helps.

  • How do we assess whether AI use enhances or replaces genuine learning?
  • What specific learning objectives require human-only demonstration?
  • How can we design assessments that remain meaningful in an AI-enabled world?
  • What training do students need to use AI tools responsibly and effectively?
  • How do we maintain academic integrity while embracing technological advancement?
  • What support systems do we need for students who choose not to use AI?
What would your approach be?

Thank You!

Continue the conversation