MATH 8380: Random Matrices, Spring 2025

E-mail address for communication: lenia.petrov+rmt2025@gmail.com

Abstract

This course will explore a rich zoo of probabilistic models, with a focus on random matrices, dimer models, and statistical mechanical models of algebraic origin. Geometric structures that naturally arise in these contexts will also be discussed. The course content will be flexible and tailored to the interests of participants, making it ideal for most graduate students in the math department, second year and up.

A key feature of the course will be a "reading course" component, where weekly meetings provide focused, interactive learning sessions.

Some topics overlap with the earlier, more continuous-probability-focused course https://lpetrov.cc/rmt19/, but most of the content will be new.

Combined lecture notes in a single PDF

Course Meetings and notes

2:00-3:15pm, Kerchof 326

Regular Lectures on Wednesdays and some Mondays

Individual lecture notes (problem sets are at the end of each lecture):
  1. (Mon) January 13. Moments of random variables and random matrices
    Table of Contents
    • 1: Why study random matrices?
      • On the history.
      • Classical groups and Lie theory.
      • Toolbox.
      • Applications.
    • 2: Recall Central Limit Theorem
      • 2.1: Central Limit Theorem and examples
      • 2.2: Moments of the normal distribution
      • 2.3: Moments of sums of iid random variables
        • 2.3.1: Computation of moments
        • 2.3.2: \(n\)-dependent factor
        • 2.3.3: Combinatorial factor
        • 2.3.4: Putting it all together
      • 2.4: Convergence in distribution
    • 3: Random matrices and semicircle law
      • 3.1: Where can randomness in a matrix come from?
      • 3.2: Real Wigner matrices
      • 3.3: Empirical spectral distribution
      • 3.4: Expected moments of traces of random matrices
      • 3.5: Immediate next steps
    • A: Problems
      • A.1: Normal approximation
      • A.2: Convergence in distribution
      • A.3: Moments of sum justification
      • A.4: Distribution not determined by moments
      • A.5: Uniqueness of the normal distribution
      • A.6: Quaternions
      • A.7: Ensemble \(UD_\lambda U^\dagger\)
      • A.8: Invariance of the GOE
      • A.9: Counting \(n\)-powers in the real Wigner matrix
      • A.10: Counting trees
  2. (Wed) January 15. Wigner's semicircle law
    Table of Contents
    • 1: Recap
    • 2: Two computations
      • 2.1: Moments of the semicircle law
      • 2.2: Counting trees and Catalan numbers
    • 3: Analysis steps in the proof
      • 3.1: The semicircle distribution is determined by its moments
      • 3.2: Convergence to the semicircle law
        • 3.2.1: A concentration bound and the Borel–Cantelli lemma
        • 3.2.2: Tightness of \(\{\nu_n\}\) and subsequential limits
        • 3.2.3: Characterizing the limit measure
    • 4: Proof of Proposition (variance bound): bounding the variance
    • 5: Remark: Variants of the semicircle law
    • B: Problems
      • B.1: Standard formula
      • B.2: Tree profiles
      • B.3: Ballot problem
      • B.4: Reflection principle
      • B.5: Bounding probability in the proof
      • B.6: Almost sure convergence and convergence in probability
      • B.7: Wigner's semicircle law for complex Wigner matrices
      • B.8: Semicircle law without the moment condition
  3. (Wed) January 22. Gaussian and tridiagonal matrices
    Table of Contents
    • 1: Recap
    • 2: Gaussian ensembles
      • 2.1: Definitions
      • 2.2: Joint eigenvalue distribution for GOE
      • 2.3: Step A. Joint density of matrix entries
      • 2.4: Step B. Spectral decomposition
      • 2.5: Step C. Jacobian
        • Parametrizing \(\delta Q\).
        • Computing \(\delta W\).
        • Local structure of the map.
      • 2.6: Step D. Final Form of the density
    • 3: Other classical ensembles with explicit eigenvalue densities
      • 3.1: Wishart (Laguerre) ensemble
        • 3.1.1: Definition via SVD
        • 3.1.2: Joint density of eigenvalues
      • 3.2: Jacobi (MANOVA/CCA) ensemble
        • 3.2.1: Setup
        • 3.2.2: Jacobi ensemble
      • 3.3: General Pattern and \(\beta\)-Ensembles
    • 4: Tridiagonal form for real symmetric matrices
      • Step 1: Zeroing out subdiagonal entries in the first column.
      • Step 2: Inductive reduction on the trailing principal submatrix.
      • Step 3: Repeat for columns (and rows) 3, 4, etc.
    • 5: Tridiagonalization of random matrices
      • 5.1: Dumitriu–Edelman tridiagonal model for GOE
      • 5.2: Generalization to \(\beta\)-ensembles
    • C: Problems
      • C.1: Invariance of GOE and GUE
      • C.2: Preimage size for spectral decomposition
      • C.3: Distinct eigenvalues
      • C.4: Testing distinctness of eigenvalues via rank-1 perturbations
      • C.5: Jacobian for GUE
      • C.6: Normalization for GOE
      • C.7: Wishart eigenvalue density
      • C.8: Householder reflection properties
      • C.9: Distribution of the Householder vector in random tridiagonalization
      • C.10: Householder reflection for GUE
      • C.11: Jacobi ensemble is related to two Wisharts
  4. (Wed) January 29. Semicircle law for G\(\beta\)E via tridiagonalization. Beginning determinantal processes
    Table of Contents
    • 1: Recap
      • 1.1: Gaussian ensembles
      • 1.2: Tridiagonalization
    • 2: Tridiagonal random matrices
      • 2.1: Distribution of the tridiagonal form of the GOE
      • 2.2: Dumitriu–Edelman G\(\beta\)E tridiagonal random matrices
      • 2.3: The case \(\beta = 2\)
    • 3: Wigner semicircle law via tridiagonalization
      • 3.1: Moments for tridiagonal matrices
      • 3.2: Asymptotics of chi random variables
      • 3.3: Completing the proof: global semicircle behavior
    • 4: Wigner semicircle law via Stieltjes transform
      • 4.1: Tridiagonal structure and characteristic polynomials
        • 4.1.1: Three-term recurrence for the characteristic polynomial
        • 4.1.2: Spectral connection and eigenvalues
      • 4.2: Stieltjes transform / resolvent
      • 4.3: Approach via continued fractions
    • 5: Determinantal point processes (discrete)
    • 6: Application of determinantal processes to random matrices at \(\beta = 2\)
      • 6.1: Local eigenvalue statistics (bulk and edge scaling limits)
      • 6.2: Correlation functions and densities
      • 6.3: Poisson process example
    • D: Problems
      • D.1: Eigenvalue density of G\(\beta\)E
      • D.2: Chi-square mean and variance
      • D.3: Edge contributions in the tridiagonal moment computation
      • D.4: Hermite polynomials and three-term recurrence
      • D.5 (unnumbered title)
      • D.6: Gap probabilities
      • D.7: Stieltjes transform approach for tridiagonal matrices
  5. (Wed) February 5. Determinantal Point Processes and the GUE
    Table of Contents
    • 1: Recap
    • 2: Discrete determinantal point processes
      • 2.1: Definition and basic properties
    • 3: Determinantal structure in the GUE
      • 3.1: Correlation functions as densities with respect to Lebesgue measure
      • 3.2: The GUE eigenvalues as DPP
        • 3.2.1: Setup
        • 3.2.2: Writing the Vandermonde as a determinant
        • 3.2.3: Orthogonalization by linear operations
        • 3.2.4: Rewriting the density in determinantal form
      • 3.3: Christoffel–Darboux formula
    • E: Problems
      • E.1: Gap Probability for Discrete DPPs
      • E.2: Generating Functions for Multiplicative Statistics
      • E.3: Variance
      • E.4: Formula for the Hermite polynomials
      • E.5: Generating function for the Hermite polynomials
      • E.6: Projection Property of the GUE Kernel
      • E.7: Recurrence Relation for the Hermite Polynomials
      • E.8: Differential Equation for the Hermite Polynomials
      • E.9: Norm of the Hermite Polynomials
      • E.10: Existence of Determinantal Point Processes with a Given Kernel
  6. (Wed) February 19. Double contour integral kernel. Steepest descent and semicircle law
    Table of Contents
    • 1: Recap: Determinantal structure of the GUE
    • 2: Double Contour Integral Representation for the GUE Kernel
      • 2.1: One contour integral representation for Hermite polynomials
      • 2.2: Another contour integral representation for Hermite polynomials
      • 2.3: Normalization of Hermite polynomials
      • 2.4: Double contour integral representation for the GUE kernel
      • 2.5: Conjugation of the kernel
      • 2.6: Extensions
    • 3: Steepest descent — generalities for single integrals
      • 3.1: Setup
      • 3.2: Saddle points and steepest descent paths
      • 3.3: Local asymptotic evaluation near a saddle point
    • 4: Steepest descent for the GUE kernel
      • 4.1: Scaling
      • 4.2: Critical points
      • 4.3: Imaginary critical points: \(\lvert X\rvert < 2\), "bulk"
    • F: Problems
      • F.1: Different global positions
      • F.2: Sine kernel
      • F.3: Discrete sine process
  7. (Mon) February 24. Steepest descent and local statistics. Cutting corners
    Table of Contents
    • 1: Steepest descent for the GUE kernel
      • 1.1: Recap
      • 1.2: Scaling
      • 1.3: Critical points
      • 1.4: Imaginary critical points: \(\lvert X\rvert < 2\), "bulk"
      • 1.5: Real critical points: \(\lvert X\rvert > 2\), "large deviations"
      • 1.6: Double critical point: \(\lvert X\rvert=2\), "edge"
      • 1.7: Airy kernel, Tracy–Widom distribution, and convergence of the maximal eigenvalue
      • 1.8: Remark: what happens for general \(\beta\)?
    • 2: Cutting corners: setup
    • 3: Corners of Hermitian matrices
      • 3.1: Principal corners
      • 3.2: Interlacing
      • 3.3: Orbital measure
    • 4: Polynomial equation and joint distribution
      • 4.1: Derivation
      • 4.2: Inductive nature of the transition
      • 4.3: Case \(\beta = \infty\)
    • G: Problems
      • G.1: General bulk case
      • G.2: Large deviations
      • G.3: Airy kernel
      • G.4: Interlacing proof
  8. (Wed) February 26. Cutting corners and loop equations
    Table of Contents
    • 1: Cutting corners: polynomial equation and distribution
      • 1.1: Recap: polynomial equation
      • 1.2: Extension to general \(\beta\)
      • 1.3: Distribution of the eigenvalues of the corners
    • 2: Loop equations
      • 2.1: Formulation
      • 2.2: Proof of Theorem (loop equation) for \(\beta > 2\)
    • 3: Applications of loop equations
      • 3.1: Stieltjes transform equations
      • 3.2: Asymptotic behavior
      • 3.3: Example: G\(\beta\)E and the semicircle law
    • H: Problems
      • H.1: Cauchy determinant
      • H.2: Jacobian from \(n-1\) to \(n\) dependent variables
      • H.3: Dirichlet density
      • H.4: General beta Gaussian density and cutting corners
      • H.5: General \(\beta\) Corners Process Simulation
  9. (Wed) March 5. Loop equations and asymptotics to Gaussian Free Field
    Table of Contents
    • 1: Recap
      • (Dynamical) loop equations
      • Loop equations for \(W=0\)
      • The full corners process
      • Example: G\(\beta\)E and the semicircle law
    • 2: Gaussian Free Field
      • 2.1: Gaussian correlated vectors and random fields
      • 2.2: Gaussian fields as random generalized functions
      • 2.3: Concrete treatment via orthogonal functions
      • 2.4: Connection to Brownian bridge
      • 2.5: Covariance structure and Green's function
      • 2.6: The GFF on the upper half-plane
    • 3: Fluctuations
      • 3.1: Height function and related definitions
      • 3.2: Main results on Gaussian fluctuations
      • 3.3: Deformed ensemble
      • 3.4: Wiener–Hopf-like factorization
      • 3.5: First order asymptotics of \(\mathcal{A}(z)\)
      • 3.6: Outlook of further steps
    • I: Problems
      • I.1: Brownian bridge
  10. (Mon) March 24. Dyson Brownian Motion
    Table of Contents
    • 1: Motivations
      • 1.1: Why introduce time?
      • 1.2: Simple example: 1×1 case
    • 2: Matrix Brownian motion and its eigenvalues
      • 2.1: Definition
      • 2.2: Eigenvalues as Markov process
    • 3: Dyson Brownian Motion
      • 3.1: Stochastic differential equations — an informal introduction
        • Summary
      • 3.2: Heuristic derivation of the SDE for the Dyson Brownian Motion
    • 4: Mapping the G\(\beta\)E densities with the Dyson Brownian Motion
    • 5: Determinantal structure for \(\beta = 2\)
    • 6: Harish-Chandra–Itzykson–Zuber (HCIZ) integral
      • 6.1: Statement of the HCIZ formula
      • 6.2: Reduction to the diagonal case
      • 6.3: Symmetry
      • 6.4: Conclusion of the argument
    • J: Problems
      • J.1: Collisions
      • J.2: Estimate on the modulus of continuity
      • J.3: Generator for Dyson Brownian Motion
      • J.4: Constant in the HCIZ formula
  11. (Wed) March 26. Asymptotics of Dyson Brownian Motion with an outlier
    Table of Contents
    • 1: Recap
      • 1.1: Dyson Brownian Motion (DBM)
      • 1.2: Eigenvalue SDE
      • 1.3: Preservation of G\(\boldsymbol{\beta}\)E density
      • 1.4: Harish–Chandra–Itzykson–Zuber (HCIZ) integral
    • 2: Optional: proof of HCIZ integral via representation theory
    • 3: Determinantal structure for \(\beta = 2\)
      • 3.1: Transition density
      • 3.2: Determinantal correlations
      • 3.3: On the proof of determinantal structure
    • 4: Asymptotic analysis: signal plus noise
      • 4.1: Setup
      • 4.2: Outline of the steepest descent approach
      • 4.3: Asymptotics
      • 4.4: Airy kernel
      • 4.5: BBP transition and the deformed Airy kernel
      • 4.6: Gaussian regime
      • 4.7: Matching Fredholm determinant to the Gaussian distribution
    • K: Problems
      • K.1: Biorthogonal ensembles
      • K.2: Scaling of the kernel
      • K.3: Gaussian regime and integration contours
      • K.4: Gaussian kernel
      • K.5: GUE kernel
  12. (Wed) April 2. Random Growth Models
    Table of Contents
    • 1: Recap
      • 1.1: Dyson Brownian Motion with Determinantal Structure
      • 1.2: The BBP Phase Transition
      • 1.3: Remark: Corners process with outliers
      • 1.4: Goal today
    • 2: A window into universality: Airy line ensemble
    • 3: KPZ universality class: Scaling and fluctuations
      • 3.1: Universality of random growth
      • 3.2: KPZ equation
      • 3.3: First discoveries
      • 3.4: Effect of initial conditions
      • 3.5: Remark: Gaussian Free Field in KPZ universality
    • 4: Polynuclear Growth and Last Passage Percolation
      • 4.1: Definition and single-layer PNG
      • 4.2: Multiline PNG
      • 4.3: KPZ mechanisms in the PNG growth
      • 4.4: Last Passage Percolation (LPP)
      • 4.5: Topics to continue
    • L: Problems
      • L.1: PNG ordering
      • L.2: PNG and last passage percolation
  13. (Wed) April 9. Matching Random Matrices to Random Growth I
    Table of Contents
    • 1: Recap
    • 2: The spiked Wishart ensemble
      • 2.1: Definition of the spiked Wishart process
      • 2.2: Markov chain and transition kernel for eigenvalues
    • 3: The exponential LPP model
    • 4: Geometric LPP and Robinson–Schensted–Knuth correspondence
      • 4.1: Geometric LPP
      • 4.2: Bijective mapping of arrays via toggles
      • 4.3: Weight preservation
    • M: Problems
      • M.1: Wishart Markov chain
      • M.2: Interlacing
      • M.3: Gibbs property
      • M.4: Transition kernels integrate to one
      • M.5: Distribution of the eigenvalues
      • M.6: Weight preservation under toggle
      • M.7: RSK independence of order
      • M.8: Asymptotics: BBP phase transition
  14. (Wed) April 16. Matching Random Matrices to Random Growth II
    Table of Contents
    • 1: Recap
      • 1.1: Main goal
      • 1.2: Spiked Wishart ensembles and the largest eigenvalue process
      • 1.3: Inhomogeneous last-passage percolation
      • 1.4: RSK via toggles: definitions and weight preservation
    • 2: Distributions of last-passage times in geometric LPP
      • 2.1: Matching RSK to last-passage percolation
      • 2.2: Distribution in RSK
      • 2.3: Conditional law in RSK
    • 3: Passage to the continuous limit
      • 3.1: Key elementary lemma
      • 3.2: Scaling the environment W
      • 3.3: Scaling the Schur polynomials
      • 3.4: Scaling the transition formula
        • Prefactor
        • Exponential term
        • Ratio of Schur polynomials
        • Combining the terms
      • 3.5: Conclusion
    • 4: PushTASEP in the geometric LPP model
    • N: Problems
      • N.1: Non-Markovianity
      • N.2: Schur polynomials — equivalence of definitions
      • N.3: Schur polynomials — stability property
      • N.4: Cauchy identity for Schur polynomials
  15. (Wed) April 23. Random Matrices and Topology
    Table of Contents
    • 1: Introduction
    • 2: Gluing polygons into surfaces
      • 2.1: Gluing edges of a polygon
      • 2.2: Starting to count
      • 2.3: Dual picture
      • 2.4: Notation
    • 3: Harer–Zagier formula (statement)
    • 4: Gaussian integrals and Wick formula
      • 4.1: The standard one-dimensional Gaussian measure
      • 4.2: Gaussian measures on \(\mathbb{R}^k\)
        • Basic facts.
      • 4.3: Wick (Isserlis) formula
    • 5: GUE integrals and gluing polygons
      • 5.1: Traces of powers, again
      • 5.2: Computing traces of powers
      • 5.3: Proof of Harer–Zagier formula
    • 6: Going further: Multi-matrix models
      • 6.1: Maps with several faces and Feynman diagrams
      • 6.2: Two-matrix model and the Ising interaction
    • O: Problems
      • O.1: Gluing a Sphere
      • O.2: Wick's formula for affine functions
      • O.3: GOE and non-orientable surfaces
Interactive simulations of some random matrix properties are available at this link.
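As a complement to the interactive simulations, here is a minimal pure-Python sketch (standard library only; function names are my own) of the tridiagonal model from Lectures 3–4 at \(\beta=1\): the Householder tridiagonalization of a GOE matrix has independent N(0,2) diagonal entries and chi-distributed off-diagonal entries, and its scaled trace moments approach the Catalan numbers predicted by Wigner's semicircle law. The tridiagonal structure lets us compute \(\operatorname{tr} T^2\) and \(\operatorname{tr} T^4\) without forming the full matrix.

```python
import math
import random

def sample_tridiagonal_goe(n, rng=random):
    """Symmetric tridiagonal form of an n x n GOE matrix (beta = 1).
    Diagonal: N(0, 2); k-th off-diagonal: chi with n - k degrees of freedom.
    (The G-beta-E generalization replaces chi_{n-k} by chi_{(n-k)*beta}.)"""
    diag = [rng.gauss(0.0, math.sqrt(2.0)) for _ in range(n)]
    # chi_k = sqrt(chi-square_k), and chi-square_k = Gamma(k/2, scale=2)
    off = [math.sqrt(rng.gammavariate((n - k) / 2.0, 2.0)) for k in range(1, n)]
    return diag, off

def scaled_moments(diag, off):
    """Return tr(T^2)/n^2 and tr(T^4)/n^3 for symmetric tridiagonal T.
    By the semicircle law these approach the Catalan numbers C_1 = 1, C_2 = 2."""
    n = len(diag)
    tr2 = sum(a * a for a in diag) + 2.0 * sum(b * b for b in off)
    # T^2 is pentadiagonal, and tr(T^4) = ||T^2||_F^2 since T is symmetric;
    # pad the off-diagonal so bb[i], bb[i+1] flank row i.
    bb = [0.0] + off + [0.0]
    d2 = [diag[i] ** 2 + bb[i] ** 2 + bb[i + 1] ** 2 for i in range(n)]
    o1 = [off[i] * (diag[i] + diag[i + 1]) for i in range(n - 1)]
    o2 = [off[i] * off[i + 1] for i in range(n - 2)]
    tr4 = (sum(x * x for x in d2)
           + 2.0 * sum(x * x for x in o1)
           + 2.0 * sum(x * x for x in o2))
    return tr2 / n ** 2, tr4 / n ** 3

random.seed(2025)
m2, m4 = scaled_moments(*sample_tridiagonal_goe(800))
print(m2, m4)  # both should be close to C_1 = 1 and C_2 = 2
```

This is only a sanity check of the moment method, not a substitute for the eigenvalue histograms in the interactive simulations; sampling the off-diagonal entries directly via chi variables is exactly the shortcut the Dumitriu–Edelman model provides.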

Student Presentations on Mondays in April

  1. (Mon) April 7 (2:05-3:15pm, Kerchof 326)
    • Declan Stacy — Doob's \(h\)-transform and Dyson Brownian motion at \(\beta=2\)
    • Suren Kyurumyan — Orthogonal polynomials and asymptotics of Hermite kernel
  2. (Mon) April 14 (2:05-3:15pm, Kerchof 326)
    • Annika Kelly — Numerical linear algebra and random matrices
    • Jun Park — Stochastic Airy operator and edge limit of Gaussian Beta ensembles
  3. (Mon) April 21 (2:05-3:15pm, Kerchof 326)
  4. (Mon) April 28 (2:05-3:15pm, Kerchof 326)

Weekly individual meetings

The course spans 12 full weeks, excluding the first week, the final week, and the week following Spring break, when I will be traveling. Students are required to meet with me at least 8 times during these 12 weeks, with each meeting lasting approximately 45 minutes. (In-person is strongly preferred, but a Zoom option is available.) These meetings are an integral part of the reading course: we can discuss lectures, homework solutions, and presentations; explore research topics; or dig into any other relevant topics related to random matrices.

During the first week of class, we will establish a regular weekly meeting time for each student. While some flexibility is possible, the goal is to maintain a consistent weekly schedule throughout the semester.

Books

  1. Mehta, M.L. "Random Matrices". A first textbook that approaches the subject through the lens of theoretical physics.
  2. Anderson, G.W., Guionnet, A. and Zeitouni, O. "An Introduction to Random Matrices". A comprehensive treatment that emphasizes probabilistic methods and stochastic analysis.
  3. Pastur, L. and Shcherbina, M. "Eigenvalue Distribution of Large Random Matrices". An analytical perspective on random matrix theory, focusing on mathematical techniques.
  4. Muirhead, R.J. "Aspects of Multivariate Statistical Theory". A statistical approach to random matrices.
  5. Vershynin, R. "High-dimensional probability: An introduction with applications in data science". A contemporary work bridging random matrix theory with modern data science applications.
  6. Baik, J., Deift, P., and Suidan, T. "Combinatorics and Random Matrix Theory". Explores the connections between random matrices and combinatorial probability, particularly in asymptotic problems.
  7. Forrester, P.J. "Log-Gases and Random Matrices". A comprehensive reference work containing extensive collections of explicit formulas and results.
  8. Akemann, G., Baik, J., and Di Francesco, P. (editors). "The Oxford Handbook of Random Matrix Theory". A curated collection featuring diverse perspectives from experts across the field's many applications.
  9. Tao, T. "Topics in Random Matrix Theory". A pedagogical approach derived from graduate-level teaching and academic blog content.
  10. Potters, M., and Bouchaud, J.P. "A First Course in Random Matrix Theory for Physicists, Engineers and Data Scientists". An accessible introduction prioritizing intuition and applications over rigorous proofs.
  11. Bai, Z.D., and Silverstein, J.W. "Spectral Analysis of Large Dimensional Random Matrices". Focuses on the spectral properties of high-dimensional random matrices.

Lecture notes by various authors

  1. V. Gorin
  2. B. Valkó
  3. T. Tao
  4. F. Rezakhanlou
  5. M. Krishnapur
Additional references are included in lecture notes.

Assessment

The course grade is based roughly equally on three components:

1. Homework Problems

2. Presentation

3. Weekly Meetings

Policies

Approved accommodations

All students with special needs requiring accommodations should present the appropriate paperwork from the Student Disability Access Center (SDAC). It is the student's responsibility to present this paperwork in a timely fashion and follow up with the instructor about the accommodations being offered. Accommodations for midterms or final exams (e.g., extended time) should be arranged at least 5 days before an exam.

Honor code

The University of Virginia Honor Code applies to this class and is taken seriously. Any honor code violations will be referred to the Honor Committee.

Collaboration on homework assignments

Group work on homework problems is allowed and strongly encouraged; discussions are in general very helpful and inspiring. However, get well started on the problems before talking to others, and contribute your fair share to the process. When completing the written homework assignments, everyone must write up their own solutions in their own words and cite any reference (other than the textbook and class notes) that they use. Quotations and citations are part of the Honor Code, both at UVA and in the academic community at large.