NEARLY OPTIMAL SPECTRAL GAPS FOR RANDOM BELYI SURFACES
About This Paper
In this paper, we show that a random hyperbolic surface in the Brooks-Makover model has a spectral gap greater than $\frac{1}{4}-\left(\frac{1}{n}\right)^{\frac{1}{221}}$, confirming the nearly optimal spectral gap conjecture in this model.
Welcome to another episode of ResearchPod. Sam, walk us through this paper you've been reading—what's it about?
The paper, by Yang Shen and Yunhui Wu, looks at random shapes called hyperbolic surfaces—think of them as doughnut-like objects with a special kind of bendy geometry where triangles have angles that add up to less than 180 degrees, unlike flat paper. These surfaces vibrate in certain ways, and researchers study the lowest vibration frequency to see how "rigid" the shape is. The key claim here is that in one model for making random versions of these, called the Brooks-Makover model, the lowest frequency gets very close to an ideal limit of 1/4 as the surfaces get bigger.
So this paper is basically asking if random hyperbolic surfaces can hit that near-perfect rigidity mark of about 1/4, which regular, hand-built ones can't quite reach?
Yes, exactly. For years, mathematicians have known that λ₁, the label for that lowest vibration measure, cannot asymptotically exceed 1/4 along any sequence of these surfaces, but they wondered whether random surfaces could get arbitrarily close. Of the three main random models, two were settled earlier: the Weil-Petersson and random covering models both now have proofs showing λ₁ above 1/4 minus a tiny error. The Brooks-Makover model, built by gluing ideal triangles along the edges of a random 3-regular graph with orientations, was the holdout, until this paper proved it too.
Right, so the core problem was that past methods worked for the other models but fell short here because of how Brooks-Makover constructs its surfaces from graphs and gluings?
Precisely. Brooks and Makover first showed a fixed positive gap years ago, but not this near-optimal one. This work adapts a polynomial method, essentially expanding the averages of traces of random permutations in powers of 1 over the size, to prove that the random representations converge strongly to a fixed representation coming from the modular surface, which blocks low eigenvalues. It closes the last gap in the conjecture across all three models.
But how do they actually build these surfaces in the Brooks-Makover model to make that convergence happen?
They begin with a random graph where every point connects to exactly three others, and they add directions to the edges around each point, like clockwise arrows. At each point, they swap it out for a copy of an ideal triangle—a shape in hyperbolic space with vertices stretching off to infinity, like three infinite funnels meeting at marked midpoints on their sides. The three half-edges from the point line up with the triangle's sides, matching the arrow direction. When two points connect by an edge, they glue those matching sides: midpoints together, and in a reversed way so both triangles' arrows flow consistently without twisting. This creates a surface full of those infinite funnel holes, called cusps.
Okay, so it's like precise origami with infinite-sided paper triangles, gluing midpoints to keep everything smooth and directed right?
Exactly. There's an equivalent recipe using two shuffles of labels 1 through 6n. One shuffle pairs every label, like matching 3n pairs of socks. The other sorts them into 2n groups of three that cycle around. They label the sides of the 2n ideal triangles using those groups, keeping the cycle order consistent with each triangle's natural flow. Then they glue the pairs the same way: midpoints aligned, orientations preserved. A bijection shows the two recipes build identical surfaces.
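To make that recipe concrete, here is a minimal Python sketch of sampling the two shuffles: a random perfect matching of the 6n labels, and a random grouping into 2n three-element cycles. The function name `random_model` and the list encoding are my own illustration, not the paper's notation.

```python
import random

def random_model(n, seed=None):
    """Sample the two shuffles on 6n labels: sigma pairs every label
    (a fixed-point-free involution, 3n pairs), and tau sorts them
    into 2n groups of three that cycle around (disjoint 3-cycles)."""
    rng = random.Random(seed)
    N = 6 * n
    labels = list(range(N))

    # sigma: a random perfect matching of the labels
    rng.shuffle(labels)
    sigma = [0] * N
    for i in range(0, N, 2):
        a, b = labels[i], labels[i + 1]
        sigma[a], sigma[b] = b, a

    # tau: a random partition into ordered 3-cycles a -> b -> c -> a
    rng.shuffle(labels)
    tau = [0] * N
    for i in range(0, N, 3):
        a, b, c = labels[i], labels[i + 1], labels[i + 2]
        tau[a], tau[b], tau[c] = b, c, a
    return sigma, tau

sigma, tau = random_model(4, seed=0)
# sigma is an involution with no fixed points; tau has order 3
assert all(sigma[sigma[x]] == x and sigma[x] != x for x in range(24))
assert all(tau[tau[tau[x]]] == x for x in range(24))
```

Each pair under sigma tells you which two triangle sides get glued, and each 3-cycle of tau tells you which three sides belong to one ideal triangle, in cyclic order.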
Huh. So why switch views? Does it help with the proof somehow?
It does. These shuffles define a group map from PSL(2,Z), the modular group behind the base modular surface, generated by an order-2 flip and an order-3 rotation, into the group of all shuffles (the symmetric group) on the 6n labels. To close the cusps, they conformally compactify: they smoothly fill the infinite holes, turning the open surface into a closed one without changing the core vibrations much. A key result shows the lowest vibration on this closed version beats a basic bound on the open one.
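The order-2 and order-3 generators mentioned here are the standard ones for PSL(2,Z), and it's easy to check their orders directly with 2x2 integer matrices, remembering that in PSL(2,Z) a matrix and its negative are the same element. This is a standalone verification sketch, not code from the paper.

```python
def matmul(a, b):
    """Product of two 2x2 integer matrices given as nested lists."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def is_identity_in_psl(m):
    """Equality with the identity in PSL(2,Z): the matrix is +I or -I."""
    return m in ([[1, 0], [0, 1]], [[-1, 0], [0, -1]])

S = [[0, -1], [1, 0]]   # the flip z -> -1/z, order 2 in PSL(2,Z)
T = [[1, 1], [0, 1]]    # the translation z -> z + 1
U = matmul(S, T)        # ST, the rotation of order 3 in PSL(2,Z)

assert is_identity_in_psl(matmul(S, S))               # S^2 = -I, so order 2
assert is_identity_in_psl(matmul(U, matmul(U, U)))    # (ST)^3 = -I, so order 3
assert not is_identity_in_psl(U)
```

Because PSL(2,Z) is generated by these two finite-order elements, specifying where each one goes, here to a pairing and to a product of 3-cycles, pins down the whole group map.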
Right, so random builds mimic that modular base closely enough to inherit its strong rigidity.
Yes. They start by proving the open surface is a covering of the modular surface—like multiple stacked sheets of graph paper over a single base sheet, where every spot on the base has exactly 6n spots directly above it, and locally it looks identical. Paths on the base lift straight up to matching paths on the stack. This holds because both constructions tile the hyperbolic plane the same way, using a fixed curved triangle region called the fundamental domain that the modular group repeats to fill everything without gaps or overlaps.
So the permutations dictate how tiles upstairs connect to group actions downstairs?
Exactly. This defines a representation Π of PSL(2,Z): each group element acts by shifting the hyperbolic point and permuting labels via the tile map—like assigning moves in a board game to rotations of pieces. Π gives permutation matrices on 6n labels. The trace—the diagonal sum, counting fixed labels under Π(γ)—lets them compare to the regular representation, where traces are full size only for identity, near zero otherwise, spreading action evenly.
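The trace comparison is simple to state in code: for a permutation matrix, the trace is just the number of fixed labels, full size for the identity and zero for a fixed-point-free element, which is the pattern the regular representation exhibits. This is an illustrative sketch with my own helper names, not the paper's machinery.

```python
def compose(p, q):
    """Composition (p o q)(x) = p[q[x]] for permutations stored as lists."""
    return [p[x] for x in q]

def trace(perm):
    """Trace of the associated permutation matrix: the number of fixed labels."""
    return sum(1 for i, x in enumerate(perm) if x == i)

N = 12
identity = list(range(N))
shift = list(range(1, N)) + [0]   # a single N-cycle: no label stays put

assert trace(identity) == N       # full size, like the regular representation
assert trace(shift) == 0          # zero for this fixed-point-free element
assert trace(compose(shift, identity)) == trace(shift)
```

Strong convergence to the regular representation amounts to showing that, for every fixed non-identity group element, the trace of its random permutation stays close to this near-zero pattern.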
And since expectations match that closely for non-identity γ, how do they handle arbitrary elements in the group, not just the basics?
They consider words built from the generators, like strings of instructions using the order-2 flip and order-3 rotation. For each such word representing a group element γ, they count the labels fixed by the permutation Π(γ). The paper shows the expected count equals a leading term plus corrections in powers of 1 over the size 6n, expanded as a sum whose coefficients measure cycle imbalance. For non-identity γ, the leading term is sharply suppressed.
Okay, so it's like averaging how many spots stay put under a scrambled shuffle defined by the word.
Yes. They prove a precise truncation bound: the difference between the expectation and the truncated sum becomes tiny when the truncation length is chosen around log n. This polynomial expansion matches the regular representation's trace at leading orders, forcing the average trace to hug that limit closely. With explicit errors decaying superpolynomially fast, strong convergence holds uniformly over short words spanning the group, blocking eigenvalues below 1/4 minus a tiny bit. They then turn those average trace approximations into tail bounds, showing that the chance of a large deviation drops sharply. Combining this with the geometric lifting results gives the final theorem: with probability approaching one, the random surface's vibrations skip everything below that near-optimal limit.
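One can see the flavor of these expectation estimates numerically. The Monte Carlo sketch below samples the two random shuffles and estimates the expected number of labels fixed by a short word in them: the identity word fixes all 6n labels, while a non-identity word fixes only a bounded number on average. This is a heuristic simulation under my own naming, not the paper's exact expansion.

```python
import random

def sample_pair(n, rng):
    """One sample of the model's two shuffles on 6n labels:
    sigma a random perfect matching, tau a random product of 3-cycles."""
    N = 6 * n
    labels = list(range(N)); rng.shuffle(labels)
    sigma = [0] * N
    for i in range(0, N, 2):
        a, b = labels[i], labels[i + 1]
        sigma[a], sigma[b] = b, a
    rng.shuffle(labels)
    tau = [0] * N
    for i in range(0, N, 3):
        a, b, c = labels[i], labels[i + 1], labels[i + 2]
        tau[a], tau[b], tau[c] = b, c, a
    return sigma, tau

def expected_fixed(n, word, trials=500, seed=1):
    """Monte Carlo estimate of the expected number of labels fixed by
    the permutation a word spells out; word is a string over 's', 't'."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        sigma, tau = sample_pair(n, rng)
        perm = list(range(6 * n))
        for letter in word:            # apply generators left to right
            g = sigma if letter == 's' else tau
            perm = [g[x] for x in perm]
        total += sum(1 for i, x in enumerate(perm) if x == i)
    return total / trials

assert expected_fixed(10, '') == 60.0   # identity word fixes all 6n = 60 labels
assert expected_fixed(10, 'st') < 5.0   # non-identity word: O(1) on average
```

The paper's contribution is to replace such numerical evidence with a rigorous power-series expansion of these expectations in 1/(6n), with explicit, fast-decaying error terms.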
Okay, so pulling it all together, this work shows random builds reliably hit close to the rigidity ceiling that hand-crafted ones can't touch. The paper suggests this seals the near-optimal gap.
Exactly. It paves the way for Ramanujan-type bounds on these surfaces and stronger quantitative rigidity statements in random models. The evidence points to a meaningful advance in understanding how randomness enforces geometric limits.
That's a solid wrap on the logic. Thanks, Sam—this has clarified why the random path closes the gap so effectively.
My pleasure, Alex. Thanks for listening to ResearchPod.