Binned and Unbinned Transverse Single Spin Asymmetry Extraction, including Background Subtraction and Unfolding
About This Paper
The determination of transverse single-spin asymmetries in experiments involving polarized targets and/or beams may encounter challenges when (1) the magnitude of the polarization varies greatly with time, (2) the polarization magnitude is not the same for each spin state, (3) different integrated luminosities occur for different spin states or different target materials, and/or (4) some kinematic variables require unfolding; these are just a few examples. We present general methods of determining the asymmetry based on both binned analysis and unbinned maximum likelihood optimization, incorporating the unfolding of kinematic variables that are smeared by detector effects, and also including the possibility of background subtraction.
Welcome to another episode of ResearchPod.
This paper, from researchers at New Mexico State University, develops methods to measure a specific effect in particle collisions called transverse single spin asymmetry. Imagine firing particles at a target that's spinning like a top, with the spin pointed sideways to the direction of travel. The way scattered particles spread out around a circle can show a lopsided pattern because of how spin interacts with motion—what physicists call transverse single spin asymmetry, or TSSA for short.
So this asymmetry reveals something about the underlying physics in these collisions? And the core problem is getting a clean measurement when real experiments get messy?
Yes, exactly. These experiments use polarized targets or beams, and the spin is flipped between "up" and "down" to cancel systematic errors. But challenges arise: the spin strength, or polarization, often drops over time or differs between the up and down states; the amount of data collected in each state, called the integrated luminosity, varies; unwanted background events mix in; and detectors blur the measured angles, an effect physicists call smearing. The paper tackles how to extract the true TSSA despite these imbalances.
Right—like trying to spot a subtle pattern in data where the scales tip unevenly and there's noise everywhere.
That's a fair way to picture it. The central claim is that new binned and unbinned methods, plus ways to handle backgrounds and blurring, allow unbiased TSSA measurements under realistic lab conditions. We'll see how they use weights on individual events to balance exposures across spin states and subtract contaminants directly.
Weights on individual events—that sounds like a way to even things out without grouping the data. How exactly does that work in this unbinned approach you mentioned?
In a typical analysis, you'd sort events into bins, like dividing marbles into color-coded cups to count patterns. Here, they skip bins and treat each collision event separately, calculating the probability of observing that event for a given trial asymmetry value, then combining all those probabilities mathematically to find the best overall value. They take the natural log of each probability (logs turn multiplication into addition, which is easier for computers) and adjust the asymmetry until the total is as large as possible. Researchers call this unbinned maximum likelihood estimation.
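As a concrete illustration, here is a minimal Python sketch of such an unbinned fit, assuming a per-event probability proportional to 1 + s·P·A·cos(φ), with s = ±1 the spin state, P the polarization, and A the asymmetry. The functional form, variable names, and toy inputs are illustrative rather than the paper's exact formulas, and since this toy injects no asymmetry the fit should return A near zero.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(A, phi, spin, pol):
    """Negative log-likelihood for an assumed azimuthal TSSA model.

    Per-event PDF, up to an A-independent constant:
        1 + s * P * A * cos(phi),   s = +1 (up) or -1 (down).
    """
    pdf = 1.0 + spin * pol * A * np.cos(phi)
    if np.any(pdf <= 0.0):          # guard against unphysical trial values
        return np.inf
    return -np.sum(np.log(pdf))

# Toy inputs: angles, spin signs, per-event polarizations (no asymmetry injected).
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, size=10_000)
spin = rng.choice([+1, -1], size=phi.size)
pol = np.full(phi.size, 0.6)

result = minimize_scalar(neg_log_likelihood, bounds=(-0.99, 0.99),
                         args=(phi, spin, pol), method="bounded")
print("fitted A =", result.x)       # should come out near zero here
```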
Okay, so each event gets its own probability score, and you tweak the asymmetry until the whole set scores highest. But with unequal luminosities or polarizations, wouldn't some events pull harder than others?
Precisely. To fix that, they assign a weight to each event based on its spin state's luminosity and polarization—essentially down-weighting events from the state with more data or stronger spin, so every state contributes equally. Picture a seesaw with uneven loads on each side; these weights adjust the leverage to balance it perfectly.
Like normalizing votes from districts of different sizes to make the election fair.
Yes. For backgrounds, they estimate the contamination using sideband regions, slices of the data just outside the signal, and give those events negative weights in the likelihood, directly subtracting their pull without a separate correction step. This lets the method extract the true foreground asymmetry cleanly even with the imbalances, and the uncertainty comes from how sharply the likelihood peaks around the best-fit value.
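Continuing the sketch above, here is one plausible way the balancing and subtraction weights might enter the same likelihood. The inverse-luminosity balancing and the sideband scaling shown here are assumptions for illustration, not necessarily the paper's exact prescription.

```python
import numpy as np

def weighted_neg_log_likelihood(A, phi, spin, pol, weights):
    """Weighted negative log-likelihood: balancing weights and negative
    sideband weights both multiply each event's log-probability."""
    pdf = 1.0 + spin * pol * A * np.cos(phi)
    if np.any(pdf <= 0.0):
        return np.inf
    return -np.sum(weights * np.log(pdf))

def make_weights(spin, lumi_up, lumi_down, is_sideband, n_bkg, n_side):
    """One plausible weight assignment (illustrative):
    - balance spin states by weighting each event inversely to its
      state's integrated luminosity;
    - subtract background by giving sideband events a negative weight,
      scaled so the sideband sample totals the estimated background
      yield n_bkg inside the signal region (n_side sideband events)."""
    w = np.where(spin > 0, 1.0 / lumi_up, 1.0 / lumi_down)
    return np.where(is_sideband, -w * (n_bkg / n_side), w)
```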
Huh—so the weights handle both the balance and the cleanup in one go. Does this approximation hold only for small asymmetries, like they assume?
The paper shows it works well when the asymmetries are small compared to 1, using a series expansion to simplify the math, and the uncertainty formulas properly account for the weights, avoiding bias from the varying exposures.
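In symbols, and only as a plausible reconstruction of the expansion being described (the paper's exact form may differ), with per-event probability proportional to 1 + s_i P_i A cos φ_i and event weights w_i:

```latex
% Small-A expansion of the weighted log-likelihood (s_i^2 = 1):
\sum_i w_i \ln\!\left(1 + s_i P_i A \cos\phi_i\right)
  \approx \sum_i w_i \left[\, s_i P_i A \cos\phi_i
  - \tfrac{1}{2}\, P_i^2 A^2 \cos^2\phi_i \,\right] + \mathcal{O}(A^3)

% Setting the derivative with respect to A to zero gives the estimator,
% with a sandwich-style variance that keeps the weights explicit:
\hat{A} \approx \frac{\sum_i w_i\, s_i P_i \cos\phi_i}
                     {\sum_i w_i\, P_i^2 \cos^2\phi_i},
\qquad
\sigma_{\hat{A}}^2 \approx \frac{\sum_i w_i^2\, P_i^2 \cos^2\phi_i}
                                {\left(\sum_i w_i\, P_i^2 \cos^2\phi_i\right)^2}.
```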
So the math approximation keeps things unbiased for typical small asymmetries. How did they check if this all holds up in practice—like with simulated data mimicking real experiments?
They created fake collision events to test everything. Each event gets a random spin direction, up or down, a random scattering angle around the circle, and a polarization strength that might differ between spins or change over time; then they decide whether the detector catches it based on an efficiency function, which is just how well the setup spots events at different angles, like a basketball hoop that's uneven around the rim. To inject the asymmetry effect, they compute an acceptance probability for each event that combines the true asymmetry pattern with the polarization and efficiency, and keep the event only if a uniform random draw falls below that probability, building realistic lopsidedness in from the start. Backgrounds get mixed in separately, with their own yields and angular patterns, which the analysis later estimates from sideband regions.
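A minimal sketch of that generation loop, under assumed functional forms: an injected asymmetry A_TRUE, spin-dependent polarizations, an uneven azimuthal efficiency, and accept-reject sampling. All names and shapes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

A_TRUE = 0.05                 # injected asymmetry
POL = {+1: 0.60, -1: 0.55}    # polarization allowed to differ by spin state

def efficiency(phi):
    """Toy azimuthal detector efficiency (the 'uneven hoop'); illustrative."""
    return 0.8 + 0.1 * np.sin(phi)

def generate(n_thrown):
    """Accept-reject generation of toy events with the asymmetry built in."""
    spin = rng.choice([+1, -1], size=n_thrown)
    phi = rng.uniform(-np.pi, np.pi, size=n_thrown)
    pol = np.where(spin > 0, POL[+1], POL[-1])
    # Acceptance probability: physics modulation times detector efficiency,
    # rescaled to stay below 1; keep events whose uniform draw falls below it.
    accept = (1.0 + spin * pol * A_TRUE * np.cos(phi)) * efficiency(phi)
    keep = rng.uniform(size=n_thrown) < accept / accept.max()
    return phi[keep], spin[keep], pol[keep]

phi, spin, pol = generate(200_000)
```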
Right, so these simulations include all the mess: uneven spins, different data amounts per spin, detector quirks, and contaminants. And the results from running the methods on them?
They ran each test 1000 times on four setups of growing complexity, adding in turn backgrounds with their own asymmetry, polarization and luminosity imbalances, and an efficiency that mimics the signal shape. Both binned and unbinned methods recovered an average asymmetry consistent with the true injected value to within about one expected uncertainty, and the spread across the repeats matched the uncertainty reported by a single run, showing no bias.
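A sketch of the kind of closure check being described, reusing generate, neg_log_likelihood, and A_TRUE from the sketches above; fit_asymmetry is just a small wrapper introduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_asymmetry(phi, spin, pol):
    """Bounded 1-D fit reusing neg_log_likelihood from the earlier sketch."""
    res = minimize_scalar(neg_log_likelihood, bounds=(-0.99, 0.99),
                          args=(phi, spin, pol), method="bounded")
    return res.x

fitted = np.array([fit_asymmetry(*generate(200_000)) for _ in range(1000)])
print("mean fitted A:", fitted.mean(), " (injected:", A_TRUE, ")")
print("spread (RMS) :", fitted.std(ddof=1))
# Closure check: the mean should sit near A_TRUE, and the RMS across
# repeats should match the uncertainty reported by a single fit.
```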
Huh—that confirms the weights really even out the imbalances without skewing things. What about cases where detector efficiency looks just like the physics signal they want to measure?
That's the toughest test: an efficiency with a cosine dependence in azimuth, the same shape as the asymmetry signal itself. Without periodically flipping the spin between up and down, which experiments do anyway to check for false asymmetries, the method fails and pulls out wrong values. With the flips, or when the efficiency doesn't mimic the signal, it works fine.
So spin flipping isn't just good practice, it's essential against sneaky detector effects. Makes the whole approach robust for real labs.
Right, and that covers the imbalances. The last challenge from the paper's list is that some kinematic variables are smeared by the detector and need unfolding.
Smearing, meaning the measured angles blur away from the true ones?
Yes, like a photo slightly out of focus spreading sharp edges. Unfolding corrects that by remapping the blurred data back to the true distributions. The paper adapts this to the unbinned likelihood as well, using tools like OmniFold, where machine-learning classifiers iteratively reweight simulated events until they match the observed, smeared data, yielding truth-level asymmetries.
Truth-level asymmetries from reweighted simulations—that sidesteps the smearing issue cleverly. But how does OmniFold actually do the reweighting without bins, especially with smeared angles?
OmniFold starts by comparing what the detector sees, the blurry measured angles, with simulated blurry data made by passing a model of true events through a detector simulation. It trains a sorter, like a judge deciding which pile an item belongs to, to spot differences between the real blurry data and the simulated blurry version; the sorter's output score becomes a weight that nudges the simulation toward reality. Machine learning calls this sorter a binary classifier. The key trick repeats: those weights get pulled back to adjust the truth-level simulation, then pushed forward again, iterating until the simulated blurry data matches the real data and the truth-level asymmetries can be read off.
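A schematic sketch of that two-step iteration, loosely following the published OmniFold idea; the gradient-boosting classifier stands in for the neural networks used in practice, and the toy Gaussian samples and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def learn_weights(x_from, x_to, w_from, w_to):
    """Train a binary classifier to separate two weighted samples, then turn
    its score p into a likelihood-ratio weight p / (1 - p) for x_from."""
    X = np.concatenate([x_from, x_to]).reshape(-1, 1)
    y = np.concatenate([np.zeros(len(x_from)), np.ones(len(x_to))])
    w = np.concatenate([w_from, w_to])
    clf = GradientBoostingClassifier().fit(X, y, sample_weight=w)
    p = clf.predict_proba(np.asarray(x_from).reshape(-1, 1))[:, 1]
    return np.clip(p / (1.0 - p), 0.0, 50.0)  # clip to tame extreme weights

# Toy setup: paired truth-level and detector-level simulation, plus "data"
# generated from a slightly different truth so there is something to learn.
rng = np.random.default_rng(1)
sim_true = rng.normal(0.0, 1.0, 20_000)
sim_meas = sim_true + rng.normal(0.0, 0.3, 20_000)   # detector smearing
data_true = rng.normal(0.2, 1.0, 20_000)             # "nature" differs from sim
data_meas = data_true + rng.normal(0.0, 0.3, 20_000)

nu = np.ones(len(sim_true))   # truth-level weights, updated each iteration
ones = np.ones(len(data_meas))
for _ in range(4):            # a few iterations usually suffice in toy studies
    # Step 1: at detector level, reweight the simulation toward the data.
    omega = nu * learn_weights(sim_meas, data_meas, w_from=nu, w_to=ones)
    # Step 2: pull those weights back to truth level; training on the same
    # truth sample with old vs. new weights forces the update to be a
    # function of the truth-level variables only.
    nu = nu * learn_weights(sim_true, sim_true, w_from=nu, w_to=omega)
    nu *= len(nu) / nu.sum()  # keep the average weight at 1 for stability
# 'nu' now reweights the truth-level simulation; asymmetries computed with
# these weights estimate the unsmeared, truth-level result.
```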
So it's like iteratively editing a blurry photo's original until the edited blur matches what the camera captured. They tested this on smeared fake data mimicking experiments?
Yes, with events smeared by adding random Gaussian wiggles to the true angles (think slight random nudges spreading sharp positions, like ink bleeding on paper). Across 100 runs that resampled the events or retrained the network, the extracted asymmetries landed near the injected value within uncertainties, and the reweighted histograms overlaid the target closely, confirming no bias from the smearing or the imbalances.
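A sketch of what such a validation loop could look like, reusing phi, spin, and pol from the toy-generation sketch; unfold_and_fit is a hypothetical wrapper (OmniFold-style reweighting followed by the likelihood fit from the earlier sketches), and the smearing width is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(7)
SMEAR_SIGMA = 0.2  # assumed width of the Gaussian angular smearing (radians)

# Reuse phi, spin, pol from the toy-generation sketch above.
phi_meas = phi + rng.normal(0.0, SMEAR_SIGMA, size=phi.size)

results = []
for _ in range(100):
    # Bootstrap resample the events (retraining the classifier each time).
    idx = rng.integers(0, phi_meas.size, size=phi_meas.size)
    # unfold_and_fit: hypothetical wrapper combining the OmniFold-style
    # reweighting and the weighted likelihood fit sketched earlier.
    results.append(unfold_and_fit(phi_meas[idx], spin[idx], pol[idx]))

print("mean A:", np.mean(results), " spread:", np.std(results, ddof=1))
```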
Huh, so the iterative sorter fixes both blur and imbalances in one process. That seems like a solid way to get clean truth from messy measurements.
The methods enable clean TSSA extractions despite real-world drifts in polarization, uneven data volumes, contaminants, and angular smearing, without needing a perfectly balanced experiment upfront. That paves the way for reliable measurements in high-statistics polarized collisions, probing the links between spin and motion in particle physics more routinely.
Thanks, Sam, for breaking down the logic so clearly.
My pleasure, Alex. This work advances precise asymmetry studies step by step. Thanks for listening to ResearchPod.