Skewed Dual Normal Distribution Model: Predicting 1D Touch Pointing Success Rate for Targets Near Screen Edges
About This Paper
Typical success-rate prediction models for tapping exclude targets near screen edges; however, design constraints often force such placements. Additionally, in scrollable UIs any element can move close to an edge. In this work, we model how the target-to-edge distance affects 1D touch pointing accuracy. We propose the Skewed Dual Normal Distribution Model, which assumes the tap coordinate distribution is skewed by a nearby edge. The results of two smartphone experiments showed that, as targets approached the edge, the distribution's peak shifted toward the edge and its tail extended away. In contrast to prior reports, the success rate improved when the target touched the edge, suggesting a strategy of "tapping the target together with the edge." By accounting for skew, our model predicts success rates across a wide range of conditions, including edge-adjacent targets, thus extending coverage to the whole screen and informing UI design support tools.
Welcome to another episode of ResearchPod. Sam, what are we diving into today?
This paper, by Nobuhito Kasahara, Shota Yamanaka, and Homei Miyashita, examines how the distance from a button on your phone screen to the edge affects whether people tap it successfully. The key puzzle is that older models for predicting tap accuracy ignore spots near the screen's borders, even though scrolling apps often push buttons there.
So this work is basically tackling why edges have been treated like no-go zones for accurate tapping?
Yes, exactly. In everyday apps, like lists you scroll through, any button can slide close to—or even touch—the top, bottom, left, or right edge. Prior models assume taps scatter evenly around the target center, like raindrops falling symmetrically on a road, so they fail to predict what happens near those borders, leaving designers without reliable guidelines for using the full screen.
That makes sense for scrollable UIs... but if edges mess up accuracy, why risk placing things there at all?
Designers avoid it when possible because studies show taps get less accurate as targets near an edge—your finger can't overshoot off-screen, so the scatter pattern changes. But in dense layouts or during scrolls, it's unavoidable, and without a good prediction model, tools can't suggest safe button sizes or positions across the whole display. This paper proposes a way to model that edge effect precisely.
Right, so the core problem is no math to forecast tap success when buttons hug the edge?
Precisely. They ran experiments where people tapped one-dimensional targets—strips along horizontal or vertical lines—getting closer to edges step by step. Contrary to expectations, success rates actually rose when targets touched the edge, hinting that users have a smart strategy, like using the edge itself to guide the tap.
Huh... so edges aren't always a failure zone. That shifts how we think about screen real estate.
They start by picturing taps as normally spread around the target center, like rain falling evenly on a flat road. But near an edge, your finger—being thick—can't land off-screen, so those would-be misses get pushed inward, piling up toward the edge and stretching the pattern lopsided, like rain against a curb forming a skewed puddle. The closer the target gets to the edge, the more it skews that way.
Okay, so the edge forces a bias toward itself... that explains the pile-up. But how does that predict if a tap hits or misses?
To figure the chance of success, they add up the probability across the target's area using the cumulative version of that skewed shape—essentially measuring the puddle inside the target bounds. They fit the model's dials—position shift, spread, and lopsidedness—directly from how far the target center sits from the edge.
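The calculation Sam just described—integrating a skewed tap distribution over the target's extent—can be sketched in code. This is a minimal illustration using the standard skew-normal density with simple numerical integration; the function names, parameterization, and step count are assumptions for illustration, not the paper's implementation:

```python
import math

def skew_normal_pdf(x, loc, scale, shape):
    """Skew-normal density: (2/scale) * phi(z) * Phi(shape*z), z = (x-loc)/scale.
    phi/Phi are the standard normal density and cumulative distribution."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(shape * z / math.sqrt(2)))
    return 2 / scale * phi * Phi

def success_rate(target_left, target_right, loc, scale, shape, steps=2000):
    """Probability that a tap lands inside [target_left, target_right],
    via trapezoidal integration of the skew-normal density."""
    h = (target_right - target_left) / steps
    total = 0.5 * (skew_normal_pdf(target_left, loc, scale, shape)
                   + skew_normal_pdf(target_right, loc, scale, shape))
    for i in range(1, steps):
        total += skew_normal_pdf(target_left + i * h, loc, scale, shape)
    return total * h
```

With shape set to zero this reduces to an ordinary normal distribution—the "even-spread" case—while a large shape value produces the lopsided pile-up toward an edge.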
Right, and users seem to lean into it... like deliberately hugging the edge?
Yes—the data shows the tap average shifts toward the edge in a curved pattern, matching a strategy where people tap the target right along the border to avoid errors on the open side. This lets the model forecast success about twice as well as older even-spread approaches for edge spots. The upshot is designers can now safely pack UIs across the full screen.
That's a meaningful extension... no more dead zones.
If edges can actually boost success in some cases, how do they capture that in their model?
They adjust three main features of the tap pattern using simple formulas tied straight to the distance from target center to edge. First, the average tap spot shifts in a curved path: for tiny targets right at the edge, it nudges away to avoid off-screen misses; but as the target widens while still hugging the edge, people aim to tap both target and edge together, pulling the average back toward the border—like using a wall to steady your shot in basketball. Far from the edge, it centers again. The spread narrows near edges since off-screen taps vanish, squeezing the pattern tighter; and the lopsided tilt strengthens closer in.
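Two of those distance-dependent adjustments—the narrowing spread and the fading skew—can be illustrated with toy formulas. To be clear, the functional forms and constants below are hypothetical stand-ins chosen only to show the qualitative trends described in the episode (tighter spread and stronger lopsidedness near the edge, fading by roughly six millimeters out); they are not the paper's fitted equations:

```python
# All shapes and constants here are illustrative assumptions,
# not the paper's fitted formulas.
EDGE_EFFECT_RANGE_MM = 6.0  # distance beyond which edge effects roughly fade

def tap_spread(distance_mm, base_sigma_mm=1.5):
    """Spread narrows near the edge (off-screen taps vanish) and
    returns to its baseline once the target is far enough away."""
    shrink = min(distance_mm / EDGE_EFFECT_RANGE_MM, 1.0)
    return base_sigma_mm * (0.6 + 0.4 * shrink)

def tap_skewness(distance_mm, max_shape=4.0):
    """Lopsidedness is strongest at the edge and fades linearly to
    zero by the edge-effect range, recovering the even-spread case."""
    return max_shape * max(1.0 - distance_mm / EDGE_EFFECT_RANGE_MM, 0.0)
```

The linear fade is the simplest shape that matches the described behavior; the paper's own constants pinpoint where the edge effect vanishes.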
Okay, the 'tap with the edge' strategy explains that mean flip... but they tested this in actual experiments?
Yes, two setups with 15 students each tapping strips near the left or bottom edge on a Pixel phone, using their index fingers and starting each trial from off-screen. Targets varied in gap to edge and size, with hundreds of tries per setup to measure real tap spots and hits. They cleaned outliers like extreme misses, then checked whether patterns matched normal or skewed shapes—finding more lopsidedness up close, as the edge forces it.
Right, so the data backed the squeezing and shifting... how much better did their full model predict hits?
The formulas fit the shifts, squeezes, and tilts tightly, letting it predict success rates with a clear improvement over the even-spread prior—especially near edges where old ones underestimated hits from that strategy. Cross-checks on unseen setups held up well. This grounds UI tools to use every screen inch reliably.
Okay, so the skewed model outperforms the even one near edges... but how does it stack up against machine learning approaches that could just memorize patterns?
The paper compares it directly to several machine learning options, like neural nets and random forests, using fit quality and prediction error on held-out data. Tuned versions edged it slightly on raw numbers—a marginal gap of about two percent in fit—but those need thousands of internal settings and heavy tweaking. Their model uses just nine simple constants from basic formulas, making it far easier to understand and tweak.
Right, so interpretability wins for practical use... like telling designers exactly when skew stops mattering?
Exactly—for the lopsidedness, one constant combo pinpoints the distance where edge effects fade, acting like a clear safety threshold for layouts. Spread and shift formulas reveal how finger thickness squeezes the pattern near borders. This lets designers grasp why taps behave a certain way, without black-box guessing.
And the actual tap patterns in data—did they match this skewed shape closely?
Yes, plots of real taps versus model predictions show tight alignment, especially when targets touch the edge—the extreme pile-up toward the border is captured well. Mild skew holds for nearby spots, and the pattern reverts to an even spread farther out. Questionnaires confirm it—three participants said outright they tapped right at the edge-target line to dodge open-side slips, with data showing clusters there.
That's a solid close... interpretable math that matches real behavior. Designers get reliable tools now.
One thing that stood out was the bottom-edge setup... did gravity change how well the pieces fit?
In that test, people tapped vertical strips near the phone's bottom border, starting from a knee position to mimic natural holds. The lopsided tilt—stronger near the edge but flipping direction since the border was below—matched tightly, as did the squeezed spread. The position shift fit less cleanly than in the side-edge test, likely because gravity pulled fingers differently for some, making aims unstable. But overall success predictions held up.
Right, and against machine learning there... did the simple formulas still predict unseen cases better?
With default setups, their approach beat most on hold-out predictions, and even against tuned ones, it showed higher reliability for new bottom-edge scenarios. Far from the edge, around six millimeters out, the lopsidedness fades to zero, smoothly reverting to the even-spread pattern.
Okay, so no black box for designers... what did users say about their aiming?
Questionnaires echoed the first test—three admitted deliberately tapping right at the edge-target line to steady against slips. Six more said they aimed high, above the target, likely because the touchscreen registers the finger's contact point lower than where the eye perceives the fingertip to land.
That discrepancy explains some shifts... ties the strategy to real motor habits.
The paper suggests this math now lets tools forecast tap odds anywhere on screen from layout alone, turning edge zones from risks to usable space without guesswork. It focused on one-dimensional tasks for simplicity, so extending to full two-dimensional buttons needs further checks. Tests used right-handed index finger from off-screen, so thumb use, left hands, top or right edges, and corners require separate validation. Bezel shapes and cases might alter untappable areas or anchors, calling for re-estimation across devices.
Fair points on the limits... so for practical UI tools, what's the real payoff?
The model supplies guidelines like keeping targets over six millimeters from edges for even patterns, or embracing edge placement where success actually rises via that hugging tactic. Tools like Tappy or Tap Analyzer could integrate it to visualize success drops as buttons scroll near borders, auto-suggesting sizes or spots for over ninety-five percent reliability anywhere. Unlike tuned machine learning, its few interpretable constants let designers reason directly, no black box.
Well said, Sam. This gives designers a reliable way to use the full screen, grounded in how people actually tap. Thanks for joining me.