Introduction
As of 2025, Interaction to Next Paint (INP) has officially replaced First Input Delay (FID) as the key Core Web Vital for measuring web responsiveness. Where FID captured only the delay before a page's first interaction was handled, INP evaluates all user inputs, reflecting the real experience of responsiveness throughout a session.
For developers and marketers, this shift is monumental. Sites can no longer merely look fast; they must feel fast. INP measures the time from a user interaction (such as a click, tap, or key press) to the next frame the browser paints in response. A slow INP leaves users with an impression of lag, hesitation, and friction.
This case study explores how a real website—an e-commerce business with dynamic product listings and heavy JavaScript usage—transformed its performance by reducing INP from a sluggish 480 milliseconds to a comfortably “Good” 160 milliseconds. Along the way, we’ll examine every optimization step, the reasoning behind it, and how it impacted real-world SEO and conversions.
The Website Before Optimization
Before optimization, the client’s site performed reasonably well in terms of visual load times but struggled with interactivity. Product filters lagged, add-to-cart clicks felt delayed, and navigation buttons sometimes took nearly half a second to respond.
Initial Metrics
When tested with PageSpeed Insights and the Chrome UX Report, the site produced these baseline scores:
- INP: 480 ms (Needs Improvement, just under the 500 ms “Poor” threshold)
- FID: 140 ms (Needs Improvement)
- LCP: 3.1 s
- CLS: 0.08
- Total Blocking Time (TBT): 730 ms
The page loaded fully within four seconds, but users felt it was slower because interactions took too long to trigger a response.
User Experience Issues
Session recordings and analytics told a clear story:
- Bounce rate hovered around 58 %.
- Mobile conversions lagged behind desktop by 40 %.
- Users frequently abandoned filters mid-session.
Key Bottlenecks Identified
- Excessive JavaScript execution from tracking libraries and carousel components.
- Long main-thread tasks that prevented timely input handling.
- Unoptimized third-party scripts running early in the critical path.
- Inefficient event listeners reacting to scroll and resize events.
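To illustrate that last point, a listener that does expensive work on every scroll or resize event keeps the main thread busy at exactly the moments users expect a response. Below is a minimal sketch of a common lighter-weight alternative (a passive listener plus per-frame throttling); the selectors and handler names are illustrative, not the client's actual code.

```javascript
// Illustrative sketch: throttle scroll work to one update per frame and mark
// the listener passive so the browser never waits on it to start scrolling.
let scrollScheduled = false;

function updateStickyHeader() {
  const header = document.querySelector('.site-header'); // hypothetical selector
  if (header) header.classList.toggle('is-stuck', window.scrollY > 100);
}

function onScroll() {
  if (scrollScheduled) return;            // coalesce bursts of scroll events
  scrollScheduled = true;
  requestAnimationFrame(() => {
    updateStickyHeader();                 // do the visual work once per frame
    scrollScheduled = false;
  });
}

window.addEventListener('scroll', onScroll, { passive: true });
```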
Before diving into fixes, the development team needed to diagnose exactly how and where these delays occurred.
Diagnosing the Problem
Tools Used
The optimization process began with a detailed performance audit using:
- Google Lighthouse for lab data and recommendations.
- PageSpeed Insights for field data reflecting real users.
- WebPageTest for waterfall analysis and script timing.
- Chrome DevTools Performance Panel to identify long tasks.
Findings
The largest delays appeared during heavy JavaScript parsing and execution. Several third-party scripts—including a chat widget, two analytics tags, and a legacy carousel—were blocking the main thread.
The audit also revealed multiple event handlers triggering unnecessary reflows and repaints during user interactions.
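For readers who want to reproduce this kind of audit, long tasks can also be surfaced programmatically with the standard PerformanceObserver API. A minimal sketch follows; the console logging is illustrative, and a real setup would forward these entries to a monitoring endpoint.

```javascript
// Observe tasks that block the main thread for more than 50 ms.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each entry reports how long the main thread was blocked and a coarse
    // attribution of the responsible frame or script container.
    console.log(`Long task: ${Math.round(entry.duration)} ms`, entry.attribution);
  }
});
longTaskObserver.observe({ type: 'longtask', buffered: true });
```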
Goal Setting
The target was clear:
- INP goal: ≤ 200 ms
- TBT goal: ≤ 200 ms
- Maintain LCP < 2.5 s and CLS < 0.1
Achieving this required focusing on interactivity rather than just loading speed.
The Optimization Process
The project unfolded over four weeks in five major phases. Each addressed a different source of input delay.
Step 1 — Reducing Main-Thread Work
Profiling revealed several functions exceeding 300 ms of blocking time. The team restructured them into smaller asynchronous tasks and replaced synchronous loops with non-blocking patterns.
They also split the JavaScript bundle into smaller chunks loaded only when specific sections were visible. Non-essential scripts were deferred until idle browser periods.
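The two patterns described above can be sketched roughly as follows: yielding control back to the browser between chunks of work, and loading a section's bundle only when it scrolls into view. The function and module names below are hypothetical stand-ins, not the client's actual code.

```javascript
// Yield to the event loop between chunks so pending input can be handled.
// (In newer Chromium browsers, scheduler.yield() is a more direct alternative.)
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Hypothetical per-item work; in the real site this built a product card.
function renderProductCard(product) {
  const el = document.createElement('div');
  el.textContent = product.name;
  document.body.appendChild(el);
}

async function renderProductList(products) {
  for (const product of products) {
    renderProductCard(product);
    await yieldToMain(); // keep each task short instead of one 300 ms+ block
  }
}

// Load the carousel bundle only when its section becomes visible.
const carouselSection = document.querySelector('#carousel'); // hypothetical id
if (carouselSection) {
  new IntersectionObserver((entries, io) => {
    if (entries.some((e) => e.isIntersecting)) {
      io.disconnect();
      import('./carousel.js').then((m) => m.initCarousel(carouselSection));
    }
  }).observe(carouselSection);
}
```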
Outcome: Total Blocking Time dropped from 730 ms to 310 ms, and measured INP fell to around 320 ms.
Step 2 — Streamlining JavaScript Execution
Next, the developers reviewed every imported library. Outdated UI components and unused dependencies were removed, reducing the total JavaScript payload from 1.2 MB to 680 KB.
Modern build tools minimized parsing overhead and eliminated redundant code paths. The checkout script was further optimized by deferring discount calculations until user confirmation rather than on every keystroke.
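The keystroke change amounts to a standard debounce: rather than recalculating on every input event, the work runs once typing pauses or when the user explicitly confirms. Here is a minimal sketch with hypothetical element ids and a stand-in for the real discount logic.

```javascript
// Debounce: run an expensive function only after input has gone quiet.
function debounce(fn, delayMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Hypothetical stand-in for the checkout's discount calculation.
function recalculateDiscount(code) {
  console.log('Recalculating discount for', code);
}

const codeInput = document.querySelector('#discount-code');    // hypothetical id
const applyButton = document.querySelector('#apply-discount'); // hypothetical id

if (codeInput && applyButton) {
  // Before: the calculation ran on every keystroke.
  // After: it runs when typing pauses, and always on explicit confirmation.
  codeInput.addEventListener('input', debounce(() => recalculateDiscount(codeInput.value), 300));
  applyButton.addEventListener('click', () => recalculateDiscount(codeInput.value));
}
```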
Outcome: The main thread freed up faster, trimming another 80 ms off INP.
Step 3 — Deferring Non-Critical Tasks
Many third-party scripts were executing during initial page load—before any user interaction occurred. The team categorized scripts as critical (required for interactivity) or non-critical (marketing, tracking, widgets).
Non-critical scripts were delayed until after the first contentful paint or moved to run during idle time. Lazy-loading analytics and deferring chat widgets alone saved nearly 100 ms of input delay.
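One way to implement that kind of deferral is to inject non-critical scripts during idle time, with a timeout fallback for browsers that lack requestIdleCallback. A sketch with an illustrative widget URL:

```javascript
// Inject a non-critical third-party script only when the browser is idle.
function loadWhenIdle(src) {
  const inject = () => {
    const script = document.createElement('script');
    script.src = src;
    script.async = true;
    document.head.appendChild(script);
  };
  if ('requestIdleCallback' in window) {
    requestIdleCallback(inject, { timeout: 5000 }); // run within 5 s even if never idle
  } else {
    setTimeout(inject, 2000); // simple fallback for browsers without the API
  }
}

// Illustrative URL for a deferred chat widget.
loadWhenIdle('https://example.com/chat-widget.js');
```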
Outcome: INP improved to approximately 230 ms, moving the site from the middle of “Needs Improvement” to the edge of “Good.”
Step 4 — Improving Rendering and Style Calculations
Rendering performance was addressed next. Repaints triggered by dynamic filters caused visible lag when users selected product options. The team stabilized the layout by assigning fixed heights and widths to image containers and pre-allocating space for dynamic content.
CSS rules were refactored to avoid frequent recalculations, and GPU-accelerated animations replaced CPU-heavy transitions.
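The reflow problem in interaction handlers usually comes from interleaving DOM reads and writes; a common remedy is to read first and batch the writes into a single animation frame. A sketch with hypothetical class names:

```javascript
// Avoid layout thrashing: do all reads first, then batch the writes in one frame.
function applyFilterState(cards) {
  // Read phase: measure before mutating anything.
  const heights = cards.map((card) => card.getBoundingClientRect().height);

  // Write phase: group DOM mutations into a single animation frame.
  requestAnimationFrame(() => {
    cards.forEach((card, i) => {
      card.style.minHeight = `${heights[i]}px`; // reserve space so results don't jump
      card.classList.add('is-filtered');
    });
  });
}

// Hypothetical usage on a product grid after a filter is selected.
applyFilterState(Array.from(document.querySelectorAll('.product-card')));
```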
Outcome: Visual feedback became nearly instantaneous; INP dipped below 200 ms on most interactions.
Step 5 — Managing Third-Party Scripts
Third-party scripts accounted for nearly 40 % of main-thread blocking time. After a cost-benefit review, non-essential scripts were removed or loaded asynchronously. Only one consolidated analytics tag remained.
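A pattern that pairs well with this kind of cleanup is loading the one remaining tag asynchronously and only after the user first interacts with the page. The URL below is illustrative, not the vendor the client actually used.

```javascript
// Inject the consolidated analytics tag asynchronously after the first interaction.
let analyticsLoaded = false;

function loadAnalytics() {
  if (analyticsLoaded) return;
  analyticsLoaded = true;
  const script = document.createElement('script');
  script.src = 'https://example.com/analytics.js'; // illustrative URL
  script.async = true;
  document.head.appendChild(script);
}

['pointerdown', 'keydown', 'touchstart'].forEach((type) =>
  window.addEventListener(type, loadAnalytics, { once: true, passive: true })
);
```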
Outcome: Stable, predictable interactivity and a final INP of 160 ms—classified as “Good” under Google’s Core Web Vitals thresholds.
The Results
Performance Comparison
| Metric | Before | After | Improvement |
|---|---|---|---|
| INP | 480 ms (Needs Improvement) | 160 ms (Good) | 66 % faster |
| TBT | 730 ms | 180 ms | 75 % reduction |
| LCP | 3.1 s | 2.2 s | 29 % faster |
| CLS | 0.08 | 0.05 | 37 % more stable |
User Experience Gains
Post-optimization analytics revealed substantial improvements:
- Bounce rate dropped from 58 % to 40 %.
- Mobile conversions rose by 27 %.
- Average session duration increased by 22 %.
- Customer support tickets complaining about a “slow site” fell by 50 %.
SEO and Core Web Vitals Impact
Within one month of deployment, Google Search Console reported a Core Web Vitals pass rate increase from 71 % to 96 %. Organic impressions grew steadily, suggesting that improved INP positively influenced rankings.
Lessons Learned
- Performance Is Cumulative. Improving INP required addressing multiple layers: JavaScript, rendering, and third-party scripts. No single tweak produced a miracle; incremental gains added up to major improvements.
- Real User Data Matters. Lab tests are useful, but field data from the Chrome UX Report and real-user monitoring revealed issues that synthetic tests missed. Always validate lab optimizations with live metrics.
- Small Tasks Make Big Differences. Breaking large processes into smaller chunks of work gave the browser room to breathe, turning sluggish responses into near-instant interactions.
- Collaboration Is Essential. Designers, developers, and marketers had to align priorities. Removing a marketing tag can feel risky, but when backed by data showing faster interactions and higher conversions, it becomes an easy decision.
- Performance Is a Continuous Effort. Websites evolve, and new features often introduce fresh bottlenecks. Automated performance monitoring keeps INP within target even after future releases.
How You Can Replicate These Results
- Start with an Audit. Use Lighthouse, PageSpeed Insights, and DevTools to measure INP, TBT, and long tasks. Document your baseline before making any changes.
- Identify High-Impact Fixes First. Target main-thread-blocking scripts and long JavaScript tasks before tackling smaller issues.
- Separate Critical and Non-Critical Work. Prioritize functionality that affects first user interactions. Defer analytics, ads, and widgets until later.
- Test on Real Devices. A site that feels fast on desktop might lag on mid-range mobile hardware. Use field data to confirm improvements across all environments.
- Adopt a Performance Culture. Incorporate performance checks into your CI/CD pipeline and make Core Web Vitals a shared KPI.
- Use the Right Tools Continuously. Tools like SpeedCurve, New Relic, and RUM analytics can automatically monitor INP and alert your team when regressions occur; see the monitoring sketch after this list.
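As one example of that last point, the open-source web-vitals library exposes an onINP callback that can forward field measurements to any RUM backend. The sketch below assumes a hypothetical /rum/inp collection endpoint.

```javascript
import { onINP } from 'web-vitals';

// Report the page's INP value to a RUM endpoint so regressions trigger alerts.
onINP((metric) => {
  const body = JSON.stringify({
    name: metric.name,      // "INP"
    value: metric.value,    // latency in milliseconds
    rating: metric.rating,  // "good" | "needs-improvement" | "poor"
    page: location.pathname,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/rum/inp', body))) {
    fetch('/rum/inp', { method: 'POST', body, keepalive: true });
  }
});
```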
Conclusion
This case study demonstrates that INP improvement is achievable for any site willing to prioritize responsiveness. By reducing main-thread work, streamlining JavaScript, deferring unnecessary tasks, optimizing rendering, and managing third-party scripts, this website transformed both its user experience and Core Web Vitals performance.
From an INP of 480 milliseconds to a lightning-fast 160 milliseconds, the journey underscores a powerful truth: responsiveness isn’t just a technical metric—it’s a business advantage.
As Google continues to refine ranking signals around Core Web Vitals in 2025, INP will play a defining role. Make it a routine part of your performance strategy, and your site will not only meet Google’s standards but exceed user expectations.