Just as we get used to Google’s revitalized and repackaged key metrics, Core Web Vitals, the tech giant is considering shaking up the pack again.
Google is reportedly developing a brand-new responsiveness metric, one that could spell the end for First Input Delay.
Recently, the HTTP Archive’s Web Almanac published a chapter on content management systems. Its author, a back-end manager and head of web performance at Wix, noted that every CMS platform registers a First Input Delay score, one of the Core Web Vitals.
Yet the writer also mentioned that Google is currently working on a new metric to replace First Input Delay.
The reason is that First Input Delay has lost much of its meaning as a measurement.
First Input Delay
Google’s Core Web Vitals provide a clear snapshot of the digital health of a webpage. They consist of Largest Contentful Paint, Cumulative Layout Shift, and First Input Delay.
Specifically, First Input Delay measures the speed at which a browser can respond to user interaction with a website. For example, how long it takes for a page to respond when a user clicks a button.
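In the browser, FID is derived from the Event Timing API: the delay is the gap between when the first input occurred and when the browser could begin running its event handler. A minimal sketch of that arithmetic, using a plain object in place of a real PerformanceEventTiming entry (the field names mirror the real API):

```javascript
// First Input Delay = time from user input (startTime) to the moment
// the browser could begin running event handlers (processingStart).
// `entry` is a stand-in for a real PerformanceEventTiming entry.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Example: input occurred at t=100ms, handler started at t=103ms.
const firstInput = { name: "pointerdown", startTime: 100, processingStart: 103 };
console.log(firstInputDelay(firstInput)); // 3 (ms)
```

Note that FID captures only the delay before the handler starts, not the time the handler itself takes to run; this is central to the criticism discussed below.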
However, the issue with First Input Delay is that all of the major content management systems (such as WordPress, Wix, and Drupal) easily surpass Google’s benchmark for the metric, all registering very fast FID scores.
It has become a competition in which everybody wins, and when everyone shares the gold medal, nobody gets any credit for it.
“FID is very good for most CMSs on desktop, with all platforms scoring a perfect 100%,” said the writer. “Most CMSs also deliver a good mobile FID of over 90%, except Bitrix and Joomla with only 83% and 85% of origins having a good FID.”
If 99% of content management systems are awarded top scores, what separates them? What use is a metric that cannot distinguish between them?
After all, the essence of Core Web Vitals is, in part, to improve aspects of the digital user experience.
“The fact that almost all platforms manage to deliver a good FID,” the writer added, “has recently raised questions about the strictness of this metric. The Chrome team recently published an article, which detailed the thoughts towards having a better responsiveness metric in the future.”
So what is this potential new metric?
A recent article published by Google points towards an experimental responsiveness metric, offering a glimpse of the company’s thinking on a successor to First Input Delay.
Understanding this experimental metric can help website owners and managers better prepare for what may come in the future and what may become part of the Core Web Vitals group.
First response or full event?
Key to this potential metric is that it wouldn’t measure only a single, isolated event. Instead, it would measure the groups of individual events that together make up a user interaction, and so would gather more complex and nuanced data and produce more meaningful scores.
One article on Google’s Web.dev website stated the goals of this new metric: “Consider the responsiveness of all user inputs (not just the first one). Capture each event’s full duration (not just the delay). Group events together that occur as part of the same logical user interaction and define that interaction’s latency as the max duration of all its events.”
And finally: “Create an aggregate score for all interactions that occur on a page, throughout its full lifecycle.”
Ultimately, its aim is to have a more meaningful metric that can capture the quality of the digital user experience.
“We want to design a metric that better captures the end-to-end latency of individual events and offers a more holistic picture of the overall responsiveness of a page throughout its lifetime,” the article went on to say.
“With this new metric we plan to expand that to capture the full event duration, from initial user input until the next frame is painted after all the event handlers have run.
We also plan to measure interactions rather than individual events. Interactions are groups of events that are dispatched as part of the same, logical user gesture (for example: pointerdown, click, pointerup).”
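The grouping of events into logical gestures is done by the browser itself, but the idea can be sketched with sample data. In this illustrative snippet, each event carries a hypothetical `gestureId` field (not part of any real API) that tags which gesture it belongs to:

```javascript
// Sketch: group events that were dispatched as part of the same
// logical user gesture. The gestureId field is a made-up stand-in
// for the grouping the browser performs internally.
function groupByGesture(events) {
  const interactions = new Map();
  for (const e of events) {
    if (!interactions.has(e.gestureId)) interactions.set(e.gestureId, []);
    interactions.get(e.gestureId).push(e);
  }
  return [...interactions.values()];
}

// A tap (three events) followed by a key press (one event):
const sampleEvents = [
  { type: "pointerdown", gestureId: 1, duration: 8 },
  { type: "pointerup",   gestureId: 1, duration: 5 },
  { type: "click",       gestureId: 1, duration: 20 },
  { type: "keydown",     gestureId: 2, duration: 12 },
];
console.log(groupByGesture(sampleEvents).length); // 2 interactions
```

Here the pointerdown, pointerup, and click events collapse into a single interaction, which is exactly the behaviour the quoted article describes.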
Expanding on the point: “The event duration is meant to be the time from the event hardware timestamp to the time when the next paint is performed after the event is handled.
“But if the event doesn’t cause any update, the duration will be the time from event hardware timestamp to the time when we are sure it will not cause any update.”
There are two possible pathways from here when it comes to better measuring interaction latency: maximum event duration or total event duration.
An interaction typically consists of multiple events of varying durations. The maximum event duration measurement takes the largest of those durations as the interaction’s latency.
Total event duration, by contrast, is the sum of all the event durations in the interaction.
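The difference between the two candidates can be shown with a small sketch, using illustrative millisecond values for the events of a single tap. (A real implementation would also need to avoid double-counting overlapping events; this sketch treats the durations as disjoint.)

```javascript
// Two candidate latency measurements for one interaction, given the
// full durations of its constituent events (illustrative values).
const maxEventDuration = (durations) => Math.max(...durations);
const totalEventDuration = (durations) => durations.reduce((a, b) => a + b, 0);

// A tap that dispatched pointerdown (8ms), pointerup (5ms), click (20ms):
const tapDurations = [8, 5, 20];
console.log(maxEventDuration(tapDurations));   // 20
console.log(totalEventDuration(tapDurations)); // 33
```

Under the maximum approach the slow click handler alone sets the interaction’s latency; under the total approach every event contributes to the score.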
Time will tell whether First Input Delay is set for the sidelines, to be replaced by a newer, more meaningful metric. Watch this space.