Many marketers have bristled at the MRC standard of viewability, which requires that at least 50 percent of an ad's pixels be in view for at least one second (for video, 50 percent of the player must be in view for at least two seconds).
Increasingly, brands want to know how long their ads were viewed. Not everyone agrees how or even if time-in-view should be a guiding metric, much less what the threshold should be.
Time-based metrics aren’t a new concept. The industry knows that served ads that aren’t viewed are a massive waste of budget — as much as $7.4 billion on display ads alone in 2016, according to Forrester. Media buyers have shown interest in raising the basic requirements. In 2017, the world’s biggest media buyer, WPP’s GroupM, heightened its standards, stipulating that 100 percent of pixels be in view for at least a full second.
The industry has responded on several fronts.
In September, IAS included time-in-view metrics in its media quality report and that same week, Google announced a way for advertisers to set custom viewability criteria for reporting in its enterprise platform Display & Video 360. Publishers such as the Financial Times and the Guardian are offering time-based options, and ad tech players such as Parsec offer time-based buying.
Still, there remains debate among marketers over viewability and time-in-view metrics and their value. One way marketers can use time-based metrics to optimize their campaigns is by setting time-in-view as a benchmark, or early conversion, according to Chris Mechanic, CEO and co-founder of digital agency Webmechanix.
“We like to use and optimize against users that watched at least half of the video so that we can identify the most interested users,” Mechanic said.
This story first appeared on Marketing Land.