Arbitron’s new Portable People Meter (PPM) has the potential to significantly advance how radio program directors use Arbitron data to analyze audience response to programming and promotion. A sliver of that potential is being revealed as Arbitron rolls PPM out for commercial use. It also looks like it could take a few years for researchers and programmers to fully understand what all these new data mean for public radio, especially since PPM is not rolling out all at once.
One of the first PPM findings is that most stations have more Cume audience than reported by the diaries. Arbitron’s VP for PPM Sales, John Snyder, gave an excellent presentation on this topic at Arbitron’s recent consultant fly-in.
Some of the additional Cume audience comes from actual listeners not previously captured by the diaries. But Snyder also demonstrated that some of the additional Cume consists of people who are merely temporarily “exposed” to a station. For some stations, a large share of this “exposed Cume” lies outside the target audience. That makes sense if you think about all of the radio stations you are exposed to when shopping, getting a haircut, and the like. In this regard, PPM is adding “noise” to the programming analysis process. These exposed listeners aren’t tuned in by choice. Knowing how to filter out that noise will become an essential skill.
Perhaps the most interesting aspect of Snyder’s analysis came when looking at Core listeners. These listeners spend more time with your station than any other. Previously, that definition was limited to a week of listening. Snyder demonstrated how some stations gather almost all their Core listeners over the course of a week while other stations gather their Core over the course of a month. Fringe by week, Core by month. This is an important idea, one that we’ve touched on before at radiosutton; PPM is offering insights about how listeners use stations over time. That’s important to public radio because the long-term relationship with listeners is what drives giving.
Most of the initial PPM programming analyses are not going to focus on the long-term listening patterns of your audience. Several times during the fly-in, key Arbitron representatives spoke of analyzing smaller, more short-term chunks of data. We were told ad agencies are interested in “commercial-level” ratings, meaning that advertisers want to know the audience for specific commercials. Arbitron President Steve Morris thinks minute-by-minute analyses of programming using PPM are key to radio’s future.
This push to drill down for a microscopic view, rather than step back and get a bigger picture of a station’s overall performance and service, will actually cloud our understanding of audiences before it clarifies it. No one will know what changes in numbers actually mean until repeating patterns emerge in the data. Here’s an example of what you might see when analyzing minute-by-minute data for Morning Edition in a market the size of Houston, one of the PPM test markets.
Monday, 7:04am – 10 PPM panelists tuned to your station as the first NPR newscast segment ends.
Monday, 7:05am – 11 PPM panelists tuned to your station during local underwriting credits.
Monday, 7:06am – 11 PPM panelists tuned to your station during local news.
Monday, 7:08am – 9 PPM panelists tuned to your station during local news.
Monday, 7:10am – 7 PPM panelists tuned to your station for the “A” segment of Morning Edition.
And on Tuesday those numbers are reversed, with 7 PPM panelists tuned to your station at 7:04am and 10 PPM panelists tuned in at 7:10am.
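With a panel this small, a shift of just a few panelists produces dramatic swings in the reported audience. A minimal sketch of the arithmetic, using only the hypothetical counts from the example above (the numbers are illustrative, not real PPM data, which would also be weighted):

```python
# Hypothetical minute-by-minute panelist counts from the example above.
monday = {"7:04": 10, "7:05": 11, "7:06": 11, "7:08": 9, "7:10": 7}
tuesday = {"7:04": 7, "7:10": 10}  # Tuesday's counts are "reversed"

def pct_change(before, after):
    """Percent change between two panelist counts, rounded to whole percent."""
    return round((after - before) / before * 100)

# Three panelists flip, and the minute-level "audience" swings 30%.
print(pct_change(monday["7:04"], tuesday["7:04"]))  # -30
# A single panelist (1 of 10) moving in or out reads as a 10% change.
print(pct_change(10, 11))  # 10
```

The point is not the code but the denominators: with ten panelists behind a minute-level number, one person’s vacation is a ten-percent story.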
What does it mean? Do 30% fewer listeners like Carl Kasell on Tuesday than on Monday? What if Carl was off on Monday but back on Tuesday? Do listeners like local underwriting better than the “A” segment of Morning Edition?
It’s hard to tell because PPM is capturing more than listening information. It is capturing lifestyle information too, though those details cannot be teased out into a report. Maybe a PPM panelist is on vacation one day and back in town the next. That could be a 10% jump in audience at the microscopic level.
We’re going to need a lot of experience with PPM to understand where it is showing us listener reactions to programming and where its results should be ignored. It doesn’t help that Arbitron is resisting public radio’s continued use of proprietary tools such as Listener PC and AudiGraphics to analyze PPM data. Those tools would provide a valuable filter in the new PPM world.
That is the essence of the programming analysis challenge of PPM: how to filter out the noise created by all of this new data and get to information that can help public radio apply a better understanding of its audience to programming decisions. It’s going to take a long time.
Tomorrow: PPM and Pledge Drives