Tuesday, January 23, 2007

What Can I Do Today?

In the spirit of NPR's 7.8 Project, which aims to increase the average number of weekly tune-in occasions to public radio from 6.8 to 7.8, here is a suggestion:

Find five hours of programming in your schedule to improve. Ask this question: "What can we do to fix those five hours so that audience Loyalty for those five hours equals the average Loyalty for the week?"

Maybe it is better cross promotion. Maybe it is selecting stronger music. Perhaps it is improving the on-air performance of the host. Maybe you have to change programs.

There must be five hours on your schedule that can be fixed in such a way that your current audience will tune-in more. Every station has such an opportunity.

Find them today.

Then fix them in the next few weeks.
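
If you have hourly audience estimates in hand, flagging candidate hours is a quick exercise. Here is a minimal sketch in Python, assuming Loyalty can be boiled down to one number per broadcast hour, however your research provider defines it. All of the figures are made up.

# A minimal sketch: flag the weakest hours in a weekly schedule by Loyalty.
# Assumes Loyalty can be expressed as one number per broadcast hour, however
# your research provider defines it. All figures below are made up.

weekly_loyalty = {
    ("Mon", 6): 0.48, ("Mon", 7): 0.52, ("Mon", 19): 0.31,
    ("Wed", 13): 0.36, ("Sat", 14): 0.28, ("Sun", 22): 0.25,
    # ...one entry per hour of the broadcast week
}

average = sum(weekly_loyalty.values()) / len(weekly_loyalty)

# The five hours furthest below the weekly average are the candidates to fix.
candidates = sorted(weekly_loyalty, key=weekly_loyalty.get)[:5]

for day, hour in candidates:
    value = weekly_loyalty[(day, hour)]
    print(f"{day} {hour}:00  Loyalty {value:.2f}  (weekly average is {average:.2f})")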

Friday, January 05, 2007

RadioSutton 2010

Today is my 20th anniversary in public radio and the 10th anniversary of launching my public radio consulting business. No, there are no big announcements from me today, just a few stories and a big thank you.

I started at NPR on January 5, 1987 after almost 10 years in commercial radio. NPR Programming VP Jay Kernis was among the people who interviewed me for my first job at NPR.

Jay was producer of Weekend Edition Saturday at the time. He looked at my resume, which included carefully selected ratings successes I achieved as a commercial radio PD, and said, “We don’t really pay much attention to this stuff, but we probably should.”

Coming from commercial radio and paying attention to ratings made some folks in the NPR newsroom a bit wary of me. I wasn’t at NPR very long when I started taking some heat for wearing my Brooks Brothers suits to work every day. I was actually chastised, and not always good-naturedly, for being a conformist. I didn’t understand that one since just about everybody around me at the time looked like they shopped at Banana Republic.

Fifteen years later, while wearing some really sharp Banana Republic khakis with my Brooks Brothers button-down shirt (open collar, no tie), a newly-arrived underwriting executive at a client station, just off the commercial broadcasting boat himself, declared the station no longer had use for my services as I was one of public radio’s old Birkenstock crowd. It didn’t seem to matter that I was really wearing Rockports.

Happily, those two fashion-profiling incidents were nothing more than humorous blips in what has been a great twenty years. I’ve been fortunate to work in just about every area of public radio, nationally and locally: listener fundraising, underwriting, research, promotion, programming, news, the web, engineering, and planning and policy. I have to tell you, there are a lot of terrific people in public radio. It’s why listeners pay attention, why they give money, and why they stay with public radio for decades.

In that regard, I’m just like a listener. Thanks for so many good years. I’m really looking forward to this one.

Thursday, January 04, 2007

PPM and Pledge Drives

Arbitron’s Portable People Meter (PPM) should provide important insights into how listening changes, if at all, during on-air pledge drives. As I suggested in the PPM and Programming Analysis post, the big-picture analyses will be more important than the microscopic ones. Here’s why.

Having looked at the currently available PPM analysis tools, it is clear that PPM data should not be thought of as comparable to auditorium dial testing. In auditorium dial testing, survey participants are instructed to listen to programming samples and immediately push buttons on a keypad to indicate whether or not they like what they hear.

People don’t always push the button in the real world. Sometimes they mentally tune out programming they don’t like. Sometimes they wait it out. Clock radios go off at different times of the morning, sometimes during a compelling “D” segment of Morning Edition and sometimes during pledge breaks that pre-empt the “E” segment at 10 minutes before the hour. People hearing a pledge break might leave in the middle of it because that’s when they always leave the house. It will be wrong to assume that all PPM tune-ins and tune-outs are purely a vote for the programming of the moment. The meter isn’t that sensitive a research tool.

PPM will be a valuable tool for examining whether or not a station’s weekly Cume and AQH are affected by pledge drives. There is a good chance that pledge-induced fluctuations in daily Cume will show up in PPM. We might get indications about the effects of pledge drive length on listening patterns.

One hypothesis worth testing is that listeners’ tolerance for pledge drives goes down as a drive progresses. A certain amount of fundraising might be acceptable before listening behaviors are affected. We should also be able to measure whether there are lingering effects from pledge drives. How long does it take to recover from any audience loss due to pledge drives?
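
Once daily Cume estimates are available, the recovery question becomes a simple calculation. Here’s a minimal sketch in Python; the figures and the 95%-of-baseline definition of “recovered” are illustrative assumptions, not Arbitron metrics.

# A minimal sketch of the recovery question, assuming PPM gives us a daily
# Cume estimate. The numbers and the 95%-of-baseline threshold are
# illustrative assumptions, not Arbitron definitions.

pre_drive  = [152_000, 149_000, 155_000, 151_000, 150_000]            # daily Cume before the drive
post_drive = [128_000, 134_000, 141_000, 147_000, 150_500, 152_000]   # daily Cume after it ends

baseline  = sum(pre_drive) / len(pre_drive)
threshold = 0.95 * baseline   # call it "recovered" at 95% of the pre-drive baseline

recovery_day = next(
    (day for day, cume in enumerate(post_drive, start=1) if cume >= threshold),
    None,
)

print(f"Pre-drive daily Cume baseline: {baseline:,.0f}")
if recovery_day is None:
    print("Daily Cume has not yet returned to the pre-drive baseline.")
else:
    print(f"Daily Cume recovered {recovery_day} days after the drive ended.")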

The answers to these larger questions can help public radio stations raise more money while further minimizing any negative effects pledge drives have on listening. These are the types of questions PPM will answer, but only after we’ve had a chance to look at and understand the data from several markets. That’s a few years down the road. In the meantime, it will be important to resist the temptation to over-analyze early PPM results, especially those on the microscopic level.


Wednesday, January 03, 2007

PPM and Programming Analysis

Arbitron’s new Portable People Meter (PPM) has the potential to significantly advance how radio program directors use Arbitron data to analyze audience response to programming and promotion. A sliver of that potential is being revealed as Arbitron rolls PPM out for commercial use. It also looks like it could take a few years for researchers and programmers to fully understand what all these new data mean for public radio, especially since PPM is not rolling out all at once.

One of the first PPM findings is that most stations have more Cume audience than reported by the diaries. Arbitron’s VP for PPM Sales, John Snyder, gave an excellent presentation on this topic at Arbitron’s recent consultant fly-in.

Some of the additional Cume audience comes from actual listeners not previously captured by the diaries. But Snyder also demonstrated how some of the additional Cume is made up of people who are merely temporarily “exposed” to a station. For some stations, this “exposed Cume” is made up of a large number of people outside of the station’s target audience. That makes sense if you think about all of the radio stations you are exposed to when shopping, getting a haircut, and the like. In this regard, PPM is adding “noise” to the programming analysis process. These exposed listeners aren’t tuned-in by choice. Knowing how to filter out that noise will become an essential skill.
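
One plausible way to build that filter is to screen out the briefest encounters before counting Cume. Here’s a minimal sketch in Python; the session records and the five-minute cutoff are my own illustrative assumptions, not Arbitron’s editing rules.

# A minimal sketch of one way to screen out "exposed Cume": ignore very short
# encounters before counting unique listeners. The session records and the
# five-minute cutoff are illustrative assumptions, not Arbitron's editing rules.

sessions = [
    ("panelist_01", 45),   # (panelist id, minutes of exposure to your station)
    ("panelist_02", 2),    # probably overheard in a store or a cab
    ("panelist_03", 12),
    ("panelist_02", 3),
    ("panelist_04", 90),
]

MIN_MINUTES = 5

raw_cume      = {pid for pid, minutes in sessions}
screened_cume = {pid for pid, minutes in sessions if minutes >= MIN_MINUTES}

print(f"Raw Cume (everyone exposed): {len(raw_cume)} panelists")
print(f"Screened Cume ({MIN_MINUTES}+ minutes): {len(screened_cume)} panelists")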

Perhaps the most interesting aspect of Snyder’s analysis came when looking at Core listeners. These listeners spend more time with your station than any other. Previously, that definition was limited to a week of listening. Snyder demonstrated how some stations gather almost all their Core listeners over the course of a week while other stations gather their Core over the course of a month. Fringe by week, Core by month. This is an important idea, one that we’ve touched on before at RadioSutton: PPM is offering insights about how listeners use stations over time. That’s important to public radio because the long-term relationship with listeners is what drives giving.
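
As a rough illustration of how that distinction might be measured, here’s a small Python sketch that asks what share of a station’s monthly Core listeners had already appeared by the end of the first week. The listener records are hypothetical.

# A minimal sketch of the week-versus-month idea: what share of a station's
# monthly Core listeners had already shown up by the end of week one?
# The listener list and first-tune-in days below are hypothetical.

first_tunein_day = {
    "core_01": 1, "core_02": 2, "core_03": 4, "core_04": 6,
    "core_05": 9, "core_06": 15, "core_07": 22, "core_08": 27,
}

monthly_core = len(first_tunein_day)
week_one     = sum(1 for day in first_tunein_day.values() if day <= 7)

print(f"{week_one / monthly_core:.0%} of monthly Core listeners appeared in week one")
# A station near 100% gathers its Core by week; a much lower share means
# Core by month, something only a longer measurement window can reveal.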

Most of the initial PPM programming analyses are not going to focus on the long-term listening patterns of your audience. Several times during the fly-in, key Arbitron representatives spoke of analyzing smaller, more short-term chunks of data. We were told ad agencies are interested in “commercial-level” ratings, meaning that advertisers want to know the audience for specific commercials. Arbitron President Steve Morris thinks minute-by-minute analyses of programming using PPM are key to radio’s future.

This push to drill down for a microscopic view, rather than step back and get a bigger picture of a station’s overall performance and service, will actually cloud our understanding of audiences before it clarifies it. No one will know what changes in numbers actually mean until repeating patterns emerge in the data. Here’s an example of what you might see when analyzing minute-by-minute data for Morning Edition in a market the size of Houston, one of the PPM test markets.


Monday, 7:04am – 10 PPM panelists tuned to your station as the first NPR newscast segment ends.
Monday, 7:05am – 11 PPM panelists tuned to your station during local underwriting credits.
Monday, 7:06am – 11 PPM panelists tuned to your station during local news.
Monday, 7:08am – 9 PPM panelists tuned to your station during local news.
Monday, 7:10am – 7 PPM panelists tuned to your station for the “A” segment of Morning Edition.

And on Tuesday those numbers are reversed with 7 PPM panelists tuned to your station at 7:04am and 10 PPM panelists tuned in at 7:10am.

What does it mean? Do 30% fewer listeners like Carl Kasell on Tuesday than on Monday? What if Carl was off on Monday but back on Tuesday? Do listeners like local underwriting better than the “A” segment of Morning Edition?

It’s hard to tell because PPM is capturing more than listening information. It is capturing lifestyle information too, though those details cannot be teased out into a report. Maybe a PPM panelist is on vacation one day and back in town the next. That could be a 10% jump in audience at the microscopic level.
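
To see how far simple day-to-day availability can move numbers this small, here’s a rough simulation in Python. The panel size and the chance that a panelist is near a radio at all on a given day are made-up assumptions, not Arbitron’s sample design.

# A minimal sketch of how much swing a handful of panelists can create at the
# minute level. The panel size and the 80% chance that a given panelist is
# around the radio at all on a given day are made-up assumptions.

import random

random.seed(7)

PANELISTS   = 10    # roughly the in-tab count in the Morning Edition example
P_AVAILABLE = 0.8   # chance a panelist is in a position to be "heard" that day

def minute_estimate():
    """Simulate one minute-level count: each panelist is caught with probability P_AVAILABLE."""
    return sum(1 for _ in range(PANELISTS) if random.random() < P_AVAILABLE)

monday, tuesday = minute_estimate(), minute_estimate()

print(f"Monday: {monday} panelists, Tuesday: {tuesday} panelists")
if monday:
    print(f"That is a {(tuesday - monday) / monday:+.0%} swing "
          "with no change in programming at all.")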

We’re going to need a lot of experience with PPM to understand where it is showing us listener reactions to programming and where its results should be ignored. It doesn’t help that Arbitron is resisting letting public radio continue to use proprietary tools such as Listener PC and AudiGraphics to analyze PPM data. Those tools would provide a valuable filter in the new PPM world.

That is the essence of the programming analysis challenge of PPM: how to filter out the noise created by all of this new data and get to information that can help public radio apply a better understanding of its audience to programming decisions. It’s going to take a long time.

Tomorrow: PPM and Pledge Drives

Tuesday, January 02, 2007

PPM Update

The Portable People Meter (PPM) was the main subject of Arbitron’s December consultant fly-in. The headline from that meeting was the PPM rollout schedule. That schedule includes the “PPM Currency Dates,” the dates after which your diary data can no longer be used for underwriting sales.

There were several PPM items worthy of note. Today’s post focuses on some of the issues that will affect many public radio stations. Tomorrow’s post will focus on programming analysis in a PPM world.

In-the-Book: Public radio listening will be reported alongside commercial radio listening. This is one of the reasons PPM will be more expensive than the current Arbitron service. It also means no more flying under the radar for public radio, a point that was made several times during the meeting. PPM should help generate more underwriting sales at public radio stations in Metros.

Metro Only: PPM will measure Metro geographies only. Diaries will still be used in the TSA and DMA. Stations outside of Metros should still encode their signals for the PPM measurement. Stations with a significant number of listeners outside of the Metro are going to be at a disadvantage in a PPM world.

PPM Versus Diary: To sell commercial stations and advertisers on PPM, Arbitron has to point out the weaknesses of the diary, even though thousands of stations will remain dependent on diary data. It seems inevitable that those stations will be perceived as having less valuable data than those with PPM. In addition to the qualitative differences, the immediacy of ratings data will be an issue. There will be 13 PPM books per year. There will be two TSA/DMA books per year. Stations with significant audiences, but not in the Metro, will not be able to deliver the audience data agencies and advertisers want when they want it.

PPM Plus Diaries: TSA, DMA, statewide, regional, and national audience estimates are headed to a mixed-methodology world. Arbitron’s current plan is to simply add Metro/PPM audience estimates to non-Metro/diary estimates for any geography bigger than a Metro. The sales currency of these estimates will be limited. Based on the tools I've seen to-date, it’s hard to imagine the hybrid PPM/Diary estimates will have much value for programming analysis. Hopefully, we can change that, at least for public radio.

The Care and Feeding of Your PPM Encoder: The PPM measurement system depends on your station embedding an Arbitron PPM code in your signal. The good news: Arbitron provides the encoder. After that, it’s up to you to make sure it is always working properly. As far as Arbitron is concerned, if your signal isn’t encoded, your transmitter is off and you won’t be measured. Arbitron is working on technologies that identify your station even when it is not encoded, but they made it clear that no adjustments are made in the ratings for stations that don’t put out a properly encoded signal. Put another way, engineers have just become more important to your station's Arbitron success.
