What's In a Number? (update)
You can read Current's article here.
If you are linking to radiosutton from the Current site, my original post on this topic can be found here.
I will also use this opportunity to once again call for greater accountability in the creation and publication of audience estimates for radio networks and their programs.
Individual radio stations aren't allowed to make up and publish their own unverified audience estimates. Allowing radio networks to control, selectively release, and sometimes hide the information they use to calculate their national audience numbers is an unacceptable business practice. It's bad for all of radio, not just public radio.
2 Comments:
I've always kinda thought that radio's dirty little secret is that Arbitron numbers are a load of B.S. to begin with anyway. The diary method is so rife with potential for error and abuse as to be practically meaningless, isn't it? If NPR is "cooking the books", isn't that really just "reheating the leftovers"?
I would think what's more relevant is to pick a certain method of interpreting the results and stick with it. That way you're more likely to track gains and losses, which are admittedly somewhat less useful to advertisers but definitely very useful to programmers. For example, knowing that a given show has lost 5% of its audience is valuable data, even if you're not really sure how much audience it had to begin with (a sketch below illustrates why).
Obviously this concept falls apart when you're talking about smaller audience numbers to begin with. But for the big shows it would seem an acceptable compromise.
Alternatively, getting the damn Portable People Meters out and in play would probably be a much better solution to this entire problem; they would provide much firmer results instead of everyone arguing over how squishy data is "interpreted".
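To sketch that trend-tracking intuition with made-up numbers: if the methodology's error is roughly constant from survey to survey, it cancels out of a percentage trend, so the trend can be meaningful even when the level isn't. The bias factor and audience figures below are purely illustrative, not real Arbitron data.

```python
# Hypothetical numbers: suppose the diary method consistently overstates
# a show's audience by some unknown factor (1.3 here, purely illustrative).
bias = 1.3

true_last = 100_000   # true audience, last survey (unknowable in practice)
true_this = 95_000    # true audience, this survey (a real 5% loss)

measured_last = true_last * bias
measured_this = true_this * bias

# The bias factor cancels out of the ratio, so the measured percentage
# change matches the true percentage change.
measured_change = (measured_this - measured_last) / measured_last
print(f"Measured change: {measured_change:.1%}")   # -5.0%
```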
Actually, Arbitron's diary data are remarkably consistent, even in smaller markets. Yes, PPM will be a significant improvement for local data, but it will create real problems for national audience estimates because different markets will use different methodologies (diary and PPM).
But what's not consistent from survey to survey is Cume duplication among stations. If it were, the software could account for it and remove it. So trending duplicated Cumes doesn't work.
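Here's a minimal sketch of the problem, using hypothetical listener IDs rather than real survey data: when the overlap between two stations carrying the same program shifts from one survey to the next, the sum of station Cumes moves even though the real audience doesn't.

```python
# Hypothetical listener IDs. A network program airs on stations A and B;
# some listeners hear it on both stations.

# Survey 1: the stations share 2 listeners.
a1, b1 = {1, 2, 3, 4, 5, 6}, {5, 6, 7, 8}
# Survey 2: the same 8 people listen, but the overlap grows to 4.
a2, b2 = {1, 2, 3, 4, 5, 6}, {3, 4, 5, 6, 7, 8}

for label, (a, b) in {"Survey 1": (a1, b1), "Survey 2": (a2, b2)}.items():
    summed = len(a) + len(b)     # what adding station Cumes reports
    unduplicated = len(a | b)    # the actual number of people
    print(f"{label}: summed cume = {summed}, real audience = {unduplicated}")

# Survey 1: summed cume = 10, real audience = 8
# Survey 2: summed cume = 12, real audience = 8
# The summed number rose 20% while the real audience didn't change at all,
# because the duplication shifted between surveys. That's why trending
# duplicated Cumes can't substitute for unduplicated estimates.
```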
And such conversations are a distraction from the larger issue: listeners are being double counted, and public radio is claiming listeners that aren't there. It raises an interesting question. If a journalist reports on an imaginary person, it is a violation of the public trust. Why isn't it one when public radio reports imaginary listeners?