Report by Jane Perry

Take me to your reader

In February 1981 a group of media researchers met in New Orleans to talk about readership research.

In November 1999 they met again in Florence. Only a few of the original delegates still attend, but they have been joined over the years by practically everyone else in the world with any serious interest in the subject. It is the best forum in the world for readership research, though also the only one.

Records were broken in Florence. There were more delegates attending (240) from more countries (30) than ever before. There were delegates from Slovenia, China and Mexico for the first time. There may have been more speakers and more papers; the strict quarter-hour time limit allowed a taste of many views, even if it rendered some papers incomprehensible.

The diversity of delegates was not matched by the speakers. Two-thirds of papers were Anglo-Saxon, and all but four of these were English or American. The UK is an important centre for readership research, but it cannot be more important than the whole of the rest of Europe put together. Perhaps the sponsors could actively seek papers, rather than rely only on those submitted.

The main value of the symposium is to generate new ideas based on shared experiences from different countries (Maria-Teresa Crisci). In 1999, the most important theme was the response rate, coming to the front for the first time, as readership methodology slid into the background.

The battle between Recent Reading and Through-The-Book has dominated every previous symposium. Recent Reading may be the worst measurement system in the world but, like democracy, it is still better than all the others, as Michael Brown pointed out. This year it has finally achieved de facto status as the standard for readership research worldwide. This new status has not diminished the criticism. Some is familiar, such as Neil Shepherd-Smith’s elegant dissection by mathematical probability, enthusiastically endorsed by Ron Carpenter (1).

(There is a counter-argument. But of those who ever understood it, some are dead, some have gone mad, and Pym Cornish has forgotten.)

Some criticism is new, or from unexpected sources. Michael Brown made a thorough analysis of the possible causes of the well-known telescoping effect, in which Average Issue Readership claims are affected by the recency scale used. He regretted the lack of any systematic examination of this phenomenon, which has been known since 1971. It has a major and variable effect on all readership figures produced by the Recent Reading method, but it is ignored in practice, and rarely even acknowledged in theory.

Response rates: a growing problem

The problems of response rate have also been known for many years. This year they were given the time and attention they deserve, spread over two half-days.

Response rates have been declining worldwide for many years. The response rate on the UK National Readership Survey (NRS) is barely above 60%, and below 50% in the London area; the US census expects only 60%, and US newspaper studies are commonly under 40% (the MRC accredited Simmons at 15%); the Dutch have not carried out a census since 1971, because of poor response. Some speakers were unconcerned. Ashok Sethi maintained that substitution produced no evidence of bias in the results for the Indian NRS, and was much cheaper than trying to maximise true response.

Ivor Thompson also suggested that there was little benefit in chasing the final few percentage points of response; audience profiles did not change, and there was a serious possibility of being accused of harassment and intrusion.

Most speakers, and delegates, believe that the response rate is a problem for readership research (although not for most users of other kinds of market research, as Ivor Thompson pointed out). There were several suggestions as to why it is decreasing, with ‘sugging’ (selling under the guise of market research) and ‘fugging’ (fund-raising …) top of the list of likely candidates, along with the growth of entrance security systems and mobile phones.

Most interesting was the analysis of reasons for non-response, and the implications for readership. Several speakers identified refusers as different in kind from respondents; generally they were less self-confident, less sociable, and less inclined to take part in similar activities, such as filling in guarantee cards or responding to direct mail. Where readership information was available, they were generally lighter readers than those interviewed.

In contrast, people who were non-responders by reason of non-contact were almost exactly the opposite: younger, more upmarket, more mobile, and heavier readers.

This suggests that strategies for improving response rates must be handled sensitively; encouraging refusers to co-operate by offering differential incentives, for example, or selecting friendlier interviewers, might result in lower readership levels. Ivor Thompson and Costa Tchaoussoglou both reported experiences of this kind. Extending fieldwork periods would tend to increase readership, particularly for younger, upmarket titles, by bringing in more people who would otherwise have been absent from the sample. These biases might be further corrected by differential weighting, attribution, or any of the other devices used to correct for deficiencies in the achieved sample.
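For the technically minded, the simplest of those devices, differential weighting, can be sketched in a few lines. This is a minimal cell-weighting example, with invented counts and population targets, showing the idea: under-represented groups are simply weighted up towards their known share of the population.

```python
# Minimal sketch of differential weighting: re-weight the achieved sample
# towards known population targets so under-represented cells count more.
# The cells, counts and targets below are invented for illustration.
sample_counts = {"young": 150, "old": 350}       # achieved interviews
population_share = {"young": 0.45, "old": 0.55}  # known targets (e.g. census)

total = sum(sample_counts.values())
weights = {
    cell: population_share[cell] / (sample_counts[cell] / total)
    for cell in sample_counts
}
print(weights)  # young weighted up (1.5), old weighted down (~0.79)
```

Real surveys weight on several dimensions at once (rim weighting), but the principle is the same.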

The implications of low response rates go beyond mere technical considerations. Dan Julevich and Jayne Spittler considered some of the auditing issues. There was a lively discussion about the possible legal and financial ramifications if a publisher disagreed with his reported figures. As Steve Douglas commented, it’s getting pretty scary.


Modelling

Another strong session dealt with modelling. Modelling is an integral part of print measurement. There cannot be any reach and frequency figures for readership derived from sample surveys without some kind of modelling. Modelling is also an integral element of the readership symposia. But the subject can be incomprehensible and sterile, even for the avid technicians who usually attend. This year, interest and attendance were high. The problems of low response rates, respondent overload and inadequate readership techniques can be alleviated, if not always entirely solved, with the help of modelling.
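The point is easy to demonstrate. A survey yields, at best, each respondent's probability of reading an issue of each title; turning that into reach and frequency for a multi-insertion schedule requires a model. Here is a minimal sketch under the classic simplifying assumption that insertions are independent; the respondents, weights and schedule are invented for illustration.

```python
# Minimal reach/frequency sketch: treat each respondent's readership
# probability per title as the chance of seeing any single insertion,
# and assume insertions are independent (the classic simplification).
# All data below is invented for illustration.

schedule = {"TitleA": 3, "TitleB": 2}  # insertions bought per title

respondents = [
    {"weight": 1.2, "probs": {"TitleA": 0.6, "TitleB": 0.1}},
    {"weight": 0.8, "probs": {"TitleA": 0.0, "TitleB": 0.5}},
    {"weight": 1.0, "probs": {"TitleA": 0.3, "TitleB": 0.3}},
]

def reach_and_frequency(respondents, schedule):
    total_w = sum(r["weight"] for r in respondents)
    reach_w = 0.0      # weighted persons seeing at least one insertion
    exposures_w = 0.0  # weighted total expected exposures
    for r in respondents:
        p_none = 1.0
        expected = 0.0
        for title, n in schedule.items():
            p = r["probs"].get(title, 0.0)
            p_none *= (1.0 - p) ** n   # sees none of the n insertions
            expected += p * n          # expected exposures to this title
        reach_w += r["weight"] * (1.0 - p_none)
        exposures_w += r["weight"] * expected
    reach = reach_w / total_w
    avg_freq = exposures_w / reach_w if reach_w else 0.0
    return reach, avg_freq

reach, freq = reach_and_frequency(respondents, schedule)
print(f"reach {reach:.1%}, average frequency {freq:.2f}")
```

Real models replace the independence assumption with fitted within- and between-title duplication, but some model there must be.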

TV is another major influence on print modelling. It has always had a dominating, one-way influence on print research, in both theory and practice. This year the American adoption of TV optimisation models was the driving element, rekindling interest in models of all kinds, particularly optimisers, and in mixed-media accumulation models.

As a result of both these factors, modelling is probably more important for everyone concerned with readership research than ever before.

There were several useful papers, offering interesting (and even new) solutions to some of these problems. One of the simplest and clearest was given by Kristian Arnaa and Peter Mortensen. They suggested that CHAID analysis might be a better way to estimate the readership of newspaper sections than over-burdening the respondent with lengthy, boring and sometimes unanswerable direct questions. Several of the UK delegates were observed scribbling furiously.
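For those who were not scribbling: CHAID builds a segmentation tree using chi-squared splits on categorical predictors, so that section readership can be estimated for segments rather than asked of every respondent. The sketch below illustrates the principle only; scikit-learn has no CHAID implementation, so an ordinary decision tree stands in for it, and the predictors and data are invented.

```python
# Sketch of the idea behind Arnaa and Mortensen's suggestion: infer section
# readership from a handful of respondent characteristics instead of asking
# a long battery of direct questions. CHAID proper uses chi-squared splits;
# scikit-learn has no CHAID, so an ordinary decision tree stands in here.
# The features and data are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: age band (0-3), sex (0/1), reads main paper (0/1)
X = np.column_stack([
    rng.integers(0, 4, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
# Hypothetical target: reads the sports section (driven by sex + main paper)
y = ((X[:, 1] == 1) & (X[:, 2] == 1) & (rng.random(n) < 0.8)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
# Each leaf gives an estimated section-readership probability for a segment,
# which can be attributed to respondents who were never asked directly.
print(tree.predict_proba([[2, 1, 1]])[0][1])  # P(reads section | segment)
```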

Martin Frankel easily won the prize for the highest number of formulae in a presentation, comfortably outpacing Gilles Santini, who is usually the clear favourite in this category. But, as always with Frankel, there was no unnecessary complication for its own sake. The formulae were introduced solely to evaluate the candidate curves, and to demonstrate that the final choice of curve for estimating audience accumulation was the most efficient and appropriate. With current computing power this is a feasible and worthwhile task, and I look forward to further instalments.
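The underlying exercise is easy to sketch, even though Frankel's actual candidate curves are not reproduced here: fit each functional form to observed cumulative audience, then compare the quality of fit. The negative-exponential curve and the data points below are illustrative assumptions only.

```python
# The kind of exercise Frankel described: fit candidate accumulation curves
# to observed cumulative audience and compare their fit. The paper's actual
# functional forms are not given here; the negative-exponential curve below
# and the data points are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([1, 2, 4, 7, 14, 28, 42], dtype=float)
cum_audience = np.array([0.21, 0.28, 0.35, 0.41, 0.47, 0.52, 0.54])  # invented

def neg_exp(t, ceiling, rate):
    """Audience accumulated by day t, approaching a ceiling."""
    return ceiling * (1.0 - np.exp(-rate * t))

params, _ = curve_fit(neg_exp, days, cum_audience, p0=[0.6, 0.2])
ceiling, rate = params
rss = np.sum((cum_audience - neg_exp(days, *params)) ** 2)
print(f"ceiling={ceiling:.3f}, rate={rate:.3f}, RSS={rss:.5f}")
```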

Steve Harris and James Collins gave an interesting paper on optimisation, proposing that a Genetic Algorithm (GA) approach would produce more actionable and realistic results than the traditional hill-climbing method. While some of their comments were debatable (the reason why TV optimisers are used more than print has nothing to do with the deficiencies of the existing print models, and their choice of a test hill climbing model was not optimal), their overall approach was stimulating. The main problem with print optimisers is not the absolute level of reach they achieve, but the general shape of the proposed schedule. Anything that leads to more realistic solutions is welcome.
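The contrast between the two approaches can be shown with a toy example. The sketch below optimises a small print schedule with a genetic algorithm; the titles, costs, budget and reach model are all invented, and a real optimiser would use survey-based duplication rather than the independence model here.

```python
# A toy version of the Harris/Collins contrast: optimise a print schedule
# with a genetic algorithm rather than hill-climbing. The reach model,
# costs and budget are invented; a real optimiser would plug in survey-
# based readership probabilities like those in the earlier sketch.
import random

random.seed(1)
TITLES = ["A", "B", "C", "D", "E"]
COST = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 2}
PROB = {"A": 0.30, "B": 0.25, "C": 0.20, "D": 0.12, "E": 0.10}
BUDGET = 20
MAX_INS = 6  # maximum insertions per title

def fitness(schedule):
    """Estimated 1+ reach under independence; over-budget schedules score 0."""
    cost = sum(COST[t] * n for t, n in zip(TITLES, schedule))
    if cost > BUDGET:
        return 0.0
    p_none = 1.0
    for t, n in zip(TITLES, schedule):
        p_none *= (1.0 - PROB[t]) ** n
    return 1.0 - p_none

def mutate(s):
    """Nudge one title's insertion count up or down."""
    s = list(s)
    i = random.randrange(len(s))
    s[i] = max(0, min(MAX_INS, s[i] + random.choice([-1, 1])))
    return s

def crossover(a, b):
    """Splice two parent schedules at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 2) for _ in TITLES] for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]  # keep the fittest schedules
    pop = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(30)
    ]
best = max(pop, key=fitness)
print(dict(zip(TITLES, best)), f"reach={fitness(best):.1%}")
```

Hill-climbing would instead adjust one schedule a step at a time and can stall at a local optimum; the population and crossover of a GA make it likelier to escape such traps, which is the nub of the argument.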

Renewed interest in the mechanics of print spilled over into the session on case histories and successful sales stories. This session, devoted to helping publishers sell the medium, is not, strictly speaking, within the remit of the symposium. But it is probably the most popular subject for many delegates, and it would be churlish to criticise its presence.

Julian Baim and Martin Frankel reported the first, and eagerly anticipated, results of MRI’s massive diary study of audience accumulation. Helen Johnston, and Veronique Debeer with Stef Peeters, also considered other aspects of print audience accumulation over time.

The Internet: threat or ally?

In contrast to some of the earlier sessions, those on the future, and the interaction between print and the Internet, were surprisingly reassuring, and consequently a little dull. There is nothing like a good irreconcilable technical argument between two experts to liven things up.

Paul Haupt tried to frighten the room with a future in which a high proportion of the population would be functionally illiterate. Most of the delegates remained resolutely unfrightened. Denise Gardiner reported that newspaper readership is essentially a rite of passage into adulthood. Consequently, low readership levels among young people do not necessarily presage the end of newspaper readership as we know it. She also asserted, controversially, that there is a positive correlation between Internet use and newspaper readership.


Anita Hague, Scott McDonald, Liz McMahon, Johannes Schneller and Jane Bailey all rushed to her assistance. They too had found positive correlations between the use of the Internet and print. Online surfers are not all pointy-headed nerds in anoraks; they are attractive, responsible, upmarket magazine readers. No doubt they are also enthusiastic respondents to print surveys. If anyone is losing out to the Internet, it is more likely to be TV, or possibly the users themselves, cutting back on their sleep to spend more time online. Magazines were likely to benefit overall from the new medium.

Scott McDonald described the low level of substitutability between print and the Internet, which were performing quite different, and complementary, functions for the user. Each could therefore help in building branding for the other. Tanya Deniz suggested that the Internet would help build media-neutral brands, by acting as a catalyst for media convergence. And no one even mentioned the weight of Internet ad spend going into conventional media to build online brands.

Overall it was a much better symposium than many. There were some new ideas, as well as confirmation and reworking of old ones. But the conclusions of the experts, particularly the Europeans, were surprisingly downbeat. This was attributed to a generation gap, which worried many of the older speakers. All the traditional craft skills of readership research begin to seem outdated in the interactive world of the new millennium. The vast improvements in modelling may make research itself redundant. The deficiencies of existing practices have been exposed much more clearly than the dangers of what may replace them.

Costa Tchaoussoglou was concerned that the young newcomers who are coming up with most of the innovations seem unaware of what has been learnt painfully over the last 20 years.

Erhard Meier was in a philosophical mood, which he attributed to information overload. He warned that delegates should be as sceptical of the criticism of Recent Reading as of the technique itself.

Michael Brown was in broad agreement. Placing himself firmly in the old, conservative camp, he welcomed new contributions, while worrying about the lack of technical knowledge which led to an inability to distinguish good data from bad. He urged delegates to take seriously their responsibility to justify and demonstrate the fairness of what they were doing. We must constantly recognise that we are in the business of estimation, not measurement. We must stick to the challenge of doing this as best we can, sharpening our existing tools, and grasping and using the new ones as they emerge.

The Atlantic gap

There were other gaps. That between Europe and America, or possibly between America and the rest of the world, was as wide as ever. Much of the American agenda was incomprehensible to Europeans; some Americans felt bemused by internal UK quarrels. All Europeans are aware of the difference in TV culture between the US and themselves. The division in print is just as wide, although frequently obscured by close personal friendships.

We had our fair share of new buzzwords. Bayesian probability has replaced neural networks as the favourite catch-all for fuzzy logic in probability models. Sheila Byfield came up with the glorious ‘premature extrapolation’, for the indecent optimism of many Internet predictions.

Over all the conclusions hung the cloud of cost. Most delegates had been surprised by the revelation of the poor state of the response rate. This could be improved by higher expenditure, as Roger Pratt pointed out. But more money will not be available. As Roy Morgan said, advertisers and agencies will not spend more. They believe the current readership figures, and do not understand, or want to understand, all the technical problems. When the truth is known, it is the researchers who will be blamed.

There was also more than a whiff of heresy in the air. The problems of low response rates for probability sampling suggested a re-consideration of the possibilities of quota sampling. The problems of Recent Reading might imply a return to frequency-based measurement, with careful modelling of probabilities from a separate survey. With uncertainty among many of the wisest researchers about the best way forward, there are greater possibilities for new thinking than for many years.

Dick Dodson said that past suggestions should be tracked and documented, to prove the value of these symposia. This seems an excellent idea. The structure of the sessions could also be profitably tightened. Too many speakers started their papers the same way, with a review of their subject, and ran out of time before presenting their key final charts. Michael Brown suggested that there should be a professional overview of each main topic before the individual papers, and this too would be a major improvement. Erhard Meier’s summary of research practices, and his excellent accompanying book, show how valuable this could be.

Finally, the awards should also be reconsidered. Too many speakers wasted valuable time with a fancy introduction, aiming for the prize for most entertaining paper. Perhaps this could be replaced, or augmented, with a prize for most useful paper?

Jane Perry is research director for The Media Edge Europe, Young & Rubicam’s media department.