Digital Research – Online Measurement: Too Many Numbers – Too Many Different Relationships

 Dick Bennett, ImServices Ltd

Steve Douglas, DJG Marketing

Bruce Rogers, Forbes

Gerard Broussard, Pre-Meditated Media

Worldwide Readership Research Symposium Valencia 2009 Session 2.5

Situation

Online measurement in the U.S. and its application for marketing and advertising is extraordinarily complex. The explosion of available data sources is sometimes viewed as a blessing but is more often seen as a challenging cornucopia of options for publishers, advertisers, agencies, researchers, investors, etc. to sift through. There is a wide range of audience metrics and response data that comprise the internet measurement landscape, with considerable crossover and duplication among the sources. Not surprisingly, the same metric captured across these sources can vary (sometimes widely) according to the different methodologies deployed for data collection and reporting.

Purpose

The goal of this paper is to provide insight into the nature and level of complexity of measurement and reporting challenges in the U.S. and make recommendations for improvement. The content will include:

  • The spectrum of measurement opportunities and their marketplace applications
  • The online ad sales challenges facing agencies and advertisers
  • The evolution of standards for web measurement and the contributions of the IAB and MRC
  • A perspective from a leading publisher, Forbes.com, on what they use and why in this complex and fragmented measurement environment

Spectrum of measurement opportunities

A simple framework for evaluating measurement opportunities in the U.S. (and perhaps elsewhere), is to view them within two dimensions: 1) their underlying methodologies for data collection and reporting and 2) how these techniques dovetail with their application in the advertising/media marketplace. So, the strengths and weaknesses of their methodologies make them more or less suitable for executing specific tasks in the advertising/media process. For example, panel-based companies dominate the supply of target demographic data and reach estimates while server-based sources are the foundation for ad currency, direct response metrics and website vitality.

Historically, panel-based data have been limited by sample size for reporting small sites and targets, but recently web-crawler based direct measurement (e.g., Quantcast) has emerged as another option to fill these holes and provide insight on the long tail of the internet. While panel-based data are not currently used as the currency of internet planning or buying, direct measurement holds potential promise for a “one-size-fits-all” solution for valuing web impressions; however, the debate is just beginning. And it seems likely that the industry will operate in a multi-source world for some time to come, as new advances in technology have the potential to impact measurement and reporting. In the meantime, the relationship between research sources and tasks is roughly captured in Chart 1.

Chart 1: U.S. Audience Measurement Opportunities

Panel-based systems provide qualitative metrics like demographics and target reach, while census-based ad-serving solutions are the currency for buying and post evaluation.

Media planning/placement – audience measurement
  • Panels/survey: target reach, target demographics, transactions (Nielsen NetRatings, comScore, @Plan, Quantcast, Compete)
  • 3rd party/publisher servers: impression valuation, clicks, behavioral/contextual targeting (Atlas, DoubleClick, website)

Post evaluation
  • Panels/survey: branding lift (Dynamic Logic, Insight Express)
  • 3rd party/publisher servers: audience delivery and clicks (Atlas, DoubleClick, website); website vitality, conversions and sales (Webtrends, Compete, Omniture, Atlas, DoubleClick)

In the U.S., Nielsen and comScore are the dominant sources used during the planning stages of the advertising media process. They are deployed primarily for identifying lists of sites that deliver specific advertising targets according to demography and audience size. Other sources like @Plan, Quantcast and Compete provide adjunct estimates, and reported metrics vary greatly from vendor to vendor. Below (Table 1) is an example of unique visitor (audience) data for Forbes.com and its competitive set from Nielsen NetView, comScore, Compete and Quantcast. No surprise to this ……………

Table 1: Forbes.com and Its Competitive Set

Measurements of Audience from Various Syndicated Research Companies May 2009

| Site | Nielsen/NetView UV, June 2009 | comScore UV, June 2009 | Compete UV, June 2009 | Quantcast UV, June 2009 | comScore vs. Compete (% diff) | NetView vs. Compete (% diff) | comScore vs. NetView (% diff) | Quantcast vs. Compete (% diff) |
|---|---|---|---|---|---|---|---|---|
| Universe | 191,035,000 | 193,896,000 | 184,645,048 | 212,000,000 | 3% | 5% | 1% | 15% |
| Forbes.com | 7,855,000 | 5,046,000 | 11,258,992 | 4,200,000 | -55.2 | -30.2 | -35.8 | -62.7 |
| Marketwatch | 2,477,000 | 1,516,000 | 4,956,882 | 1,450,000 | -69.4 | -50.0 | -38.8 | -70.7 |
| CnnMoney | 7,359,000 | 4,421,000 | 7,791,325 | 10,500,000 | -43.3 | -5.5 | -39.9 | 34.8 |
| TheStreet | 1,758,000 | 1,343,000 | 3,480,835 | 1,900,000 | -61.4 | -49.5 | -23.6 | -45.4 |
| BusinessWeek | 3,737,000 | 2,856,000 | 5,186,371 | 4,400,000 | -44.9 | -27.9 | -23.6 | -15.2 |
| WSJ | 7,485,000 | 3,972,000 | 10,706,263 | 3,500,000 | -62.9 | -30.1 | -46.9 | -67.3 |
| Smartmoney | 1,097,000 | 525,000 | 933,016 | 1,200,000 | -43.7 | 17.6 | -52.1 | 28.6 |
| Bloomberg | 3,881,000 | 2,289,000 | 5,519,342 | 4,100,000 | -58.5 | -29.7 | -41.0 | -25.7 |
| Economist | 696,000 | 223,000 | 588,016 | 1,100,000 | -62.1 | 18.4 | -68.0 | 87.1 |
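To make the relationships in Table 1 concrete, the short Python sketch below (added purely for illustration; the inputs are the Forbes.com figures from Table 1) reproduces the percentage-difference columns. Each comparison is simply (estimate A − estimate B) / estimate B.

```python
# Illustrative recomputation of the Table 1 percentage differences.
# Figures are the June 2009 unique-visitor estimates for Forbes.com.
netview   = 7_855_000    # Nielsen/NetView
comscore  = 5_046_000    # comScore
compete   = 11_258_992   # Compete
quantcast = 4_200_000    # Quantcast

def pct_diff(a, b):
    """Percentage difference of estimate a relative to estimate b."""
    return (a - b) / b * 100

print(f"comScore vs. Compete:  {pct_diff(comscore, compete):6.1f}%")   # -55.2%
print(f"NetView vs. Compete:   {pct_diff(netview, compete):6.1f}%")    # -30.2%
print(f"comScore vs. NetView:  {pct_diff(comscore, netview):6.1f}%")   # -35.8%
print(f"Quantcast vs. Compete: {pct_diff(quantcast, compete):6.1f}%")  # -62.7%
```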

Omniture, another service that measures unique visitors, page views and other metrics using log file techniques, produces even higher numbers: 14,000,000 in May 2009. It is very challenging to determine which metrics to use when a publisher is selling print and web advertising availability.

Following is an overview of the key U.S. audience measurement panels (Net Ratings, comScore and Compete), universe measurement tools which require tagging (e.g. Quantcast), and a brief review of the other types of measurement (Omniture and Google Analytics). We will also review the impact of fusions between print currencies and web metrics and their potential for the future (both panel and tag based products). A more detailed description of all these research sources can be found in the Appendix.

Nielsen NetView:

Sample: 29,000+ (25,000 home, 3,700 work)
Data collection: Passive PC metering technology tracks usage
Recruitment: RDD
Primary use: Audience demographics, target reach estimates
Comments: Weak business site sample

comScore:

Sample: 120,000 (50,000 home, 50,000 work, 20,000 university)
Data collection: Passive PC metering technology
Recruitment: RDD
Primary use: Audience demographics, target reach estimates, transactional behavior
Comments: Weak business site sample

MRI/Nielsen Fusion:

Sample: 25,000+ adults 18 and over
Data collection: In-person interview utilizing recall
Recruitment: Random probability sample
Primary use: Intermedia planning, particularly web/print planning
Comments: Favors web sites with hooks in both NetView and MRI

@Plan

Sample: 9,000
Data collection: Self-administered online survey (RDD telephone recruitment)
Recruitment: RDD
Primary use: Media planning for web sites
Comments: Many planners do not like recall data

Compete

Sample: 2,000,000
Data collection: Online clickstream panel and surveys
Recruitment: Panel recruited from 15+ sources; data are normalized
Primary use: To obtain more information that profiles their sites and competitors' sites
Comments: Limited use by the agencies

Quantcast

Sample: Census
Data collection: 925,000 cookies across 90,000 tagged publishers capture site-specific web usage by 200 million+ users
Recruitment: Users are tagged as they visit a web site
Primary use: Addressable advertising
Comments: Suffers from cookie rejection and deletion

In the US, ad impressions are the online currency as measured by third party servers – primarily DoubleClick and Atlas. Buyers only pay for the ad impressions that are reported by the third party servers. The Media Rating Council (MRC) performs an annual certification of the DoubleClick and Atlas ad server measurements. (The systems of many other publishers and third party ad serving companies have been certified by the MRC and other auditors.) One of the most perplexing problems is that the DoubleClick ad impression count for the client may not agree with the ad impression count of the seller. So we are faced with the problem that similarly certified ad server software can produce different results.

The online ad sales challenges facing agencies and advertisers

In late 2007 and early 2008 DJG conducted numerous interviews with a cross section of top media executives at Universal McCann, PHD, Initiative, OMD, Horizon, Media Smith, Starcom and others. We asked what their biggest internal challenges were, how they evaluate and buy online media, what they would like to see more or less of.

We see that agencies have their own issues. They maintain traditional media vs. interactive media silos, and cross-media buys have so far been an extremely slow process. The tide seems to be changing as agencies work to address this; one example is that Carat's new CEO is a former interactive agency head.

We also observed that agencies face rapid staff growth and high turnover in interactive, which makes training and continuity major concerns. Online buying is very labor intensive, open to error and requires a high degree of touch despite technological advances. Buyers are plagued by too many sets of research data that come out too frequently. The Interactive Advertising Bureau wants agency ad serving systems and processes to be audited/certified along with certifications of publishers and third party servers. At this time no agencies have agreed to that audit.

Panel research is used as directional and buyers only pay on the data provided by certified ad servers like DoubleClick or Atlas. Post campaign reconciliation can be a nightmare.

Many agencies “off the record” confide that they considered Google/Doubleclick and Microsoft/aQuantive (Atlas) a threat.

Right now the agencies tell us online buying is as much an art as it is a science. They say they try to look at as much data as possible. Many online buyers do not have much respect for recall data provided by MRI or by the panels, comScore Media Metrix and Nielsen @Plan/NetView, and most are not using the MRI fusion.

The Ipsos Mendelsohn Affluent Study has expanded its measurement of web sites; that data will be available in September 2009. It is a direct mail study, with a 40+% response rate, of household heads with more than $100,000 household income. It will be interesting to see how that data ends up being used. For a complete review of the methodology, go to their web site and refer to the Summary of Methods prepared by Erhard Meier. The reason this is even mentioned is that it will carry a large number of web sites that are companions to magazines, national newspapers and television shows, so net and gross audiences can be calculated across media platforms such as ESPN.

At this time, gut and prior success contribute heavily to the final buying decision. Many buyers are looking for options that are easier to buy. Search and ad networks can make it easier to buy ad impressions in bulk. Vertical web sites, especially smaller sites, are more time consuming to evaluate. Buyers are looking for unique targeted turnkey packages which publishers like Forbes.com and Time Inc. are trying to provide.

Evolution of standards for web measurement; IAB and MRC contributions

Review of IAB standards and how they can be audited

The US Interactive Advertising Bureau (IAB) has produced “agreed upon” standards for audience, ad impression and click measurement. These standards are all available on the IAB web site. Since IAB US is primarily an ad-centric organization, advertising standards are the most mature in the US. The standards have been endorsed by both the publishing and advertising communities.

Ad Impression Measurement

The IAB produced measurement standards for banners, streaming, rich media and advertising displayed within rich media applications. There is also content on the IAB web site that details the ad serving process. That process is described in the table below. The IAB, MRC and other auditors note the potential for error in all three phases of the process.

Ad Serving Phases

| Phase 1: Campaign Initiation and Entry | Phase 2: Processing the Campaign | Phase 3: Reporting on the Campaign |
|---|---|---|
| Order Entry | Ad Delivery | Reporting Controls |
| Trafficking | Ad Counting (filtration, etc.) | Disclosures |
| Inventory Management | | Error Correction |

Sources of Potential Error

Campaign initiation and entry – Problems are primarily related to mechanics: placing tags in incorrect positions, not executing order entry properly, and errors in trafficking the ad to the inventory management system. Also, when the inventory prediction system errs, traffickers are forced to start, stop and transfer campaigns, processes that lead to confusion and errors. The IAB recommends each ad agency be audited by a third party source. At this time, no agencies have publicly committed to an audit.

Processing the campaign – Audits have uncovered: a) incorrect use of technology, b) poor robot filtration techniques, c) lack of adequate cache control and d) poor security, system monitoring and software development controls, all of which can lead to errors in measurement.

Reporting on the campaign – Audits have uncovered: a) lack of monitoring of campaigns, b) lack of procedures to review reports before release, and c) uncontrolled restatement of reports without advertiser knowledge. Also, Atlas and DoubleClick have undergone extensive MRC accreditations, but there continue to be numerous cases in which reports on the same campaign show different numbers to the buyers and the sellers (publishers). This seems to be less of a problem to sellers when there is a large amount of inventory available; from a publisher's standpoint, it is easier to give a make-good than to have an argument.

As more publishers and ad servers get audited and certified, the frequency and volume of discrepancies seem to be shrinking, but there are still many publishers, ad servers and agencies that remain to be reviewed, and resistance to audit is strong. The certification process of ad systems appears to result in discrepancies lower than 10%, most less than 5%.
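As an illustration only, and not a formal MRC or IAB rule, the sketch below shows how a buyer-side analyst might screen campaigns against the kinds of discrepancy levels noted above; the campaign names and figures are invented.

```python
# Hypothetical reconciliation of buyer- and seller-side ad impression counts.
# Thresholds are illustrative, loosely based on the discrepancy levels noted above.
def discrepancy(buyer_count: int, seller_count: int) -> float:
    """Relative difference between the seller's and the buyer's impression counts."""
    return abs(seller_count - buyer_count) / buyer_count

campaigns = {
    "campaign_A": (1_000_000, 1_032_000),   # (buyer ad server, publisher ad server)
    "campaign_B": (2_500_000, 2_820_000),
}

for name, (buyer, seller) in campaigns.items():
    d = discrepancy(buyer, seller)
    status = "ok" if d < 0.05 else ("review" if d < 0.10 else "reconcile / make-good")
    print(f"{name}: {d:.1%} -> {status}")
```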

Click Measurement

This year, the IAB introduced a standard which delineates how clicks should be measured. The standard details the various statuses, and thus measuring points, within the life of a click, providing the distinction between a search marketer measuring the “redirect” and an advertiser measuring the “landing”. The standard also requires very intensive/robust filtration techniques to remove “invalid” clicks from measurements. Invalid clicks can fall into a few different categories, from duplicate clicks caused by double-clicking, to robotic click activity, to fraudulent click activity.
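The IAB guideline itself spells out the required filtration in detail. Purely as a sketch of the general idea (not the guideline's actual rules), duplicate and robotic clicks might be screened from a click log as follows; the field names, time window and robot list are hypothetical.

```python
# Minimal sketch of invalid-click filtration (illustrative only; the IAB
# guideline is far more detailed). Assumes each click is a dict with
# 'user_id', 'ad_id', 'user_agent' and 'ts' (epoch seconds) fields.
DUPLICATE_WINDOW_SECONDS = 2              # treat a repeat click within 2s as a double-click
KNOWN_ROBOT_AGENTS = {"ExampleBot/1.0"}   # hypothetical robot list

def filter_clicks(clicks):
    valid, last_seen = [], {}
    for click in sorted(clicks, key=lambda c: c["ts"]):
        if click["user_agent"] in KNOWN_ROBOT_AGENTS:
            continue                                   # robotic activity
        key = (click["user_id"], click["ad_id"])
        if key in last_seen and click["ts"] - last_seen[key] < DUPLICATE_WINDOW_SECONDS:
            continue                                   # duplicate (double-click)
        last_seen[key] = click["ts"]
        valid.append(click)
    return valid
```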

The IAB standard also provides for some additional tracking methods so that advertisers are able to perform reconciliation activities, connecting billed clicks to landings to conversions.

Click fraud detection companies, such as Click Forensics and Anchor Intelligence, provide click scoring services to give advertisers indications of which clicks are of higher quality and thus more likely to convert. Other auditing companies, like ImServices in the US, will review entire campaigns to surface indications that search marketers might be billing for invalid clicks, as well as to determine which campaign tactics are converting more favorably.

To date, only four companies (Google, MSN, Yahoo and Business.com – probably 95% of the US search marketplace) have been certified to the IAB Click Guideline. We are aware of several other companies that are considering, or preparing for, certification.

Audience Measurement

For years, US web site publishers and buyers have complained about the quality of measurements being published by various syndicated measurement organizations. As earlier discussed in this paper, web research metrics were seen to be vastly different. But also, publishers were measuring audience via their web logs, and finding huge differences when compared to panel-based metrics.

Such discrepancies have been claimed to be attributable to deficiencies in the various methods employed, e.g. panel-based vs. server. It is interesting to note that we know of no endeavors to attempt detailed reconciliations between the various methods, which might lead to an understanding and quantification of the various areas of discrepancy. (There was a study done 10+ years ago to understand the nature of the differences, but nothing conclusive.)

To address these concerns, in 2009 the US IAB introduced a standard for audience measurement, which is intended to be used as a basis for measurement by:

  • Audience measurement companies employing research techniques
  • Census based audience measurement
    • Publishers and
    • Ad Serving companies

The standard provides several requirements for panel based researchers, some to validate the methods employed, and many to provide methodological disclosure to users of the data.

For census based measurers, the standard addresses many of the more challenging issues to this form of “log based” measurement. For instance,

  1. How can the measurement be adjusted for cookie deletion?
  2. How can the measurement be adjusted for people who reject cookies vs. people who have visited the site for the first time?
  3. How can measurements be adjusted for multiple people using the same computer?
  4. How can measurements be adjusted for a person using multiple computers?

We observe that the above factors can affect different publishers to differing degrees. We also feel that, based on the circumstances, the techniques used to resolve the challenges can be different for different publishers.
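Purely as an illustration of the kinds of adjustments the questions above imply (this is not a method prescribed by the IAB standard), a census-based measurer might convert raw cookie counts into a people estimate with correction factors derived from panel or survey research. All of the factors below are invented.

```python
# Hypothetical adjustment of raw cookie counts into a people estimate.
# Every factor below is made up for illustration; in practice the factors
# would come from panel or survey research specific to the publisher.
raw_unique_cookies   = 11_000_000
cookies_per_person   = 1.6    # cookie deletion/rejection inflates cookies per person
persons_per_computer = 1.3    # several people sharing one browser deflates cookies
computers_per_person = 1.4    # one person on work + home machines inflates cookies

estimated_people = (raw_unique_cookies
                    / cookies_per_person
                    * persons_per_computer
                    / computers_per_person)
print(f"Estimated unique people: {estimated_people:,.0f}")
```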

At the time of this writing, comScore and Nielsen have submitted their systems for MRC audit. Some other companies are “investigating” audits. The audits are considered long term and continuing. We would hope that after completing the accreditation process the metrics produced by these companies might approach each other, but this remains to be seen.

For census (log) based systems, we know of no companies that are undergoing certification (or reconciliation for that matter) of their measurement techniques.

Forbes.com’s preferences for data used with print and web advertisers

Forbes combined its print and digital organizations into one marketing and sales team in January 2009. As marketers of Forbes branded content across all media platforms, the Forbes research team needs access to a wide variety of third-party data providers, from MRI and Mendelsohn for print to Nielsen's suite of digital audience measurement services, along with proprietary services like Omniture to measure and parse our weblog information. Forbes is also a big believer in Compete for both competitive and diagnostic data in the online space.

Forbes.com strongly favors Compete because it has an active panel of more than 2,000,000 consumers in the US. The major advantage of Compete's panel over other services is that it provides added granularity and accuracy for large, medium and emerging publishers; many of these sites are either not measured, or are not measured accurately, by the smaller comScore and Nielsen Online panels. As discussed above, Compete combines a large and representative proprietary metered panel with licensed clickstream data into a single, unified panel for online measurement purposes. This multi-sourced approach offers the additional benefit of calibrating each component data source against the broader set, providing an extra layer of normalization and quality to Compete's audience metrics. Further, Compete is the only audience measurement company that is in partnership with Omniture, making it substantially easier to reconcile the frequently observed differences between census web analytic data and panel-based measurement. The accuracy of Compete's approach will only strengthen when Compete introduces its own direct measurement capabilities in 2010. Finally, as the digital measurement arm of TNS and Kantar Media, Compete's position in the media research industry will continue to grow, both in the US and globally. Compete has already begun to integrate its services with sister companies such as TNS Media Intelligence and Dynamic Logic, and international expansion is underway with clickstream enablement of the existing TNS and Kantar consumer panels outside of the US.

Alexa and Hitwise, by contrast, have undisclosed samples. Compete also has data that can differentiate between USA and international usage and, most importantly, it obtains panelists from the largest variety of clickstream data sources, normalizes the data and provides people estimates using rigorous techniques.

Conclusions

Our main conclusion is that there are too many numbers and too many relationships. Planners will tend to use them directionally, and sellers will go with the one that puts them in the best light. Some of our other conclusions are:

  • Panel-based solutions are currently used for site selection and reach/frequency calculations; Nielsen and comScore have been trying to push the envelope on panels becoming the only source for planning and buying execution. This is not likely to happen in the immediate future due to some unusual measurement issues with the web:
    • Business use is difficult to capture,
    • Limitations of sample size for small sites,
    • Finite targets like teens, because the web is far more fragmented than TV or other media.
  • The methods offered by Quantcast and others hold promise but only as an adjunct to other sources; i.e. they can help estimate the sites that are unreported by Nielsen and comScore.
  • Because of the above, for the immediate future and beyond we'll be living in a three-sourced world for internet audience estimates: panel, server and direct.

This thinking is not far-fetched. We're starting to deal with the same issues in TV. Consider the detailed data from STBs (set-top box data, much like web log server and site files) that will provide a level of granularity never seen before in the medium. Like the internet, this rich information will be deployed to tactically optimize the TV medium through better understanding of creative holding power, the pacing and timing of advertising and, eventually, the impact of microtargeting on sales.

This all raises a simple question. “Which source has the best unique user definition?” To answer this question the industry would have to create a “gold standard” for calculating a unique number, then map the sources against it to see which one comes closest.

Based on our experience in the print industry following the 1983 Montreal Symposium, we could not do it for print. That situation (in retrospect, the disagreement between Through-the-Book and Recent Reading) seems much less complex than developing a “gold standard” for unique visitors or any other web metric.

The best hope we in the USA have is that the major internet research firms are being audited by the Media Rating Council. At best, they may be able to develop some form of reconciliation of the data; at worst, we will know the companies are fully transparent and have done what they said they were doing.

APPENDIX

Spectrum of measurement opportunities – Detail

We have two kinds of data: web analytics and audience measurement from panels and recall studies of the U.S. population. The web analytics sources are:

  • Web Analytics – site-centric server data
    • E.g. Omniture, Web Trends, Visual Sciences, Google Analytics. These systems provide unique visitor and page view metrics (a minimal counting sketch follows this list).
  • Hitwise – collects data from ISPs. The active monthly sample is undisclosed. It does distinguish US from international usage, and they attempt to normalize the data. They do not provide people estimates. They measure traffic and page views.
  • Alexa – collects information from users who have downloaded and installed Alexa toolbars. The active monthly sample is undisclosed and there is no normalization or people estimates. They measure traffic and page views.
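To illustrate what these site-centric systems are doing at their simplest (a minimal sketch, not the actual Omniture or Google Analytics implementation), unique visitors and page views can be derived from server log records keyed on a visitor cookie:

```python
# Minimal site-centric counting sketch: unique visitors and page views
# from a list of (cookie_id, url) log records. Real analytics products
# (Omniture, Google Analytics, etc.) are of course far more elaborate.
log = [
    ("cookie_a", "/home"),
    ("cookie_a", "/article/markets"),
    ("cookie_b", "/home"),
    ("cookie_a", "/home"),
]

page_views      = len(log)
unique_visitors = len({cookie for cookie, _ in log})
print(f"Page views: {page_views}, unique visitors: {unique_visitors}")  # 4 and 2
```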

Hybrid research (combined panel and site-centric data):

  • Quantcast – Focus is on reporting audience profiles (demographics, affinities and geographic skews). The methodology requires a publisher to tag their web site. Quantcast currently provides free demographic profiles and unique visitor counts. Their longer term vision is about enabling advertising inventory and targeting.

Audience measurement products (discussed individually below):

  • NetRatings (@plan and NetView)
  • MRI (NetView/MRI fusion)
  • comScore
  • Compete
  • Quantcast

Nielsen Net Ratings @Plan and NetView

@Plan

The Nielsen Net Ratings @Plan USA model is a self-administered survey using RDD sample recruitment. The annual sample base is 36,000, with 9,000 respondents added quarterly. They claim to offer in-depth demographic data and extensive consumer and B2B purchase behavior data.

@Plan is a study of 9,000 people per quarter on an annual sample of 36,000. They use RDD to contact people 18+ who have been online in the last 30 days, and give them a survey URL to complete the survey online. Panelists are paid $5, $10 or $15, depending on income, for a five-minute survey.

Every respondent is asked:

  • Demos
  • Online shopping
  • Online activity
  • Sites are asked in stages:
    • Step 1 is categories
    • Step 2 is site specific vetted by NetView on size. A site needs 500 to 750 uniques to qualify, thus satisfying the small web site demand.
    • Step 3 is Ecommerce

There is an A-to-Z single rotation. @Plan claims they have performed rotation tests and rotations were seen to confuse respondents.

@Plan has classified sites as primary and secondary. There are three levels of branding:

  • Parent (e.g. CNN, Forbes)
  • Primary (Yahoo, CNN Digital)
  • Secondary (CNN, Forbes.com, Forbes Auto)

@Plan measurements are based on 12 months of survey responses. A 12-month average of NetView research is employed to assure true “apples to apples” comparisons. The formula for @Plan research is:

@Plan site composition × NetView site reach % (18+) × @Plan universe = total number of people who visited the site and are within the target
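A worked example with invented inputs (these are not published @Plan or NetView values) shows how the pieces combine:

```python
# Hypothetical worked example of the @Plan calculation described above.
site_composition   = 0.40          # @Plan: share of the site's 18+ audience in the target (invented)
netview_site_reach = 0.035         # NetView: site reach among online adults 18+ (invented)
atplan_universe    = 191_000_000   # @Plan universe of online adults 18+ (invented)

target_visitors = site_composition * netview_site_reach * atplan_universe
print(f"People in target who visited the site: {target_visitors:,.0f}")  # 2,674,000
```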

The @Plan pluses are that the RDD recruitment is representative of the total universe and that it is the only independent source of in-depth demographic, consumer and B2B purchase behavior data. The @Plan minuses are that it is a self-administered survey relying on respondents' memory. In the US panel, three-quarters of the respondent sample has been in the panel at least six months; one-quarter of the sample base is 12 months old.

Nielsen NetView USA

The Nielsen Net Ratings NetView USA online panel consists of 29,000+ people recruited via RDD: 3,700 at work and 25,000+ at home. NetView uses PC metering technology to passively track actual panelist online and clickstream behavior and offers some basic demographic information. (They are also producing a hybrid product and need to determine whether the tagging will apply to both NetView and @Plan.)

Nielsen Net Ratings does not disclose the active monthly sample, defined as the number of unique panelists who have transmitted a clickstream event in the last calendar month. Nielsen does distinguish between international and US usage. The data is normalized and it does provide people estimates.

The NetView USA pluses are: a) the RDD recruitment ensures the sample is random and representative of the total universe; b) the PC meter technology captures actual respondent behavior. The major negative of NetView is a small “at-work” sample base, which results in under-representation of online business audiences. Also, most large businesses will not allow the metering software on their machines.

By the end of this year the U.S. NetView panel will be combined with the Mega panel. This will increase the sample size to 200,000, providing the following benefits:

  • More reported sites in total
  • More sites with full reporting, demographics, web traffic, referrals, etc.
  • Greater granularity in site demographics
  • Less potential “period to period” volatility in volume like page views and time spent

Data quality is maintained by using an RDD sample to calibrate the online panel, balancing the skews associated with online recruitment. Demographic balancing will be used to meet enumeration targets, and behavioral balancing will be used to account for heavier online usage. In the future, Nielsen (and comScore) will be providing measurement of all internet-based video viewing.
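Nielsen does not publish its weighting algorithm; the sketch below is only a generic post-stratification example of how demographic balancing against enumeration targets can work, with invented panel and universe counts.

```python
# Generic post-stratification sketch (not Nielsen's actual algorithm).
# Panel counts are balanced so each demographic cell matches an
# enumeration (universe) target; the resulting weights are then applied
# to the panel's behavioral data. All counts below are invented.
panel_counts    = {"18-34": 40_000, "35-54": 35_000, "55+": 25_000}
universe_counts = {"18-34": 70_000_000, "35-54": 75_000_000, "55+": 46_000_000}

weights = {cell: universe_counts[cell] / panel_counts[cell] for cell in panel_counts}
for cell, w in weights.items():
    print(f"{cell}: each panelist represents {w:,.0f} people")
```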

Nielsen combines panel and server data in a meaningful way using passive and active measurement techniques. Passively Nielsen collects and credits streaming video activity based on a stream of URL observations which do not require respondent participation.

The active measurement is obtained from publisher tagging of web content to collect highly accurate and comprehensive counts, providing granular detail on video consumption. This does require participation by web sites and broadcasters, since they are required to code the information themselves.

Nielsen is currently undergoing an audit by the Media Rating Council.

MRI/Nielsen fusion

Mediamark Research Inc. (National Media and Marketing Survey) and Nielsen//NetRatings (NetView) undertook a fusion of their respective media currency databases. The fusion of the two databases affords media researchers – planners, buyers, marketers, etc. – the opportunity to analyze the relationships among magazines (MRI), internet properties (Nielsen//NetRatings), other media such as television and radio and an extensive range of consumer behaviors (MRI) using a single data source.

Because of the complexity of the internet and related media consumption behaviors, Dynamic Segmentation Fusion was developed, drawing on features previously employed in both static and runtime fusion techniques. This whole process has been described numerous times in papers and presentations given by Risa Becker and James Collins of Mediamark Research Inc. and Mainak Mazumdar of Nielsen Net Ratings. One claim made for Fusion is that it could serve as the currency of the internet; as of this writing, we all feel that the industry has such confusing data that there is no internet currency.
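The actual Dynamic Segmentation Fusion is documented in the Becker, Collins and Mazumdar papers; the toy sketch below conveys only the basic idea of statistical matching, attaching web behavior from one survey's respondents to another's via shared “hook” variables. All names and values are invented.

```python
# Toy statistical-matching sketch (not the actual Dynamic Segmentation Fusion).
# Each MRI respondent is matched to the "closest" NetView panelist on shared
# hook variables (here just age and an income index) and inherits their web usage.
mri = [{"id": 1, "age": 34, "income": 60}, {"id": 2, "age": 52, "income": 110}]
netview = [
    {"age": 30, "income": 55, "sites": {"forbes.com"}},
    {"age": 55, "income": 120, "sites": {"wsj.com", "cnnmoney.com"}},
]

def distance(a, b):
    return abs(a["age"] - b["age"]) + abs(a["income"] - b["income"])

for recipient in mri:
    donor = min(netview, key=lambda d: distance(recipient, d))
    recipient["sites"] = donor["sites"]          # fuse the donor's web behavior
    print(recipient["id"], recipient["sites"])
```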

comScore

comScore Media Metrix USA recruits two million people worldwide. In the United States they use RDD to recruit a panel of 120,000 people who agree to accept comScore's clickstream tracking technology in their households. The panel is made up of 50,000 at work, 50,000 at home and 20,000 university. comScore uses PC metering technology to capture actual online clickstream behavior. It offers:

  • Actions (starts, stops, clicks, etc.)
  • Audience behaviors (shopping, commerce)

comScore deploys passive, non-invasive measurement in its collection technologies, projects the data to a universe of persons online, and continuously strives to identify, understand, quantify and eliminate bias to the extent possible. The following are the core steps of the comScore methodology:

  1. Establish the universe via enumeration
  2. Obtain respondents via online recruitment
  3. Collect data
  4. Identify the user
  5. Projection and bias elimination

For a complete methodology review please see the comScore Media Metrix web site.

comScore has also commenced tagging of web sites, which will be discussed in a paper to be given by Josh Chasin. Also, it should be noted that comScore is currently undergoing an audit by the Media Rating Council.

The comScore pluses are that the US panel is recruited via RDD. The negative is under-representation of business traffic, because most large businesses will not allow the metering software on their machines.

Compete

Compete provides audience measurement, website traffic, search marketing and engagement metrics based on the daily browsing activity of over two million US users. Compete applies an innovative and rigorous normalization methodology, leveraging scientific multi-dimensional scaling (by age, income, gender and geography) to ensure metrics are representative of the US population.

Compete operates the largest observed behavioral and attitudinal consumer panel in the industry. Compete’s online panel is comprised of consumers who have provided permission to have their internet clickstream behaviors and survey responses analyzed to help companies improve the effectiveness of their marketing programs. Compete’s privacy policy requires that consumers opt-in to participate in its panels, and that all consumer data remains anonymous.

Compete sources its panelist data in two ways: from proprietary panels that Compete maintains, and from licensed clickstream partnerships with complementary third parties. Compete's proprietary panelists are directly recruited and are invited to install Compete's online meter software on their computers. In addition, Compete has developed clickstream-sharing partnerships with Internet Service Providers and Application Service Providers which provide additional granularity to Compete's base of proprietary panelists. Compete's privacy policy ensures the permission and anonymity of consumers who participate in its clickstream-sharing partnerships.

This “panelist multi-sourcing” approach is unique in the industry and provides multiple benefits. The first benefit of panelist multi-sourcing is that it enables Compete to develop a massive consumer panel (5-10 times larger than other online panels in the industry). The size of Compete's panel provides granular insights on smaller websites, consumer segments and infrequent behaviors. Granularity is an extremely important panel characteristic as content on the web, and consumer behavior itself, is becoming increasingly fragmented.

The second benefit of panelist multi-sourcing is that it enables Compete to develop a highly diverse and more representative panel. Compete combines approximately twenty different panel sources in order to accurately represent the diversity of the internet browser population, providing Compete with a unique ability to represent fragmented audiences.

The third benefit of panelist multi-sourcing is that it enables Compete’s bias mitigation and audience calibration system. In this system, Compete uses the different panel sources to isolate sample bias in any one source, triangulating specific metrics across each of the panels to generate a high quality, normalized metric for the integrated master panel. This triangulation and calibration process is unique in the online measurement industry.
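Compete has not published the mechanics of this calibration system; the following is only a schematic illustration of the general idea of triangulating one metric across several panel sources against a pooled reference. All source names and figures are invented.

```python
# Schematic illustration of multi-source triangulation (not Compete's actual
# method). Each source reports a unique-visitor estimate for the same site;
# the pooled (size-weighted) estimate is used as the calibration reference.
sources = {                      # (panel size, reported unique visitors) - invented
    "proprietary_panel": (300_000, 9_800_000),
    "isp_clickstream":   (1_200_000, 11_900_000),
    "asp_clickstream":   (500_000, 10_600_000),
}

total_panelists = sum(size for size, _ in sources.values())
pooled = sum(size * uv for size, uv in sources.values()) / total_panelists

for name, (size, uv) in sources.items():
    bias = (uv - pooled) / pooled
    print(f"{name}: {uv:,} ({bias:+.1%} vs. pooled {pooled:,.0f})")
```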

Compete also provides audience estimates, as shown in Table 1 above, which makes it competitive with comScore and Nielsen Net Ratings. The very large size of the Compete panel allows for analysis of very small sites in the United States.

Quantcast

Quantcast has an entirely different business model than any of the other research companies. They employ tagging of a very large number of web sites to provide a measurement for advertising inventory and behavioral targeting. This tagging technique provides a source of web site audience data at no cost. The tagging also provides a form of message addressability that is more consistent for targeted audiences and standardizes delivery. Quantcast plans to share in the percentage increase in targeted ad revenue that the web sites can charge an advertiser.

Quantcast wishes to measure and organize media the way it is actually bought and sold. They use a combination of census data and panels. As of this year, they are measuring over 90,000 tagged web sites.

The 90,000 publishers have over 10 million media assets – sites, blogs, videos, widgets, campaigns, etc. Quantcast measures over 4 billion media consumption events every day, using 925,000 cookies in the U.S. to capture 200 million+ people. This census-level measurement offers “ground up” and “real time” views of addressable media activity, “broad and niche”, for global, regional and local affinities.

Quantcast is platform agnostic and tries to promote a transparency that is the same for both buyers and sellers. Their methodology has some major issues: cookie deletion is a huge problem, and Quantcast's accuracy is dependent upon combining panel and tagged data using some extremely complicated “black box” methods.