We always hear about how Google doesn’t like duplicate content, and will penalize a page that has the same content as another. There are plenty of articles on optimizing sites to avoid having duplicate content internally, and articles ranting about scrapers.
What I want to know is what Google thinks about duplicate content cases such as Reference.com or the Associated Press.
Head over to Reference.com, the encyclopedia branch of the Ask.com network of reference sites. Enter a search term. Now go over to Wikipedia and enter the same search term. They’re the same! Reference.com is pulling Wikipedia articles onto their site and throwing in a few ads. (How are they doing this? Does Wikipedia have some sort of API?) What does Google think of this?
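Wikipedia's content is freely licensed, so republishing it with attribution is permitted, and MediaWiki (the software behind Wikipedia) does expose a public web API at `/w/api.php`. As a minimal sketch of how a site *could* syndicate articles this way (Reference.com's actual method is unknown, and the function names here are my own), you can request a plain-text article extract like so:

```python
# Sketch: fetching a plain-text article intro via the MediaWiki action API
# that Wikipedia exposes. Uses only the standard library.
import json
import urllib.parse
import urllib.request

API_ENDPOINT = "https://en.wikipedia.org/w/api.php"


def build_extract_url(title):
    """Build an API URL asking for the intro of `title` as plain text."""
    params = {
        "action": "query",
        "format": "json",
        "prop": "extracts",    # TextExtracts extension, enabled on Wikipedia
        "exintro": "1",        # intro section only
        "explaintext": "1",    # strip the HTML markup
        "titles": title,
    }
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)


def fetch_extract(title):
    """Fetch and return the article intro (requires network access)."""
    req = urllib.request.Request(
        build_extract_url(title),
        headers={"User-Agent": "example-script/0.1"},  # Wikipedia asks for a UA
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    pages = data["query"]["pages"]
    # The API keys results by page ID; take the first (and only) page.
    return next(iter(pages.values())).get("extract", "")
```

Wikipedia also publishes full database dumps, so a scraper-ish site wouldn't even need to hit the live API.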


People like to measure and compare things. Metrics affect decisions, like whether someone will buy an ad on your site.
Start out on an overview report such as “Top Content,” then click one of the entries. Once you land on the stats page for that individual blog post, take a look at the little dropdown menu labeled “Segment.”