Apr 23, 2009 by Matt | Posted in General
Google has been rolling out changes to the way their referrer strings are structured. They are moving from a simple URL that shows the search query to a more complex one with some extra information that may be valuable.
Starting this week, you may start seeing a new referring URL format for visitors coming from Google search result pages. Up to now, the usual referrer for clicks on search results for the term “flowers”, for example, would be something like this:
Now you will start seeing some referrer strings that look like this:
Patrick Altoft of BlogStorm has noticed an interesting addition to the string. He thinks that the
cd=7 part stands for “click detail 7,” and indicates the position your page held in the search results. So if someone clicked through from Google to your site, your analytics software could collect the referrer string and determine not just what the user searched for when they found your site, but where the page ranked!
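To sketch how analytics software might use this, here is a minimal example of pulling the search term and the `cd` value out of a referrer string with Python's standard library. The referrer below is a hypothetical example constructed from the format described above; the exact parameters Google sends may vary.

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical referrer in the new format; real strings may carry
# additional parameters.
referrer = ("http://www.google.com/url?sa=t&source=web&ct=res&cd=7"
            "&q=flowers&url=http%3A%2F%2Fwww.example.org%2F")

params = parse_qs(urlparse(referrer).query)
search_term = params.get("q", [""])[0]     # what the visitor searched for
ranking = int(params.get("cd", ["0"])[0])  # the result position they clicked

print(search_term, ranking)  # flowers 7
```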
This is certainly valuable information for search engine optimizers and for makers of traffic statistics software.
Feb 16, 2009 by Matt | Posted in Coding
Finally, our duplicate content worries are over! Google now supports a new method to specify a canonical URL for your page. This “hint” suggests that Google use this page as the original, and ignore duplicates elsewhere on your domain.
You simply add a fully W3C-compliant <link> tag to your header and have it point to the permalink for a given post. Google will most likely rank that page in its results and ignore the others. That should help your rankings overall.
<link rel="canonical" href="http://www.example.org/your/permalink/page/" />
Obviously you’ll want some way to integrate this with your CMS. Some will want to roll their own solution, but if not, there are already prefab options available.
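If you do roll your own, the core of it is simple: normalize each page's URL down to one permalink and emit the tag. Here is a minimal sketch in Python (the function name and example URL are my own, not from any particular CMS), which strips query strings and fragments so tracking-parameter variants of the same page all point back to a single canonical URL:

```python
from urllib.parse import urlparse, urlunparse

def canonical_link_tag(url):
    """Build a canonical <link> tag for a permalink, dropping the query
    string and fragment so variants like ?ref=feed or #comments all
    resolve to one URL."""
    parts = urlparse(url)
    clean = urlunparse((parts.scheme, parts.netloc, parts.path, "", "", ""))
    return '<link rel="canonical" href="%s" />' % clean

tag = canonical_link_tag("http://www.example.org/your/permalink/page/?ref=feed#comments")
print(tag)
```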
Feb 2, 2009 by Matt | Posted in WordPress
Did you know there is an easy way to dramatically improve your WordPress blog’s search rankings? Try installing the All in One SEO Pack, one of the most popular WordPress plugins.
- Prevents duplicate content issues
- Rewrites page titles to be more search engine-friendly.
- Generates dynamic meta tags for permalink pages. A short snippet from the article is used for the meta description, and the post’s tags are used for the keywords.
- Allows you to manually override the meta tags and title tag on a post-by-post basis as well.
These things can really help out your rankings. The <title> tag is a major point of focus for Google and other search engines, and it’s the part of a result that searchers pay the most attention to. It pays to have good titles.
Meta tags may not have the weight they once did, but they are still considered, and are worth paying attention to all the same.
As with all matters of SEO, your mileage will vary. This plugin will definitely do more good than harm though.
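To make the plugin's output concrete, here is roughly the sort of markup it might emit for a hypothetical post titled “Watering Roses” tagged “flowers” and “gardening.” This is illustrative only, not the plugin's actual output:

```html
<head>
  <title>Watering Roses | Example Gardening Blog</title>
  <meta name="description" content="A short snippet pulled from the opening of the post..." />
  <meta name="keywords" content="flowers, gardening" />
</head>
```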
Oct 17, 2008 by Matt | Posted in General
Google recently added a useful new feature to their Webmaster Central portal, which Google employee Matt Cutts says can help you get some extra links. It allows you to see dead URLs that other sites are linking to on your site (i.e. pages that don’t exist and return 404 errors). Essentially, after getting a list of those URLs, you can set up some good (and search engine optimized) content at those URLs. Voila, extra links.
Let me back up and give you a little history. When someone comes to your site’s webserver and asks for a page that doesn’t exist, like http://www.mattcutts.com/asdfasdfasdf , most web servers are configured to return an HTTP status code of 404, which means that the page was “Not Found.” If someone links to a page on your site that doesn’t exist, most webservers give a pretty sucky experience: visitors usually land on a useless page, and search engines might not give you full credit for those 404 errors.
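If your server runs Apache, one common way to soften that default experience is to point the server at a friendlier page of your own. This is a sketch assuming an .htaccess-enabled setup and a /custom-404.php page you create yourself:

```apache
# .htaccess — serve your own helpful page instead of the bare server default.
# The server still returns the 404 status code, so search engines aren't misled.
ErrorDocument 404 /custom-404.php
```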
Now Google’s webmaster portal lets you see who is linking to your 404 pages. Once you register your site, click on Diagnostics, then Web crawl, and select “Not found”.
Read Matt Cutts’ full post.
Also, BloggingTips.com has a more in-depth post on making use of the tool.
Oct 12, 2008 by Matt | Posted in General
We always hear about how Google doesn’t like duplicate content, and will penalize a page that has the same content as another. There are plenty of articles on optimizing sites to avoid having duplicate content internally, and articles ranting about scrapers.
What I want to know is what Google thinks about duplicate content cases such as Reference.com or the Associated Press.
Head over to Reference.com, the encyclopedia branch of the Ask.com network of reference sites. Enter a search term. Now go over to Wikipedia and enter the same search term. They’re the same! Reference.com is pulling Wikipedia articles onto their site and throwing in a few ads. (How are they doing this? Does Wikipedia have some sort of API?) What does Google think of this?
Continue reading →
Aug 24, 2008 by Matt | Posted in Design
O’Reilly puts out some really good tech books. So I grabbed Andrew B. King’s Website Optimization when I saw it at the library a few days ago. It was pretty good, though not my favorite of their books. I enjoy the Hacks series (PHP Hacks, Podcasting Hacks, etc.) more, but they have some other good books too.
Website Optimization is worth a read if you’re trying to get more from your website. The book covers several aspects of optimization: search engines, loading times, conversion rates, and a little bit on accessibility. There is a heavy emphasis on Search Engine Optimization, of course.
It’s a pretty good book, and is very informative, but I have to disagree with some of the SEO advice. The book seems to promote being stingy with links to external sites in an effort to hoard PageRank, linking reciprocally, and using the nofollow attribute excessively. Then the book goes and tells you that blogs are a good way to get more inbound links.
I can tell you that an attitude like that regarding links will get you nowhere fast. If you want to get links, you must give them first. Link to things that you think will be of interest to your users. The sites you link to will then learn about your site when they find some traffic coming in from it. Nofollow shouldn’t be used to cripple links you place on your site either. It should be reserved for things like blog commenters’ posted URLs, which weren’t added by you and which you may not want to pass PageRank to. As for reciprocal linking, don’t bother. Google thinks reciprocal linking schemes are generally of little interest to the end user, and therefore discounts them when ranking pages.
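For example, a commenter's link might be marked up like this (the URL and name are hypothetical):

```html
<!-- A link supplied by a commenter, not vouched for by the site owner:
     rel="nofollow" tells search engines not to pass PageRank through it. -->
<a href="http://commenter.example.org/" rel="nofollow">Jane's Blog</a>
```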
Other than my minor complaints about some of the linking advice given, it’s a pretty good book.
Mar 14, 2008 by Matt | Posted in Featured, Marketing
When it comes to Search Engine Optimization, most bloggers fall into three groups.
- SEO Maniacs – They’re obsessed with improving their PageRank, and driving up their rankings using any means possible. A.K.A. John Chow before Google caught on.
- The SEO Disinclined – The sort who just blog and ignore the SEO aspect.
- The SEO Neutral – Serious bloggers who do a little SEO, but don’t focus on it that much.
No matter which group you fall into, consider implementing the following tips.
Continue reading →