Tag Archives: Google

Google and NoFollow

There has been some recent buzz about Google changing how they handle the NoFollow attribute, and how it’s supposedly the end of the world. SEOs are running around in circles, and there’s talk of moving to JavaScript and iframe commenting systems so Google can’t read them.

First of all, slow down. Google can read iframes and JavaScript. Google is very likely already differentiating programmatically between comments and page content anyway, and tweaking their rankings accordingly. PageRank is not a cut-and-dried “links in minus links out equals rank” formula. It’s a complex system involving a lot of calculations on Google’s part. I imagine they value links differently depending on what sort of link each one is.

Second, stop worrying about it. Google’s business is to provide good results. If everyone tries to cheat the system with a bunch of silly schemes (such as PageRank sculpting and JavaScript links), Google will just change things again. If everyone plays by the rules, and doesn’t worry about their ranking too much, it all works out in the end.

Google Analytics API Launched

Google has launched an API for Google Analytics. From what I’ve seen so far, it’s a fairly large XML API (with OAuth and basic HTTP authentication support) that allows you to programmatically gain read-only access to virtually any data that the main Google Analytics site can display.
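To give a feel for the request format, here is a minimal sketch of assembling a report query. The endpoint and parameter names follow the Data Export API’s documented feed format; the table ID, metric, and dates are made-up examples, and the function name is mine. Authentication (OAuth or basic HTTP auth) is omitted.

```python
from urllib.parse import urlencode

# Data Export API feed endpoint (read-only).
BASE = "https://www.google.com/analytics/feeds/data"

def build_report_url(table_id, metrics, dimensions, start, end):
    """Assemble a report-query URL for the Analytics Data Export API.

    Only the query-string format is shown here; the authenticated
    HTTP request itself is left out.
    """
    return BASE + "?" + urlencode({
        "ids": table_id,           # e.g. "ga:12345" -- your profile's table ID
        "metrics": metrics,        # e.g. "ga:visits"
        "dimensions": dimensions,  # e.g. "ga:date"
        "start-date": start,
        "end-date": end,
    })

print(build_report_url("ga:12345", "ga:visits", "ga:date",
                       "2009-04-01", "2009-04-30"))
```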

The API will allow developers to extend Google Analytics in new and creative ways that benefit developers, organizations and end users. Large organizations and agencies now have a standardized platform for integrating Analytics data with their own business data. Developers can integrate Google Analytics into their existing products and create standalone applications that they sell. Users could see snapshots of their Analytics data in developer created dashboards and gadgets.

It looks a bit technical, and I haven’t had a lot of time to look at it yet, but you can read all of the documentation over at the Google Code page. There are no hard API limits, unlike with the Twitter API, but Google reserves the right to block excessive requests, as is typical.

Hopefully we will be seeing some new desktop/iPhone/etc applications for keeping up with our statistics.

Google Adds Ranking Data to Referrer String

Google has been rolling out changes to the way their referrer strings are structured. They are moving from a simple URL that shows the search query to a more complex one with some extra information that may be valuable.

Starting this week, you may start seeing a new referring URL format for visitors coming from Google search result pages. Up to now, the usual referrer for clicks on search results for the term “flowers”, for example, would be something like this:

http://www.google.com/search?hl=en&q=flowers

Now you will start seeing some referrer strings that look like this:

http://www.google.com/url?sa=t&source=web&ct=res&cd=7&url=http%3A%2F%2Fwww.example.com%2Fmypage.htm&q=flowers
Patrick Altoft of BlogStorm has noticed an interesting addition to the string. He thinks that the cd=7 part stands for “click detail 7,” and is the ranking for your page. So if someone clicked through from Google to your site, your analytics software could parse the referrer string and determine not just what the user searched for when they found your site, but where the page ranked!
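Pulling those values out of a referrer is trivial. Here is a minimal sketch in Python; the referrer below is an illustrative example, and the function name is mine:

```python
from urllib.parse import urlparse, parse_qs

def parse_google_referrer(referrer):
    """Extract the search query (q) and, if present, the result
    rank (cd) from a Google referrer URL.

    Returns (query, rank_or_None).
    """
    params = parse_qs(urlparse(referrer).query)
    query = params.get("q", [None])[0]
    cd = params.get("cd", [None])[0]
    return query, int(cd) if cd else None

ref = ("http://www.google.com/url?sa=t&source=web&ct=res&cd=7"
       "&url=http%3A%2F%2Fwww.example.com%2Fmypage.htm&q=flowers")
print(parse_google_referrer(ref))  # ('flowers', 7)
```

Older-style referrers without a cd parameter simply come back with a rank of None, so the same function handles both formats.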

This is certainly valuable information for search engine optimization, and for makers of traffic statistics software.

Google Axes AdSense “Video Units”

If you’ve used AdSense within the last few years, you may have heard of their Video Units. They’re finally being discontinued, and frankly I’m not surprised.

Video Units always seemed strange to me. Basically they would scan your pages for keywords like usual ad blocks, and display text ads as usual, but the ads would be displayed along with YouTube videos chosen based on the same keywords. So you end up with automatically chosen videos being displayed on your site, along with some ads.

I’ve always thought of video as content, not a supplement to advertising, and I like to be able to control what content goes on my site. Virtually random videos seem like an odd idea to me.

Plus, wouldn’t that mean that you (and Google, of course) are making money off someone else’s videos, while the creators don’t get any compensation? That hardly seems fair. (Warner Brothers, or some other Hollywood company, certainly wouldn’t think so if their clips came up in the units now and again…)

Google Canonical URLs

Finally, our duplicate content worries are over! Google now supports a new method to specify a canonical URL for your page. This “hint” suggests that Google treat this page as the original, and ignore duplicates elsewhere on your domain.

You simply add the W3C-compliant <link> tag to your header, pointing to the permalink for a given post. Google will most likely rank that page in their results, and ignore the others. That should help your overall ranking.

<link rel="canonical" href="http://www.example.org/your/permalink/page/" />

Obviously you’ll want some way to integrate this with your CMS. Some will want to roll their own solution, but if not, there are already prefab options available.
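For a rough idea of what such an integration boils down to, here’s a sketch: all a CMS plugin really has to do is render the tag from the post’s permalink (the function name here is hypothetical).

```python
import html

def canonical_tag(permalink):
    """Render a rel="canonical" <link> tag for a post's permalink,
    escaping the URL so it is safe inside an HTML attribute."""
    return '<link rel="canonical" href="%s" />' % html.escape(permalink, quote=True)

print(canonical_tag("http://www.example.org/your/permalink/page/"))
# → <link rel="canonical" href="http://www.example.org/your/permalink/page/" />
```

The important part is that every duplicate view of the post (paged comments, print versions, tracking-parameter variants) emits the same permalink in this tag.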

Google Responds to Criticism of FeedBurner Migration

As you may already know, Google has set a deadline for you to migrate your feeds over to their new system tied to your Google Account. The move hasn’t been as smooth as it could have been so far, and there has been much criticism over it. I’ve certainly done my fair share of complaining. (My stats were at 20% for several days, and 1and1 complains that the CNAME for MyBrand is too long.)

Mashable was granted an interview with Steve Olechowski, co-founder of FeedBurner turned Google employee. The Q&A session ended with fifteen answers to frequently asked questions about the transition. Sadly, many of my questions remain unanswered so far.

A very large percentage of the blogosphere uses FeedBurner to cache their feeds, so this is a topic to watch. The service fits well into Google’s business, and should open up some interesting opportunities in the future, quite possibly including the widespread adoption of ads in RSS feeds.

Read the full Q&A at Mashable.com.

Transfer Your FeedBurner Feed

If you remember, a couple of years ago, way back in the summer of 2007, Google bought the venerable feed mirror and statistics company FeedBurner. The Big G has since been slowly migrating everyone’s accounts over to their own servers, moving away from the old FeedBurner ones.

Since Google’s acquisition of FeedBurner, Inc. on June 1, 2007, we have been moving the FeedBurner application to Google hardware, software, and data centers. This allows the application to scale and perform like most Google applications and integrate easily with other Google platforms. It also means more reliability in delivering your content, analytics, and monetization, as well as a more secure and consistent experience for your users.

In order to provide an integrated experience and to support the new features we have planned for our feed platform, as well as to improve security, it is necessary for logins to be handled via a Google Account.

Google has set a deadline: you have until February 28, 2009 to transfer your feeds. Pro Blog Design has a tutorial on how to do so.

Also, check out Google’s FAQ page for further information.

Google AJAX Libraries API

Do you use a JavaScript library — such as jQuery, Prototype, or MooTools — on one (or more) of your websites? That probably adds a good 18–120 kilobytes to your pages’ total size, lengthening your users’ download times.

Now, how many websites use that same framework? How many make use of jQuery, for example? A lot. That means any given user could be downloading the same JavaScript files multiple times in a day, as different sites require them. What a waste of time and bandwidth.

Google has an interesting solution. They host multiple versions of several major JavaScript libraries on their servers for web developers to take advantage of. For one, their servers are quick and have wide pipes, allowing for very fast downloads. The real benefit, though, is caching.

If a user visits several sites that reference jQuery (or another library) from Google, their browser caches the file the first time and reuses the local copy on the other sites. Because every site references the same file on ajax.googleapis.com instead of its own domain, the browser recognizes that it has already downloaded the file and skips the request.
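Using it is as simple as swapping your local script reference for Google’s copy, along these lines (the version number here is just an example; check Google’s documentation for the currently hosted versions):

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>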


AdSense For Domains

It was only a matter of time.

Domainers have long put AdSense blocks on their parked domains, in an attempt to make some extra cash off the higher-traffic ones. This practice is technically against the AdSense terms of service, and isn’t really fair to the advertisers, but Google had not done anything about it. After all, they get a cut of the deal.

Now Google has made available, to all users of the AdSense network in North America (other continents to follow), AdSense for Domains, a “legitimate” way to monetize parked domains.


Google Tells You How to Get Free Links

Google recently added a useful new feature to their Webmaster Central portal, which Google employee Matt Cutts says can help you get some extra links. It allows you to see dead URLs that sites are linking to on your site (i.e., pages that don’t exist and return 404 errors). Essentially, after getting a list of those URLs, you can set up some good (and search engine optimized) content at those locations. Voilà, extra links.
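If the dead URLs are just misspellings or moved pages, a 301 redirect to the right content captures the link credit too. A minimal sketch of the idea, assuming a hand-maintained URL map (the map entries and function name are hypothetical):

```python
# Hypothetical map from dead URLs (collected from Webmaster Central's
# "Not found" report) to the live pages that should get the links.
REDIRECTS = {
    "/2008/03/gogle-adsense-tips": "/2008/03/google-adsense-tips",
}

def resolve(path):
    """Return (status, location) for a request: a 301 pointing at the
    mapped page if we recognize the dead URL, otherwise a plain 404."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 404, None

print(resolve("/2008/03/gogle-adsense-tips"))  # (301, '/2008/03/google-adsense-tips')
```

In practice you’d wire this into your server config (e.g. rewrite rules) rather than application code, but the lookup is the same either way.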

Let me back up and give you a little history. When someone comes to your site’s webserver and asks for a page that doesn’t exist, like http://www.mattcutts.com/asdfasdfasdf , most web servers are configured to return an HTTP status code of 404, which means that the page was “Not Found.” If someone links to a page on your site that doesn’t exist, most webservers give a pretty sucky experience: visitors usually land on a pretty useless page, and search engines might not give you full credit for those 404 errors.

Now Google’s webmaster portal lets you see who is linking to your 404 pages. Once you register your site, click on Diagnostics, then Web crawl, and select “Not found”.

Read Matt Cutts’ full post.

Also, BloggingTips.com has a more in-depth post on making use of the tool.