Responsive Design Best Practice

A wee while ago I wanted to create a new single-page landing site for one of my online properties. Just a logo, company name and contact details. Nothing more, nothing less.

Now, because I’m a really cool guy and I’m down with all the latest jargon and web stuff I decided it would be a responsive website.

Not so much responsive in the sense that I’ll ever respond to inquiries from there any more than I did when the site was just a logo with no contact details, you understand!

This is responsive in the sense that all the hip crowd using mobile devices to access the site will get a nice experience and not have to scroll or zoom around to see the three lines of text on the site.

Responsive design is not a new idea, and I’m certainly not going to claim to know much more about the dark art of CSS3 @media rules than someone who looks up the term on Wikipedia.

In fact, I’d like to encourage everyone else to stop claiming they’re experts as well!

I thought about buying a single-page website template and slapping my info on it but because that would involve parting with money for something I can do myself I decided that roll-your-own was a better plan. And it’s just one page, right?

I figured someone would have a good guide on the rules for responsive design so I put ten cents in the Google roulette machine and crafted a search for responsive design best practice.

If you search for ‘best practice responsive design’ Google says there are about 7,440,000 results which took a grand total of 0.33 seconds to dig out of the dusty corners of the web.

Without reading them I’m guessing that there are probably about 1,488,000 unique and differing opinions to be had in those results about what in fact the best practice is.

And I’m being pretty generous there, allowing for one fifth of all the results to actually be something new and interesting.

Another problem I found was that a number of what I thought were reputable sites were quoting other similarly well-respected sites whose @media statements had bugs and simply didn’t work on the small range of devices I had to test on.

So the quoted best practice was actually pretty poor practice if you used an iPhone 4S or a Samsung Galaxy 3, neither of which liked the overlapping @media specs defined in some CSS that I think originally came from Smashing Magazine, although so many people quote it that I have no idea where it originated!

So, I suppose, seeing as I link-baited with the title, you’re wondering what my best practice advice is for responsive design? Here goes:

Get off your adjustable office chair and learn how CSS works. Understand what the @media max-width, min-width and pixel-ratio conditions actually do, and test on a good sample of devices!
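
Since I’m dishing out advice, here’s a minimal sketch of the sort of non-overlapping @media rules I mean. The breakpoints, class name and file name are illustrative only, not the one true set of values:

    /* Base styles serve full-size screens; override for smaller ones below. */

    /* Tablets and small browser windows: 481px to 768px wide */
    @media screen and (min-width: 481px) and (max-width: 768px) {
        body { font-size: 90%; }
    }

    /* Phones in portrait: up to 480px wide */
    @media screen and (max-width: 480px) {
        body { font-size: 80%; }
    }

    /* High-density displays such as the iPhone 4S retina screen */
    @media screen and (-webkit-min-device-pixel-ratio: 2),
           screen and (min-resolution: 192dpi) {
        .logo { background-image: url('logo@2x.png'); }
    }

Note the width ranges don’t overlap, which is exactly what the broken examples I found got wrong.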

Authorship, Small words and little tags that do good

How’s that for a confused, or at least confusing article title?

I posted a blog article last week about some DIY stuff which wasn’t particularly noteworthy and truth be known I just wanted to post something to see if I could test a fix for the authorship tags on the site.

Back when authorship was just a toddler in the Google suite of obscure and not-so-obscure tags, I went with some advice from somewhere: put a link with ‘rel=author’ on every blog post page pointing to my profile page, slap a link on the profile page to my Google+ profile, and I’d be done.
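
For the record, the two-step markup from that advice looked something like this. The URLs are placeholders, and if memory serves the return hop from the profile page was conventionally a ‘rel=me’ link:

    <!-- On every blog post page: link to the site's profile page -->
    <a rel="author" href="http://www.example.com/about.html">About the author</a>

    <!-- On the profile page: link out to the Google+ profile (placeholder ID) -->
    <a rel="me" href="https://plus.google.com/112233445566778899000">Find me on Google+</a>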

That worked for about, well… I’m not entirely sure it ever did. For exact-match searches on entire passages and phrases from my posts I’d sometimes see my face staring back at me from the search results, but mostly nothing changed.

At work however we have a blog contributor who is consistently showing up as his miniature self smiling beside search results for his posts even though none of the requisite link tags are in place.

We have no links to his Google+ profile anywhere on the site and the only part of the authorship puzzle that’s been met is the contributor entry on his Google plus page.

I’m not going to go into any detail about how to make authorship work; there are a lot of good articles around the web on how that can be done, and Google’s own help pages are as good as any now that it’s well established.

After the page was indexed fully I ran a range of different test searches which told me that authorship was working along with confirming a bunch of other odds and sods that should be common knowledge if you’re in the online marketing game.

What I found interesting, though, is how subtle changes to the search phrase determined whether authorship showed up in the results. Equally, I discovered some small words that made a difference when I normally wouldn’t expect them to.

So, without further delay, a pile of search results screenshots with comments for each…

[Screenshot 130825-01]

First up we have a mixed-up phrase from the blog post, and I’m the top result. That’s mission one achieved: the page is indexed and we can move on to testing some other ideas.

As a group of keywords, ‘portable risks side note’ is not that stunning, but you can see immediately how a less-than-ethical SEO company might convince a customer that a set of keywords is critical and get a rank for that combo under the guise of long-tail search. Followed quickly by the bill and a rapid exit to the nearest hills.

Long story which I can’t really post about, but I recently helped a friend with exactly that problem who’d paid handsomely for an SEO consultant to get their pages to rank well for a totally useless set of keywords.

This stuff is not rocket science, but if you want to be the top hit for ‘used car’ that is a whole other can of worms requiring a lot more effort; the content I’m using for these test searches is not really representative of the real world.

An interesting thing to note about this search result is that the snippet of text is not the meta description for the page.

SEO tidbit #1 from this blog post: no matter how much time you spend crafting the description tag, it may not show up in the SERPs these days if the search terms don’t match the description.
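
For anyone newer to this game, the tag in question lives in the page head and looks like this (the text here is made up):

    <head>
        <!-- The snippet Google may, or may not, choose to show in the results -->
        <meta name="description"
              content="A lovingly crafted summary that may never see the light of day in the SERPs.">
    </head>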

Oh, and the authorship worked. Who’s that attractive looking chap beside the search result?

[Screenshot 130825-02]

I did a bit of messing about with combinations of keywords and found that this one still gave a second-place result but dropped my authorship. Again the search phrase itself is pretty meaningless, but it highlights something about authorship.

If Google doesn’t think the author of the article matters to a particular set of search results, you won’t get the extra credibility on the results page. That means if you’re struggling with testing the markup, pay more attention to what you see in Google’s structured data testing tool and what your content is about, rather than just trying to get your photo up for whatever you think the page should rank for.

Note that the snippet is different again. Still nothing from the description tag. Instead this time we have a mash-up from two paragraphs highlighting where the algorithm says the keywords were found within the body of the content.

[Screenshot 130825-03]

A simple change here: removing ‘on’ finds 70,000 or so more results in the index but doesn’t change the top few. The fact is that small words sometimes don’t matter, however much your English teacher might have insisted otherwise.

Clearly if you were prepared to click a few more pages into the results you’d see a difference though, so let’s try something different.

[Screenshot 130825-04]

Same words with the ‘on’ back in the mix with a different order and we’ve dropped a couple of hundred thousand potential results even though the top three results have not changed.

So, the order of small words does matter. It would seem that the combinations of ‘on side’, ‘on note’ and ‘note on side’ are probably more common in content than ‘on portable’.

I’m obviously mincing my words, almost literally, to make a point here.

When in the English language you write, order important it is. Unless you’re Yoda, that is.

Google have long said that well-crafted content is important, and phrasing that is common to your target audience is going to rank better than the best writer’s missive or the random words on a page that used to be common in the AltaVista days.

As a total aside, if you’re interested in SEO and don’t know what I mean by AltaVista days, you missed out on a golden age for SEO consultants that allowed people to do all sorts of things that would get them kicked from the index of even the slackest engine now. Ahhhh, those were the days.

[Screenshot 130825-05]

Another shuffle of keywords and the third result has vanished down to about position six although cbsnews and I are still batting pretty well for some obscure text.

‘Notes on’ in this case is what starts the page title tag and the first H1 on the page for the result that’s popped up to number three on the hit list.

That right there is old-school SEO advice. Have relevant title tags and heading structures with text people will search for. If your page is about tomatoes, having the page title ‘Shoe leather replacements for tomatoes’ and the same text in the first H1 tag will probably get you more search traffic for shoe leather than for tomatoes.
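
In markup terms that old-school advice boils down to something like this, with the text people will actually search for carried in both the title and the first heading (the tomato page is obviously invented):

    <head>
        <title>Growing Tomatoes in Raised Beds</title>
    </head>
    <body>
        <!-- The first H1 echoes the title tag -->
        <h1>Growing Tomatoes in Raised Beds</h1>
        <!-- ...the actual tomato-growing content... -->
    </body>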

[Screenshot 130825-06]

One more shuffle of the keywords, this time into a more correctly constructed phrase from an English point of view. It’s got four of the five words in the same order as my post, so the dashing fella on the left of the search result makes a sudden reappearance.

So even though this is not an exact match to the text, the algorithm calculates that the order makes better sense and is more likely to be well-structured content, which deserves that little bit of extra attention the authorship markup gives.

cbsnews.com is still there but let’s face it… if my site had as much link juice as a major news site I’d have Google AdSense on here and be counting the sports cars parked in the garage of my French Riviera holiday home, not writing this for entertainment.

The osha.gov site appearing there is interesting, but again .gov sites have credibility oozing from their TLD so nothing surprises me when I see them showing up in search results.

[Screenshot 130825-08]

Now for a little image searching. ‘Testing FT-857’ seems like a pretty good image search term if you’re into amateur radio and want to find out about the FT-857.

The image is result four, which is a good slot, and your SEO handbook will tell you that image names are all-important for such things. And the alt tags. Don’t forget the alt tags.

In this case the alt tag is indeed ‘Testing on the FT-857’ and searching for exactly that will bring the image up to the top hit, not the lowly number four slot.

What about that image name? It’s actually ‘130818-171341-0001.jpg’.

Correct and contextual naming of images is a good idea, but don’t forget the auxiliary tags around images. The only place FT-857 appeared on my entire website before this post was in the alt and title tags for that image.
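
For the curious, the markup around that image is roughly this shape. The file name and alt text are the real ones from the post; the directory layout is a guess, and the title text is abbreviated (the real one also contains the word ‘gel’, which matters a couple of screenshots from now):

    <!-- The file name says nothing useful, so the alt and title attributes do the work -->
    <a href="/images/130818-171341-0001.jpg" title="Testing the FT-857 on a gel cell">
        <img src="/images/130818-171341-0001-small.jpg"
             alt="Testing on the FT-857">
    </a>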

[Screenshot 130825-09]

Better than that, this search gets me the top hit for a combination of keywords from the page plus FT-857, which only appears in the alt tag for the image and the title attribute of the link to the popup copy of the image.

If I’d bothered to name the image in a useful fashion I could probably rank for some useful phrases as well as that one. This is basic stuff, but day in, day out I see SEO advice about all sorts of other things. Getting the basics right here is going to get me traffic from people testing FT-857 radios with power pole connectors.

[Screenshot 130825-10]

One last screenshot to round out the observations for the evening. An image search for ‘gel FT-857’ showing a top hit for my photo. The word ‘gel’ is not in the alt tag for the image, but it is in the title attribute for the link to the popup.

If you hang plain-English title attributes on links to images and content, you can improve their positioning for keywords and phrases in the linked content, or, as in this case, gain a ranking for a term that does not exist anywhere in the content apart from the tag.

By way of a disclaimer and for the sake of completeness: I did these searches from a New Zealand IP on www.google.co.nz, using Google Chrome in incognito mode to avoid search history slanting the results. Your results may vary if you’re in a different country or have substantial search history for similar terms or sites. Some of the searches were done on my Ubuntu desktop and the balance on a Windows 7 laptop, because I happen to be sitting in front of the telly pretending to watch something, so the fonts look slightly different in some of the screenshots.

(I did do a bit of testing from a US IP using google.com in incognito mode and got very similar results; although the SERPs were slightly different, the observations would be the same. If you’re reading this more than a week after I wrote it the search results will probably have changed; the web is a dynamic place.)

Website Indexation on Google, Part One

The website at work has many issues, and one of the slightly vexing ones was that a site: search on Google only showed 540-odd of the 1100 pages in our sitemap. Google Webmaster Tools was showing 770 pages indexed, but that still left 330-odd pages missing in action.

I’m a realist and understand that Google will never index everything you offer up, but we also have the paid version of Google Site Search and it can’t find those pages either, which is a little more annoying as it means visitors who are already on our site might not be able to find something.

The real problem with partial indexation is knowing where to start. What exactly is it that Google hasn’t indexed? How do you get the all-seeing Google to tell you which of the 1100 pages are included, or not, in organic search results?

I spent a few meaningless hours on the Google webmaster forums plus a few more even less meaningful hours scraping through various blog posts and SEO sites which led me to the conclusion that either I was searching for the wrong thing, or there was no good answer.

At the tail end of the process I posted a question on the Facebook page for the SEO101 podcast over at webmasterradio.fm, which incidentally I recommend as a great source of general SEO/SEM information.

After a bit of a delay for the US Labor Day holiday the podcast was out, and I listened with great interest in the car on the way to work. Lots of good suggestions on why a page might not be indexed, but no obvious gem to answer my original question: how to tell what is and what isn’t being indexed.

Luckily for my sanity, Vanessa Fox came to the rescue in a back issue of ‘Office Hours’, another show on webmasterradio.fm. Not a direct solution to the problem, but an elegant way to narrow things down: segmenting the sitemap.

One site, many sitemaps

In a nutshell: chopping the sitemap up into a number of pieces allows you to see where in the site you might have issues. With only 1100 pages I could probably have manually done a site: search for each URL in less time than I wasted looking for a solution, but then I’d not have learnt anything along the way, would I?
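
In practice the segmentation is done with a sitemap index file pointing at one sitemap per chunk of the site, something like the sketch below (the file names are invented). Webmaster Tools then reports an indexed count per sitemap, which is what narrows down where the missing pages live:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <sitemap>
            <loc>http://www.example.com/sitemap-products.xml</loc>
        </sitemap>
        <sitemap>
            <loc>http://www.example.com/sitemap-news.xml</loc>
        </sitemap>
        <sitemap>
            <loc>http://www.example.com/sitemap-popups.xml</loc>
        </sitemap>
    </sitemapindex>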

So leading on from that, I thought I’d post this here on my site with one or two relevant keywords so that anyone else with the same question stands a chance of getting to the same point a little more quickly than I did!

As for the pages that were not indexed? A chunk of our news pages, which may be down to JavaScript-based pagination of the archives, and a fair chunk of the popup pages, which I’ve yet to fully investigate.

Onwards and upwards.

Underscores vs Hyphens and an Apology

If you read my blog via an RSS reader you probably noticed a few odd goings-on earlier today. I changed a few things on the site and all of the posts going back to last year appeared as new again, even if you’d read them.

Sorry ’bout that, but there was a method to my madness, or at least a method to my fiddling.

Although it’s not entirely obvious, one of the main reasons I started running this site was to mess around with search engine optimisation and try out the theories of various experts who also run blogs, but with a great deal more focus than me.

To that end, I’ve rewritten the code that generates my RSS feed and included some inline formatting to make it easier to read. Now when you read the blog from a feed reader it should look a bit more like the website, give or take. Well, more give than take.

While some of the changes were purely cosmetic, I also changed the URLs for all my blog posts.

The new URLs are what caused the posts to pop up as new in at least Feedburner and Google Reader. The change was to remove the dates from the URL itself and replace all the underscores with hyphens.

Removing the dates was because they just looked ugly compared to the WordPress style of using directories for the year and month. I don’t use WordPress, but decided that if I was going to mess with all my URLs I might as well change to nicer looking ones while I was at it.

If you do some searching for “hyphen vs underscore in URLs” using your favourite search engine you’ll find a bunch of writing, with the general wisdom falling on the side of hyphens. In fact as far back as 2005 Matt Cutts, a developer from Google, blogged about it. [1]

So why, you might ask, did I use underscores? Well. Ummmmm, ’cause it’s what I’ve always done is the only answer I’ve got.

A bit more searching around told me that the results, for Google at least, are apparently different between the two methods. Underscores caused URLs to be treated as phrases, while hyphens were more likely to produce search results for the individual words in the URL.

This sounded like something worthy of some experimentation, so wearing my best white lab coat I created some pages on a few different sites I look after, which were not linked from the navigation but were listed in the XML sitemaps.

I mixed and matched the URLs with underscores and hyphens and used some misspelt words and phrases. There were a total of 48 pages, spread over eight domains, which were all visited by Googlebot a number of times over an eight-week period.

I had a split of twelve pages with hyphens and matching content, twelve with hyphens and unmatched content, and the same split with underscores. Where the content matched I used the same misspelling of the words to get an idea of how well it worked. All of the sites have good placement of long-tail searches for their general content and get regularly spidered.

The end result is that most of the hyphenated URL pages that did not have matching keywords in content or tags were indexed against individual words in the URL (eight out of twelve). All of the pages that had hyphenated URLs and matching keywords in the content were indexed against those words.

The pages with underscores and non-matched content didn’t fare so well. Only four of the twelve pages got indexed against words in the URL, although nine of them were indexed against long-tail phrases from the URLs. Pages with underscores and matching content ranked lower for keywords in the URL than the hyphenated ones, although that’s not an accurate measure as they were misspelt words on pages with no backlinks.

So, end result: The common wisdom of using hyphens would appear to be valid and helpful if you’re running a site where long keyword rich URLs make sense, and the strength of the individual keywords might be more valuable than the phrase.

If you’re going for long-tail search results in a saturated market where single-keyword rank is hard to gain, you might want to mix it up a little and try some underscores; it certainly can’t hurt to try.

One thing to note for those not familiar with why this is even an issue. Spaces are not valid in the standard for URLs although they are common in poorly or lazily designed websites. If you’re really bored you can read the original spec by Tim Berners-Lee back in 1994 [2], or the updated version from 2005, also by Mr Berners-Lee. [3]

The long and short of that in this context is that you can use upper and lower case letters, numbers, hyphens, underscores, full stops and tildes (‘~’). Everything else is either reserved for a specific function or not valid and requires encoding. A space should be encoded as ‘%20’, and you can probably imagine how well that looks when trying to%20read%20things.

If you type a URL into your browser with a space, the browser converts it to ‘%20’ before sending it down the pipe for you. You sometimes see these encoded URLs with not just spaces but other random things in them, and they can cause random behaviour in some websites and software, so it’s best to avoid odd characters in your URLs.

Apologies again if you got some duplicates in your RSS reader over the last few hours. I’ll try not to do that again, and it’ll be interesting to see if a couple of pages that were being ignored by Google with underscores get indexed now.

References:

[1] Matt Cutts’ blog post from 2005: http://www.mattcutts.com/blog/dashes-vs-underscores/
[2] 1994 spec for URLs (RFC 1738): http://www.ietf.org/rfc/rfc1738.txt
[3] 2005 update to the URL spec (RFC 3986): http://www.ietf.org/rfc/rfc3986.txt

Google Location, the best of results, the worst of results

Google announced on their official blog a couple of days ago that location is the new black: search results enhanced by letting the surfer rank results ‘nearby’, or pick another location by name.

This is just a continuation of the direction online technologies have been moving, with social media leading the charge. Services like Foursquare give people their constant location fix, and Twitter has even gone local, allowing you to share your location in 140-character chunks.

Up until now the only real downside of this location-hungry trend has been the exact same thing touted as its benefit. Namely, that the world knows where you are. Privacy concerns are rife as the mobile social media crowd go about their daily lives in a virtual fish bowl.

pleaserobme.com highlights this by aggregating public location information from various social networks and figuring out if your house is empty. How long before insurance companies wise up and use Social media as a reason for not paying out on your house insurance? “But Mr Jones, you told the entire world you were away from your house, you encouraged the burglar.”

The last thing on earth I would want to do is share my location in real time with the world, but I was keen to experience Google location search to see how it actually affects search results.

The impact of location based search is going to be far more noticeable in the real world than the failed insurance claims of some iPod users.

The Google blog entry says that this is available to English google.com users, but we don’t have it here in New Zealand yet. We might have been first to see the new millennium, but not so much with Google changes.

To get my Google location fix I used a secure proxy based in the US and took in the view of the world from Colorado. Pretending to be within the 48 states is handy for all sorts of things.

[Screenshot: Location]

I did some searches from a clean browser install on a fresh virtual machine, so that personal search preferences or history would not taint the results. I then set about testing some long-tail search phrases that consistently give top-five results for our website at work.

No surprise that I got essentially the same results as I do here in New Zealand, but with more ads due to targeted AdWords detecting that I was in the US of A. What was disturbing was that selecting ‘nearby’ knocked our search result down past the tenth page of Google.

We sell products to the whole world and do not have a geographical target, so location search will clearly have an impact on our organic results as it rolls out. A business targeting a local area, such as a coffee shop or restaurant, might well benefit from location search, assuming Google knows where your website is.

But there’s the rub. How did Google decide our website was not near Colorado? Our webserver lives in Dallas TX, our offices are in New Zealand and Thailand, and we regularly sell products to over thirty countries.

Which leads to the impact of location for web developers and the SEO community. How do you tell Google what your ‘local’ is? I messed about with location names, and putting in ‘Christchurch’, where our business is based, got our long-tail hit back up to the front page, but only a fraction of our business comes from Christchurch, despite it being where our head office is.

I suppose anti-globalisation campaigners in their hemp shirts and sandals will be rejoicing at this news but I’m not so sure I’m going to be celebrating this development with the same enthusiasm.

A quick search for meta-tags or other methods of identifying your geographical target came up dry, and even if there was one we can only gently suggest to Google that it index and present things the way we as website owners want.

When the dust has settled and the ‘Nearby’ link is clicked Google are the only ones who know what the best results are. It just might be that their best just became your worst if your business has a broad geographical target and weak organic placement.

Yahoo plus Bing, Strange Bedfellows

The news that Bing is set to become the search engine behind Yahoo is quite old now. The ten-year deal between numbers two and three in the battle for search dominance was cut back in July this year.

There’s nothing too strange about Microsoft and Yahoo doing business together on the face of it; this came a bit over a year after a failed attempt by the Seattle software hawkers to buy out Yahoo lock, stock and flickr pages for a cool $44.6 billion in change.

What is strange is the positioning of Bing search results in the Yahoo pages.

Existing Yahoo search users have made a conscious effort to not use Live Search and its successor in mediocre search result delivery, Bing. How are they going to react to Steve Ballmer sneaking back into their lives in 2010 when the deal is set to become reality?

Looking at this through my rather rose-coloured glasses, the bulk of the Bing faithful is probably made up of three clearly defined groups: zealots who also lusted after attendance at Windows 7 launch parties, ignorant users who don’t know how to change their default search provider, and interior decorators who are drawn to the elegant interface but secretly wish they could search for misspelt words, yet don’t change for fear of affecting the feng shui of their office.

What portion of Yahoo users do you think changed their default search provider in their shiny IE8 install simply because they didn’t want to use something provided by Microsoft? While Microsoft know the power of bundling in making Bing the default for Windows 7 and IE8, they also know all too well that a portion of their customers resent them simply because they are a near-monopoly supplier in their market.

So, will Bing and Yahoo joining forces convert their 8% and 20% chunks of the search market into a combined 28%, or will it carve up Yahoo’s 20% for Google, Cuil, Ask, and all the other players out there?

Interesting times.

Bing checks in after 13 days. Dave Collins’ Blog

Finally some action from Bing in my search engine race, just after I said I’d given up. Thirteen days is not a startling performance by any measure, and there appears to only be the home page in the index so far, but that’s at least a good start.

Searching for phrases on the home page works, so it’s fully indexed, and the content indexed appears to be from yesterday. What’s more exciting is that the old URLs which were invalid have now vanished from the site:trash.co.nz search, although the one disallowed in robots.txt is still there.
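
For context, the robots.txt entry in question is a one-liner of this shape; the path here is a stand-in for the real one:

    # Every well-behaved spider is asked to stay out of this directory
    User-agent: *
    Disallow: /private/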

While we’re on the subject of Bing… I read an interesting tidbit on Dave Collins’ blog about Bing providing a Twitter search facility. Seems like I might have spoken too soon when I poked fun at Bing about not being real time. A quick play with the beta-test Bing/Twitter search engine shows that Bing is at worst two minutes behind Twitter.

Dave’s blog can be found at http://blog.sharewarepromotions.com and has some good general ‘net marketing information. You can follow him on twitter at http://twitter.com/TheDaveCollins.

There’s a bit of information on the official Bing blog about their partnership with Twitter [here] if you’re into a longer read.

So much for a Search Engine race

I’ve just finished watching a re-run of the BBC’s Top Gear. Richard Hammond took on an RAF Eurofighter in a Bugatti Veyron in one of their classically contrived races.

The Eurofighter came in first, but the Veyron wasn’t too far behind. It was a race of sorts, give or take. I only wish I could say the same for my attempt at search engine spider racing.

Google came in first by a country mile, with a complete indexing done in about 84 hours. We’re 10 days, a full 240 hours, into the race now and Yahoo has managed to get a grand sum total of one page indexed.

As for Bing. Well.

Bing is hanging out down at the start line with its eye-candy interface, clinging onto some pages that have not existed on this domain for at least two years.

While it is possible that Microsoft have developed a time machine, I think it’s more likely that msnbot doesn’t know an HTTP 404 response from a mouse pad. Combine that with an inability to honour robots.txt and I’m not sure the folks up in Seattle know for sure whether they’re running a search engine or a cake stall.

There has been a buzz in the blogosphere about real-time search for a while, with Twitter leading the charge in delivering on the dream. Twitter of course has the advantage that all the content it needs is provided on its doorstep by hordes of twittering users.

Back in the world of conventional search engines the battle to gather content is fought by the spiders. Clever robots sneaking around the web on the constant lookout for new or changed stuff. Indexing, ranking, summarising. The unsung heroes in our digital world even.

No prizes for guessing how poor the real-time search ability of Bing is going to be if it takes longer than ten days to index data that was handed to it on a platter, and two years to remove content that has been returning a 404 for that long.

My website is an internet backwater, I’m quite realistic about that little detail, but if Google pays attention to me, I’ll focus my SEO attempts on Google and ignore the other bit part players for the time being.

Bing and Yahoo slow off the mark

Well, in my humble opinion it’s been a very poor showing from Bing and Yahoo in my search engine race so far. Google has now spidered and indexed pretty much the whole site, but Bing and Yahoo have failed to fully index even the home page despite visiting the site a couple of times.

Yahoo is coming in runner-up as it has made a start on the process, with their Site Explorer showing the new <title> tag from the site. That’s a clear step up from Bing, which still shows URLs that have not functioned on the site for a number of years.

Searching for site:trash.co.nz on Google shows me 35 listings, which includes some of the old obscure stuff which has been given a new burst of life due to inbound links and the effect of having the 404 page responding with valid HTML as I described in my previous post.

Bing gives 7 results: one which is disallowed in robots.txt, the old home page entry, and five links which were removed from the site in 2004 when I sold my hosting business, although I believe there may have been valid pages on those URLs up until 2007, so we’ll give it the benefit of the doubt on that. Bing gives the same results for www.trash.co.nz and trash.co.nz, as does Google.

Yahoo takes you to the Site Explorer page when you search on site:trash.co.nz and the results speak for themselves: three URLs, all of them with old content. But if you specify www.trash.co.nz as the URL it does show the new <title>, so I think Yahoo is going to come in second place, leaving Bing out in the search engine cold.

I was surprised that Yahoo hasn’t figured out that www.trash.co.nz and trash.co.nz are the same thing, mind you, although that may well come with time as its databases update.

Almost 6 full days after submitting the sitemap to the big three, and it’s pretty apparent that Google’s spider and indexing process is far more effective than either of its cohorts.

Takeaways for today:

  • Submit CNAMEs for your sites separately to the Yahoo spider; it treats them separately, at least while a site is only partially indexed.
  • Don’t expect to see action in under a week from Bing or Yahoo when introducing a new site to the web. (Once a site is indexed that may be different, as they should monitor the sitemap. We’ll see!)

Houston, we have a winner in the search engine race

The race is in its final stretch now, with Google coming in the winner sometime overnight, NZ time. The new content of a few of the pages is up there, and searchable.

ref: The search engine race, Content vs Presentation

[Image: Google wins the indexing race]

Not only that, but if I cherry-pick some phrases from my blog posting from last night I’m hits number one and two, which reinforces some of the basic precepts of search engine optimisation. What’s also interesting is that the content snippet Google presents under the title differs for a given page depending on what you searched for.

Hmmm, SEO theory #321 out the door. The meta description is not always used by Google to present your results.

See the screenshots below.

[Screenshot: SERP 1 – Google]

[Screenshot: SERP 2 – Google]

The first screenshot shows the search results for ‘google lips tightly sealed non-disclosure’. Top hit is trash.co.nz/blog.html, with an extract from yesterday’s blog posting that contained that text. The second hit is a shortened version of the link I posted to Twitter, going to the actual blog posting.

The second hit is a direct one to the blog posting via my link-shrinker. This hit shows the description meta tag verbatim, as common wisdom would suggest. The link was posted to Twitter about 10 minutes after I posted that blog entry last night, so it was spidered, indexed and searchable in under 12 hours, which tells us that Google definitely plays favourites.

So, come on down, screenshot number two. Searching for ‘trash.co.nz blog’ gives me the same two top hits, but this time it shows the meta description tags for both, even though the first hit is the same page as the first in screenshot one. Hit number three is my twittered link again. Nice.

The other interesting thing about this is the dates that appear at the left of the descriptions. They are not in the meta tags, but boy-o-boy do they improve the effectiveness of the results presentation in Google. Note that the date shown for http://trash.co.nz/blog.html is different for the two result sets.

I’m picking they came verbatim from the XML sitemap in the case of screenshot number two, and in the case of screenshot number one Google has done something clever and used the change date for the target of the link in the content.
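
For reference, those sitemap dates come from the lastmod element on each entry, which looks like this (the date here is invented for the example):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
            <loc>http://trash.co.nz/blog.html</loc>
            <lastmod>2009-07-12</lastmod>
        </url>
    </urlset>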

Takeaways for today:

  • If you don’t already have a valid xml sitemap, what on earth are you doing reading this?  Get to it!
  • Meta description tags are all very well, but if your content is tag-soup you may still get crap results presentation.  Valid, clean HTML gave me two sets of clean results.
  • Googlebot plays favourites with twittered links, one would assume due to link popularity rules/formula that are secret squirrel stuff at Google HQ.
  • It takes about 84 hours for Google to spider, index and summarise new content.

In the next couple of days it’s going to be interesting to see how long the old home-page text persists in Google’s cache, and what we get as results from Bing and Yahoo as they bring up the rear.

Adding to that, it’s going to be interesting to see what the refresh period for changes to the site is now that the new sitemap is being used by Google. Let the SEO games begin!