5 Biggest SEO Fails Seen in 100+ Web Redesigns, and 2 to Watch Out For!

With Hyper Dog Media turning 11 this month, we have been looking back at the most common SEO problems created by website redesigns. On some redesigns, we’ve been on the team preventing these SEO killers from happening. But in the vast majority of cases, we are brought in after a web redesign kills organic – and sometimes referral – traffic.

Here are five problems we see time and again:

1. 301 redirects of old pages

As website technologies have evolved, so have URLs. An oft-forgotten part of website redesigns is 301 redirecting old page locations. Traffic can shrink instantly, but the conventional wisdom was that Google would figure it out. I’m not sure that approach ever worked – for anyone – but now it is absolutely vital to 301 redirect old page locations to their new equivalents.

Not only should URLs be redirected from the previous version of the site, but from ALL previous versions of the site. Doing so helps these key visitor groups stay happy:

  • Visitors that have bookmarked a page: Don’t make these folks return to Google when they could stay on your site.
  • Search engines that have ranked a page: If a page is ranking well, you don’t want to lose that!
  • Webmasters that have linked to your page: Dead links tend to get removed. But also, 301 redirects preserve the rankings boost from these inbound links.
  • Visitors to other sites that have followed a link to your page: Referral visitors are notoriously impatient when links are dead.

Dynamic content at various stages of the web’s development has often meant various suffixes on URLs: .shtml, .pl, .php, and/or many different parameters. Have you redirected these? Consider pulling ancient page URLs from analytics, archive.org, and even old backups. We’ve seen rankings boosts among clients that justify this level of obsession with 301 redirects!
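
Verifying the redirect map after launch is easier with a short script than with hand spot-checking. Below is a minimal sketch using Python’s requests library, assuming a hypothetical mapping of old URLs to their new homes (the URLs are placeholders for whatever you pull from analytics or backups):

# pip install requests
import requests

# Hypothetical mapping of old URLs to their new equivalents.
REDIRECT_MAP = {
    "http://www.site.com/services.shtml": "https://www.site.com/services",
    "http://www.site.com/about.php": "https://www.site.com/about-us",
}

for old_url, expected in REDIRECT_MAP.items():
    # Don't follow the redirect; inspect the first response directly.
    resp = requests.get(old_url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code != 301:
        print(f"FAIL {old_url}: expected 301, got {resp.status_code}")
    elif location != expected:
        print(f"FAIL {old_url}: redirects to {location}, expected {expected}")
    else:
        print(f"OK   {old_url} -> {location}")

Note that some servers return a relative Location header, so adjust the comparison to match your setup.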

2. Handling the development site

Blocking

During the development phase, Google can sometimes discover the new version of the website. It is fascinating how many ways Google can find content… until it finds your development site and penalizes you for duplicate content! Block the development version from crawlers before that happens.

Unblocking

You blocked the development version? Excellent. Now don’t forget to unblock it when you go live! Whether it’s a robots.txt file, password authentication, or robots meta tags on the pages, we’ve seen these blocking techniques go live with the new site. Make removing them part of your launch checklist. The consequences are severe and all too easy to trigger: lost indexed pages, traffic, and rankings.
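
A launch-day script can catch leftover blocking automatically. Here is a rough sketch using Python’s requests library, assuming a placeholder domain and a handful of placeholder pages:

# pip install requests
import requests

SITE = "https://www.site.com"  # placeholder domain

# 1. robots.txt should not block the whole site.
robots = requests.get(f"{SITE}/robots.txt", timeout=10).text
if "Disallow: /" in [line.strip() for line in robots.splitlines()]:
    print("WARNING: robots.txt is blocking the entire site")

# 2. Key pages should not require a login or carry a noindex directive.
for path in ["/", "/services", "/about-us"]:  # placeholder pages
    resp = requests.get(SITE + path, timeout=10)
    if resp.status_code in (401, 403):
        print(f"WARNING: {path} still requires authentication ({resp.status_code})")
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        print(f"WARNING: {path} is noindexed via the X-Robots-Tag header")
    if "noindex" in resp.text.lower():  # rough check for a robots meta tag
        print(f"WARNING: {path} may contain a noindex robots meta tag")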

Removing

In the rush to launch a new website, the development server might be left behind. These old subdomains or subdirectories have a way of showing up, though! Make sure you nuke that old server (from space, it’s the only way to be sure!). Or, just take it offline.

3. 404 error pages

With larger web development changes, the 404 error page can disappear. Or missing pages might start returning a 302 redirect! If your site has changed CMS, web server, or scripting language, make sure a friendly 404 error page comes up for missing pages, has analytics code on it, and returns an HTTP 404 status code.
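
Confirming the behavior takes only a few lines. A minimal sketch with Python’s requests library, using a made-up URL that should not exist:

# pip install requests
import requests

# Placeholder URL for a page that should not exist.
resp = requests.get(
    "https://www.site.com/this-page-should-not-exist-12345",
    allow_redirects=False,
    timeout=10,
)

if resp.status_code == 404:
    print("OK: missing pages return a 404 status code")
elif resp.status_code in (301, 302):
    print(f"FAIL: missing pages redirect ({resp.status_code}) instead of returning 404")
else:
    print(f"FAIL: missing pages return {resp.status_code} (a soft 404 if a page still renders)")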

4. Canonical tags

Canonical tags are a wonderful way to prevent duplicate content penalties. Unfortunately, some things can go wrong. We’ve seen sites where every version of a page declares itself as the canonical one, which amounts to sending Googlebot noise. It’s worse than saying nothing at all.

One otherwise valid implementation we’ve seen cause trouble is the use of relative canonical tags. We’ve seen a tag like this:

<link rel="canonical" href="/services" />

show up across several subdomains and protocols:

http://www.site.com/services

http://site.com/services

https://www.site.com/services

https://site.com/services

This can confuse Googlebot, as each of these pages describes itself as the canonical version. It’s best to use an absolute URL, and to make sure your server isn’t emitting the same tag for both http and https: <link rel="canonical" href="https://www.site.com/services" />
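
To confirm the fix, you can fetch each protocol/subdomain variant and compare its canonical tag against the one preferred URL. A sketch assuming Python’s requests and BeautifulSoup libraries, with the placeholder URLs from above:

# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

VARIANTS = [
    "http://www.site.com/services",
    "http://site.com/services",
    "https://www.site.com/services",
    "https://site.com/services",
]
PREFERRED = "https://www.site.com/services"  # the one true canonical URL

def canonical_href(html):
    """Return the href of the first rel=canonical link tag, if any."""
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("link"):
        if "canonical" in (link.get("rel") or []):
            return link.get("href")
    return None

for url in VARIANTS:
    href = canonical_href(requests.get(url, timeout=10).text)
    if href != PREFERRED:
        print(f"FAIL {url}: canonical is {href!r}, expected {PREFERRED!r}")
    else:
        print(f"OK   {url}")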

5. Old dirty sitemap.xml files

The sitemap.xml file is an excellent way to communicate URLs to Google, along with their freshness and priority. But we encounter many sitemap.xml files that are full of these problems (a quick audit sketch follows the list):

  • Old, dead, or missing pages
  • URLs that redirect
  • URLs that do not match what Google can actually crawl, or that conflict with the URLs listed in canonical tags
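
As a rough audit, pull every <loc> out of the sitemap and check its status code. A sketch assuming Python’s requests library and a placeholder sitemap URL:

# pip install requests
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.site.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
for loc in root.findall(".//sm:loc", NS):
    url = loc.text.strip()
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if resp.status_code != 200:
        # Flags dead or missing pages (404/410) and URLs that redirect (301/302).
        print(f"{resp.status_code} {url}")

Cross-checking the surviving URLs against your canonical tags closes the loop on the third bullet.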

 

And here are two more problems we expect to see in redesigns this year:

6. HTTPS Implementation

HTTPS was added as a small ranking signal in the last year, and many sites have made the switch. Or have they? Often image files, third-party scripts, or other elements mean that not every element on a page is served over https. Google has let this slide, but just last week it signaled that may change.
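
A crude scan of a page’s HTML can surface the most common offenders, though a browser console or a full crawler will also catch resources loaded by scripts. A sketch assuming Python’s requests and BeautifulSoup libraries and a placeholder URL:

# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

PAGE = "https://www.site.com/"  # placeholder HTTPS page

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")

# Flag elements whose src/href loads over plain http on an https page.
for tag, attr in [("img", "src"), ("script", "src"), ("link", "href"), ("iframe", "src")]:
    for el in soup.find_all(tag):
        url = el.get(attr, "")
        if url.startswith("http://"):
            print(f"Mixed content: <{tag}> loads {url}")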

7. Mobile Friendly pages

The mobile-friendly update ranks pages individually, so it’s important to test your site’s most important landing pages on mobile devices. But also check that devices are actually being served the mobile version of your site: even big brands such as Noodles & Company have discovered their mobile site wasn’t being triggered.
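
You can also spot-check whether a mobile user agent gets the mobile experience. A minimal sketch with Python’s requests library, a placeholder landing page, and an example iPhone user-agent string; the m. subdomain and viewport checks are rough signals, not a substitute for testing on real devices:

# pip install requests
import requests

PAGE = "https://www.site.com/services"  # placeholder landing page
MOBILE_UA = (
    "Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X) "
    "AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13A344 Safari/601.1"
)

resp = requests.get(PAGE, headers={"User-Agent": MOBILE_UA}, timeout=10)

# Two rough signals: a redirect to an m. subdomain, or a responsive viewport meta tag.
if "//m.site.com" in resp.url:
    print("Mobile UA was redirected to the mobile subdomain")
elif '<meta name="viewport"' in resp.text:
    print("Page serves a responsive viewport meta tag")
else:
    print("WARNING: no obvious mobile handling detected; verify on a real device")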

 

Websites are meant to be changed. Not only do prospects expect fresh content and design at regular intervals, but search engines do too! With Google’s newest updates, there are more changes happening than ever. Change is good. Embrace change, and redesign that site – but be careful not to make these common mistakes!

 

PSST! Need a Free Link? Get a free link for your agency: Would you like our monthly take on the changing world of SEO delivered to your inbox? Subscribe to the Hyper Dog Media SEO Newsletter HERE! When you subscribe, each newsletter will contain a link idea for your business!
