
Technical SEO Facts For Beginners

Mar 7, 2021 | SEO

It’s no secret that with SEO, content is king. However, there’s more to good SEO than just having good, unique content. Technical SEO is exciting and requires excellent problem-solving skills and critical thinking.

This article will cover some technical SEO facts that every beginner should know. There’s no in-depth technical how-to information here, just some simple facts that, if implemented correctly, could help your website rank higher in the search results.

1. Page speed is Queen

If content is king, then page speed is surely Queen. Most people think of slow loading times as a pain in the arse for users, but having a slow website has far wider consequences than just annoying users. Page speed has been a ranking factor for a long time, and Google has confirmed that it uses page speed as a ranking factor for mobile searches too. So if you want to rank higher, and keep your users happy, why wouldn’t you want a fast-loading website?

The simplest way to check your page speed is with Google’s own PageSpeed Insights tool. It gives a good analysis of your site’s speed and shows you the key areas slowing things down. The tool also has a mobile-focused report, which checks your site’s loading speed specifically on mobile and even simulates a mobile connection. But don’t read too much into what this tool tells you. It’ll often highlight ‘issues’ on your site that Google itself put there (test a site running Analytics code or a Google font and see for yourself). For a proper in-depth look at your loading times, I recommend using a variety of tools and tackling the issues that appear in more than one set of results first.

2. Robots.txt files are case sensitive and have a specific place to live

Did you know that the robots.txt filename must be all lower case? If you capitalise it (Robots.txt), it won’t be recognised! Additionally, crawlers will only look in one place for a robots.txt file, and that’s the site’s root directory. If it’s not found there, crawlers will usually move on and stop looking for it.
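
For illustration (the example.com domain and paths are just placeholders), a minimal robots.txt served from the root of the site might look like this:

```
# Lives at https://www.example.com/robots.txt (lowercase filename, site root only)
User-agent: *
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml
```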

3. Infinite scroll is not good (usually)

That’s because crawlers can’t make use of it. And if a crawler can’t see the content hiding behind the infinite scroll, your pages may not rank for it.

When using infinite scroll, make sure you have a paginated series of pages as well, and make use of replaceState/pushState on the infinite scroll page. This is a really handy little trick that many web designers simply aren’t aware of. You can check your own site by looking for rel=”next” and rel=”prev” in the code.
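
As a rough sketch of the idea (the URLs, page numbers and function name below are made up purely for illustration), each chunk of the infinite scroll gets its own paginated URL, and a small script updates the address bar as the user scrolls:

```html
<!-- On page 2 of the paginated series -->
<link rel="prev" href="https://www.example.com/blog/page/1/">
<link rel="next" href="https://www.example.com/blog/page/3/">

<script>
  // Called whenever the next chunk of content is appended to the page,
  // so the URL in the address bar matches the equivalent paginated page.
  function onChunkLoaded(pageNumber) {
    history.pushState({ page: pageNumber }, "", "/blog/page/" + pageNumber + "/");
  }
</script>
```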

4. No one cares how you structure your sitemap (including Google)

As long as it’s an XML file, you can structure your sitemap any way you like. Breaking down your links into categories, products, posts or any other way you can imagine will have no effect on how Google crawls your site. So by all means, make it pretty, but do it for you, not for the robots.

5. Using noarchive tags doesn’t hurt your Google rankings

This tag is great for stopping Google from showing cached versions of your pages in search results, but it will not negatively affect the page’s overall ranking. Simple as that.
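
If you do want to use it, it’s a one-liner in the page’s head:

```html
<!-- Stops Google showing a cached copy of this page in its search results -->
<meta name="robots" content="noarchive">
```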

6. Home first, the rest later

It’s not a hard and fast rule, but generally speaking, Google will find your home page first and crawl it from there. There could be an exception if there are a large number of links to a specific page on your site, but if we’re just talking about Google naturally finding your site, odds are it’s going home first.

7. Internal and external links are not equal

Everyone knows that it’s good practice to internally link your pages and link to external sites sometimes. But not everyone knows that these links are weighted differently by Google. Spoiler alert: external links are thought to be worth more!

8. Crawl budget exists, and you can check it

Crawl budget is the number of pages a search engine will want to crawl on your site in a certain amount of time. Once you’ve used up your budget, the crawler will leave your site, which could leave valuable pages uncrawled. You can get a good idea of your budget using Google Search Console, and even try to increase it: check the crawl stats for your site and look at the activity over the last 90 days.

9. No SEO value? Disallow that page!

Virtually every site will have pages that offer no SEO value. They may not have completely unique content or be particularly well written. Such pages can include privacy policies, terms and conditions, cookie policies, expired promotion pages and many more. Knowing point number 8, it makes sense not to waste your budget on pages that have zero SEO value.
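
As a sketch (the paths below are typical examples, not a recommendation for your specific site), you’d block them in robots.txt like this:

```
User-agent: *
Disallow: /privacy-policy/
Disallow: /terms-and-conditions/
Disallow: /cookie-policy/
Disallow: /promotions/expired/
```

Just bear point 13 in mind: Disallow stops crawling, not indexing.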

10. Sitemaps – Did you know?

Some interesting facts about sitemaps that not everyone knows:

  1. XML sitemaps must use UTF-8 encoding
  2. Sitemap URLs cannot include session IDs
  3. They can contain no more than 50,000 URLs
  4. They must be no bigger than 50MB (uncompressed)
  5. You can use different sitemaps for different types of media
  6. You can use a sitemap index file to link different sitemap files together (see the example below)
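
For example, a simple sitemap index file (the example.com file names are placeholders) linking a posts sitemap and a products sitemap looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-posts.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-products.xml</loc>
  </sitemap>
</sitemapindex>
```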

11. You can check how Google ‘sees’ your site from a mobile perspective

With Google using a mobile-first index, it’s so important to make sure your pages perform as well on mobile as they do on desktop. You can use Google Search Console’s Mobile Usability report to find specific pages with issues on mobile devices. You can also use their mobile-friendly test.

12. Keep loading time below 3 seconds

Longer loading times won’t always hurt your rankings, but when a Google Webmaster recommends keeping your loading times to 2 or 3 seconds, you can bet the SEO community listens.

13. Robots.txt directives won’t stop Google ranking your site (sort of)

There’s so much confusion over what “Disallow” does in your robots.txt file. This directive simply tells Google not to crawl the disallowed pages or folders. What it doesn’t do is stop Google from indexing those pages. An extract from Google’s Search Console Help documentation says:

You should not use robots.txt as a means to hide your web pages from Google Search results. This is because other pages might point to your page, and your page could get indexed that way, avoiding the robots.txt file. If you want to block your page from search results, use another method such as password protection or noindex tags or directives.
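
If you genuinely want a page kept out of the results, a noindex tag in the page itself is the usual way to do it:

```html
<!-- Lets Google crawl the page but not show it in search results -->
<meta name="robots" content="noindex">
```

Just remember that Google has to be able to crawl the page to see the tag, so don’t also disallow it in robots.txt.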

14. Canonical works from a new domain to your main domain

The rel=”canonical” tag works across domains, which means you can keep the SEO value built up by an old domain name while using a newer domain.
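
For example, a page on the secondary domain can point a canonical tag at the equivalent page on your main domain (both URLs here are placeholders):

```html
<!-- Placed in the head of the page on the other domain -->
<link rel="canonical" href="https://www.your-main-domain.com/some-page/">
```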

15. Redirects and how long to keep them

It can take months for Google to recognise that a site, page or other file has moved. That’s why Google themselves recommend keeping 301 redirects live for at least 12 months. But personally, if it can be at all avoided, I’d recommend never removing the redirect.
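
As a sketch, assuming an Apache server with mod_alias enabled (the paths and domain are placeholders), a permanent redirect can be as simple as one line in .htaccess:

```
# 301 (permanent) redirect from the old URL to its new home
Redirect 301 /old-page/ https://www.example.com/new-page/
```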

16. You can control Google (the search box at least)

Google will sometimes include a search box with your listings. There’s no real way to make Google show this, but it’s powered by Google search and works to show users relevant content from within your site.

It’s possible to power this search box using your site’s own search engine. You can even disable the search box (if you’re into that sort of thing) by using the nositelinkssearchbox meta tag.
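
For illustration (example.com and the search URL pattern are placeholders), the structured data that points the box at your own site search, and the meta tag that switches the box off, look like this:

```html
<!-- Tell Google how to query your site's own search engine -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>

<!-- Or, to opt out of the sitelinks search box entirely -->
<meta name="google" content="nositelinkssearchbox">
```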

17. You can stop Google translating your site in search

Using the ‘notranslate’ meta tag tells Google that it shouldn’t offer a translation of a specific page in different language versions of Google search. This is a great option if you’re not entirely convinced Google can accurately translate your content.
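
The tag itself (placed in the page’s head) is simply:

```html
<!-- Ask Google not to offer a translation of this page in search results -->
<meta name="google" content="notranslate">
```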

18. Firebase app indexing can get your app into Google

If you have an app that isn’t indexed yet, you can use Firebase app indexing to enable results from your app to appear when someone who has your app searches for related keywords.
