How to make a website optimized for SEO

Now that you know what SEO is and which main factors Google takes into account when ranking a website, you still need to learn what to do so that your page has a chance of ranking high in the SERPs.

In this chapter we are going to talk about how to optimize the main positioning factors as well as the main SEO problems that arise when optimizing a website and their possible solutions.

We will divide the topics of this chapter into 4 large blocks:

Accessibility

Indexability

Content

Meta tags

1. Accessibility

The first step when optimizing the SEO of a website is to make sure that search engines can access our content. In other words, you have to check whether the website is visible to search engines and, above all, how they are seeing the page.

For various reasons that we will explain later, it may happen that search engines cannot read a website correctly, and being readable is an essential requirement for ranking.

Aspects to take into account for good accessibility

  • Robots.txt file
  • Robots meta tag
  • HTTP status codes
  • Sitemap
  • Web structure
  • JavaScript and CSS
  • Loading speed

Robots.txt file

The robots.txt file is used to prevent search engines from accessing and indexing certain parts of a website. It is very useful for keeping pages we do not want Google to show out of the search results. For example, in WordPress, to keep search engines away from the administration files, the robots.txt file would look like this:

Example

User-agent: *
Disallow: /wp-admin/

NOTE: You must be very careful not to block search engine access to your entire website without realizing it, as in this example:

Example

User-agent: *
Disallow: /

We must verify that the robots.txt file is not blocking any important part of our website. We can do this by visiting www.example.com/robots.txt, or through Google Webmaster Tools under "Crawl" > "robots.txt Tester".

The robots.txt file can also be used to indicate where our sitemap is located, by adding a Sitemap line at the end of the file.

So a complete robots.txt example for WordPress would look like this:

Example

User-agent: *
Disallow: /wp-admin/
Sitemap: http://www.example.com/sitemap.xml

If you want to go into more detail about this file, we recommend visiting the website that documents the robots exclusion standard.

Robots meta tag

The meta tag "robots" is used to tell search engine robots whether or not they can index the page and whether they should follow the links it contains.

When analyzing a page, you should check whether any meta tag is mistakenly blocking these robots' access. This is an example of what the tag looks like in HTML code:

Example

<meta name="robots" content="noindex, nofollow">

On the other hand, meta tags are very useful for preventing Google from indexing pages that do not interest you, such as pagination or filter pages, while still following the links so that it keeps crawling our website. In this case the tag would look like this:

Example

<meta name="robots" content="noindex, follow">

We can check the meta tags by right-clicking on the page and selecting "View page source".

Or, if we want to go a little further, the Screaming Frog tool lets us see at a glance which pages of the entire site have this tag implemented. You can see it in the "Directives" tab, in the "Meta Robots 1" field. Once you have located all the pages with these tags, you just have to remove them.


HTTP status codes

If any URL returns an error status code (404, 502, etc.), users and search engines will not be able to access that page. To identify these URLs, we again recommend Screaming Frog, because it quickly shows the status of every URL on your site.

IDEA: Every time you run a new crawl in Screaming Frog, export the results as a CSV so that you can later collect them all in the same Excel file.

Sitemap

The sitemap is an XML file that contains a list of the site's pages along with some additional information, such as how often each page changes its content, when it was last updated, and so on.

A small excerpt from a sitemap would be:

Example

<url>
  <loc>http://www.example.com</loc>
  <changefreq>daily</changefreq>
  <priority>1.0</priority>
</url>
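
For reference, a minimal sketch of how such entries sit inside a complete sitemap file, with the XML declaration and <urlset> wrapper defined by the sitemaps.org protocol:

Example

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>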

Important points to check regarding the sitemap:

It follows the sitemap protocol, otherwise Google will not process it properly

It is uploaded to Google Webmaster Tools

It is up to date. When you update your website, make sure all the new pages are in your sitemap

All the pages in the sitemap are being indexed by Google

In the event that the website does not have a sitemap, we must create one by following four steps:

Generate an Excel file with all the pages that we want indexed; for this we will use the same Excel we created when checking the HTTP response codes

Create the sitemap. For this we recommend the Sitemap Generators tool (simple and very complete)

Compare the pages in your Excel with those in the sitemap, and remove from the Excel the ones we do not want indexed

Upload the sitemap through Google Webmaster Tools

Web structure

If the structure of a website is too deep, Google will find it harder to reach all the pages. It is therefore recommended that the structure be no more than three levels deep (not counting the home page), since the Google robot has a limited time to crawl a site, and the more levels it has to go through, the less time it has left to reach the deepest pages.

That is why it is always better to create a horizontal web structure rather than a vertical one.

[Diagrams: vertical structure vs. horizontal structure]

Our advice is to make a diagram of the entire website so you can easily see the levels it has, from the home page to the deepest page, and calculate how many clicks it takes to reach each one.

Use Screaming Frog again to find out what level each page is on and whether it has links pointing to it.

JavaScript and CSS

Although in recent years Google has become smarter at reading these technologies, we must be careful because JavaScript can hide part of our content and CSS can display it in a different order than the one Google reads.

There are two methods to know how Google reads a page:

Plugins

Command "cache:"

Plugins

Plugins like Web Developer or Disable-HTML help us see how a search engine "reads" the web. Open one of these tools and disable JavaScript: all drop-down menus, links and text must remain readable, because that is what Google has to be able to see.

Then we disable CSS as well, since we want to see the actual order of the content, and CSS can change this completely.

Command "cache:"

Another way to know how Google sees a website is through the command "cache:"

Enter "cache: www.myexample.com" in the search engine and click on "Text only version". Google will show you a photo where you can see how a website reads and when was the last time you accessed it.

Of course, for the "cache:" command to work correctly, our pages must already be in Google's index.

After Google indexes a page for the first time, it determines how often it will revisit it for updates. This will depend on the authority and relevance of the domain that page belongs to and how often it is updated.

Either through a plugin or the "cache:" command, make sure you meet the following points:

You can see all the links in the menu.

All links on the web are clickable.

There is no text that is not visible with CSS and JavaScript enabled (that is, no hidden text).

The most important links are at the top.

Loading speed

The Google robot has a limited time when crawling our site: the less time each page takes to load, the more pages it will manage to crawl.

You should also bear in mind that a very slow page load can make your bounce rate skyrocket, which makes loading speed a vital factor not only for ranking but also for a good user experience.

To check the loading speed of your website, we recommend Google PageSpeed; there you can see which problems slow your site down the most, along with the advice Google offers to tackle them. Focus on those with high and medium priority.

2. Indexability

Once the Google robot has accessed a page, the next step is to index it. Indexed pages are included in an index where they are ordered according to their content, their authority and their relevance, so that Google can access them more easily and quickly.

How to check if Google has indexed my website correctly?

The first thing you have to do to find out whether Google has indexed your website correctly is to perform a search with the "site:" command; Google will return the approximate number of pages of your website that it has indexed:
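
For example, to list the indexed pages of the domain used throughout this chapter, you would type the following into Google's search box:

Example

site:www.example.com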

If you have Google Webmaster Tools linked to your website, you can also check the real number of indexed pages by going to Google Index > Index Status.

If you know (more or less) the exact number of pages on your website, you can compare it with the number of pages that Google has indexed. Three scenarios can occur:

  • The number in both cases is very similar. This means that everything is in order.
  • The number that appears in the Google search is lower, which means that Google is not indexing many of the pages. This happens because it cannot access all the pages on the website; to solve it, review the accessibility part of this chapter.
  • The number that appears in the Google search is higher, which means that your website has a duplicate content problem. The reason there are more indexed pages than actually exist on your website is most likely that you have duplicate content or that Google is indexing pages you do not want indexed.

Duplicate content

Having duplicate content means that the same content is accessible through several URLs. This is a very common problem, often unintentional, and it can also have negative effects on Google rankings.

These are the main reasons for duplicate content:

  • “Canonicalization” of the page
  • Parameters in URL
  • Pagination

“Canonicalization” of the page

This is the most common cause of duplicate content and occurs when your home page has more than one URL:

Example

example.com

www.example.com

example.com/index.html

www.example.com/index.html

Each of the above URLs leads to the same page with the same content. If Google is not told which one is correct, it will not know which one to rank, and it may rank precisely the version we do not want.

Solution. There are 3 options:

  • Do a redirect on the server to make sure only one version of the page is shown to users.
  • Define which subdomain we want to be the main one ("www" or "non-www") in Google Webmaster Tools.
  • Add a rel="canonical" tag to each version pointing to the one considered correct, as shown in the example below.

Parameters in URL

There are many types of parameters, especially in e-commerce: product filters (color, size, rating, etc.), sorting options (lowest price, relevance, highest price, grid view, etc.) and user sessions. The problem is that many of these parameters do not change the content of the page, which generates many URLs for the same content.

www.example.com/pencils?color=black&price-from=5&price-to=10

In this example we find three parameters: color, minimum price and maximum price.

Solution

Add a "rel = canonical" tag to the original page, thus avoiding any kind of confusion from Google with the original page.

Another possible solution is to indicate, through Google Webmaster Tools > Crawl > URL Parameters, which parameters Google should ignore when indexing the pages of a website.

Pagination

When an article, a product list, or tag and category pages have more than one page, duplicate content issues can occur even though the pages hold different content, because they are all focused on the same topic. This is a huge problem on e-commerce sites, where a single category can contain hundreds of articles.

Solution

Currently the rel="next" and rel="prev" tags let search engines know which pages belong to the same category or publication, so that all the ranking potential can be focused on the first page.

How to use the rel="next" and rel="prev" tags

1. Add the rel="next" tag in the <head> section of the first page:

<link rel="next" href="http://www.example.com/page-2.html" />

2. Add the rel="next" and rel="prev" tags to all the pages except the first and the last:

<link rel="prev" href="http://www.example.com/page-1.html" />

<link rel="next" href="http://www.example.com/page-3.html" />

3. Add the rel="prev" tag to the last page:

<link rel="prev" href="http://www.example.com/page-4.html" />

Another solution is to find the pagination parameter in the URL and enter it in Google Webmaster Tools so that it is not indexed.

Cannibalization

Keyword cannibalization occurs when several pages on a website compete for the same keywords. This confuses the search engine, which cannot tell which page is the most relevant for that keyword.

This problem is very common in e-commerce sites, where several versions of the same product all target the same keywords. For example, if a book is sold in softcover, hardcover and digital versions, there will be three pages with practically the same content.

Solution

Create a main product page, from which the pages for the different formats can be reached, and include on each of those pages a canonical tag that points to the main page. The best approach is always to focus each keyword on a single page to avoid any cannibalization problem.

3. Content

In recent years it has become quite clear that, for Google, content is king. So let's offer it a good throne.

Content is the most important part of a website: even if it is well optimized at the SEO level, if it is not relevant to the searches users make, it will never appear in the top positions.

To do a good analysis of the content of our website, you have a few tools at your disposal, but in the end the most useful thing is to view the page with JavaScript and CSS disabled, as we explained above. This way you will see what content Google is really reading and in what order it appears.

When analyzing the content of the pages you should ask yourself several questions that will guide you through the process:

  • Does the page have enough content? There is no standard measure of how much "enough" is, but it should be at least 300 words.
  • Is the content relevant? It should be useful to the reader; just ask yourself whether you would read it. Be honest.
  • Does it have important keywords in the first paragraphs? In addition to these, we should use related terms, because Google is very effective at relating them.
  • Does it suffer from keyword stuffing? If the content of the page overdoes the keywords, Google will not be happy. There is no exact number that defines a perfect keyword density, but Google advises being as natural as possible.
  • Does it have spelling mistakes?
  • Is it easy to read? If reading it is not tedious, it will be fine. Paragraphs should not be too long, the font should not be too small, and it is recommended that images or videos reinforce the text. Always remember which audience you are writing for.
  • Can Google read the text on the page? We have to prevent the text from being inside Flash, images or JavaScript. We can check this by viewing the text-only version of our page, using the command cache:www.example.com in Google and selecting that version.
  • Is the content well structured? Does it have its corresponding H1, H2, etc. tags, are the images well laid out, and so on?
  • Is it linkable? If we do not give users a way to share it, they very likely will not. Include buttons to share on social networks in visible places on the page that do not obstruct the content, whether it is a video, a photo or text.
  • Is it up to date? The more up-to-date your content is, the more frequently Google will crawl your website and the better the user experience will be.

ADVICE: You can create an Excel file with all the pages, their texts and the keywords you want to appear in them; that way it will be easier to see where you should reduce or increase the number of keywords on each page.

4. Meta tags

Meta tags are used to convey information to search engines about what a page is about, so that they can sort it and display it in their results. These are the most important tags to take into account:

Title

The title tag is the most important element among the meta tags. It is the first thing that appears in Google's results.

When optimizing the title, keep in mind that:

  • The tag must be in the <head> </head> section of the code.
  • Each page must have a unique title.
  • It should not exceed 70 characters, otherwise it will appear cut off.
  • It must be descriptive with respect to the content of the page.
  • It must contain the keyword for which we are optimizing the page.

We must never stuff the title with keywords: it makes users distrust the page and makes Google think we are trying to deceive it.

Another aspect to take into account is where to place the "brand", that is, the name of the website. It is usually put at the end to give more weight to the keywords, separating them from the site name with a hyphen or a vertical bar.
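
Putting these points together, a hypothetical title tag placed in the <head> of the page could look like this:

Example

<title>Keyword of the page - Brand Name</title>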

Meta-description

Although it is not a critical factor in the positioning of a website, it considerably affects the click-through rate in search results.

For the meta description we follow the same principles as with the title, except that its length should not exceed 155 characters. For both titles and meta descriptions we must avoid duplication; we can check this in Google Webmaster Tools > Search Appearance > HTML Improvements.
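
As a sketch, the meta description also goes in the <head> of the page; the wording here is purely illustrative:

Example

<meta name="description" content="A short, relevant summary of the page, under 155 characters, that invites the user to click.">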

Meta Keywords

At one time, meta keywords were a very important ranking factor, but Google discovered how easy they were to manipulate and removed them as a ranking factor.

H1, H2, H3... tags

The H1, H2, etc. tags are very important for a good information structure and a good user experience, since they define the hierarchy of the content, which improves SEO. We should give special importance to the H1 because it is usually at the top of the content, and the higher up a keyword appears, the more importance Google gives it.
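
A minimal sketch of that hierarchy in HTML, with placeholder headings:

Example

<h1>Main topic of the page</h1>
<h2>First subtopic</h2>
<p>Text developing the first subtopic...</p>
<h2>Second subtopic</h2>
<h3>Detail within the second subtopic</h3>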

Tag "alt" in the image

The "alt" tag in the images is added directly in the image code itself.

Example

<img src="http://www.example.com/example.jpg" alt="keyword cool" />

This attribute has to describe the image and its content, since it is what Google reads when crawling the image and one of the factors it uses to rank it in Google Images.

Conclusion

You now know how to make a page optimized for SEO, and that there are many factors to optimize if you want to appear in the top positions of the search results.
