E-Commerce SEO: 7 ways to reduce duplicate content

Duplicate content is in the nature of e-commerce sites. Every e-commerce platform, no matter how SEO-friendly the manufacturer advertises it to be, produces some form of duplicate content, and that often prevents top rankings in search engines. Examples include product lists in categories or wish lists, translated products, or simply poor product data created through automated product imports or a lack of planning when item data is entered manually.

No penalty, but dampening

Contrary to the general opinion of many experts, there is no penalty for duplicate content. Google has confirmed this multiple times on its own blog. So much for the good news. The bad news? Even though Google does not punish sites with duplicate content, its algorithm is likely to dampen their rankings. The insidious thing is that this is barely visible at first. The key terms are relevance and authority.

If there is duplicate content for the same keyword topic, the link authority is split across two or more pages. This weakens both relevance and authority, two essential parameters in Google's algorithm. Since more than one page targets the same keyword topic, the topic becomes less relevant to search engines because it is harder to determine which page has priority. And since several pages are internally linked for the same keyword topic, the links that could all reinforce a single page instead support multiple pages and bring no advantage. Dampening is thus a weakening of the signals that a website sends to search engine algorithms, which hurts the website's ability to rank.

But why is that not a penalty? In Google's world, a penalty is imposed manually by a real human when certain pages or an entire website meet a predefined definition of a violation. Dampening is algorithmic in nature and is usually harder to diagnose, because Google does not alert you to algorithmic issues through Google Search Console the way it notifies you of a manual penalty.

Unwanted SEO effects

The problem with getting rid of duplicate content is that simply removing pages can produce other unwanted effects, such as:

  • Customer experience: In some cases, your buyers need to see these pages. Sorted product overviews, wishlist pages, print pages and more are often generated automatically as duplicates. Removing such pages could hurt the customer experience and, potentially, sales.
  • Link authority: Each indexed URL carries at least a small amount of link authority. Simply deleting pages therefore wastes value in organic search.

The goal, therefore, is to identify exactly what you want to achieve. Do you want to remove the page for search engines but keep it for shop visitors? Do you need to eliminate the page for both visitors and search engines? Is it more important to get rid of the page right away (for legal or other reasons), or should you try to get some benefit from it in search engines through other measures?

The table below should help you to make informed decisions.

7 ways to remove duplicate content

[Table: the seven options at a glance]

The first option on the list, the 301 redirect, is probably the SEO star. Whenever possible, use a 301 redirect to remove duplicate content. It is the only option that guarantees both the search engine bots and the web visitors are forwarded. The 301 redirect is not just a direct signal to search engines; it also passes the link authority on to the new URL.
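
For illustration, here is a minimal sketch of a 301 redirect in a hypothetical Flask-based shop; the routes and URLs are placeholders, not part of any specific platform.

from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical duplicate: a printable product view that mirrors the main product page.
@app.route("/product/<slug>/print")
def product_print(slug):
    # 301 = permanent: bots and visitors are both forwarded,
    # and the link authority passes on to the target URL.
    return redirect(f"/product/{slug}", code=301)

@app.route("/product/<slug>")
def product(slug):
    return f"Product page for {slug}"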

If the administrator cannot redirect the page for whatever reason, or if customers should continue to see it, you can try canonical tags. These usually have to be implemented by the e-commerce site's developers. A canonical tag is a hint in the source code of a web page: it references a standard resource, the canonical URL, for pages with the same or nearly the same content. Google uses this information to avoid evaluating the pages with duplicate content negatively. However, there is a residual risk that Google will ignore the hint.
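
The canonical reference usually lives as a link tag in the page's head; Google also accepts it as an HTTP Link header, which is what this hedged sketch sets (again a hypothetical Flask shop with placeholder URLs).

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/product/<slug>/print")
def product_print(slug):
    resp = make_response(f"<h1>Print view for {slug}</h1>")
    # Point search engines to the canonical product URL instead of this near-duplicate.
    resp.headers["Link"] = f'<https://shop.example.com/product/{slug}>; rel="canonical"'
    return resp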

Number three on the list is the 302 redirect, a relative of the almighty 301 redirect. It is a temporary detour: the old page is not deindexed, because Google assumes it will become visible again.
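
A short sketch of the difference, under the same Flask assumption: the temporary 302 keeps the old URL in the index.

from flask import Flask, redirect

app = Flask(__name__)

# Assumed stand-in for a shop-specific stock check.
TEMPORARILY_UNAVAILABLE = {"summer-dress"}

@app.route("/product/<slug>")
def product(slug):
    if slug in TEMPORARILY_UNAVAILABLE:
        # 302 = temporary: the product URL stays indexed because Google expects it back.
        return redirect("/category/summer-collection", code=302)
    return f"Product page for {slug}"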

Removing content can be harmful

The remaining four options only de-index content. They do not redirect bots or shop visitors, and they do not pass link authority on to another page. They should only be used when nothing else works (for example a 301 redirect or a content cleanup), because with these measures the link authority of the pages disappears as well, and link authority is the most important currency in the search engine universe.

Link authority is built up through a variety of measures, often a strange combination of luck and constant optimization. Many marketing disciplines, such as press work, social media marketing or a presence in blogs and forums, play an essential role. If you remove a page, this investment is lost. Google Search Console even offers the option to temporarily exclude URLs, but you should handle this feature with caution: if misused, it could, in the worst case, take the entire site out of the index.

The only way to make a page truly invisible to both humans and machines is to remove it from the server, so that the URL returns a "file not found" 404 error, or to redirect it to a new URL.
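
A minimal sketch of the "file not found" case, with an assumed list of removed products standing in for pages deleted from the server.

from flask import Flask, abort

app = Flask(__name__)

# Hypothetical products removed for legal or other reasons.
REMOVED_PRODUCTS = {"recalled-toy"}

@app.route("/product/<slug>")
def product(slug):
    if slug in REMOVED_PRODUCTS:
        # 404: the page disappears for visitors and search engines alike.
        abort(404)
    return f"Product page for {slug}"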

Meta robots noindex tags and robots.txt disallow directives should be the last options, for a combination of reasons. First, they waste link authority. Noindex and robots.txt disallow are directives that prevent search engines from indexing specific pages in different ways. If a page is already indexed, it carries some link authority, small as it may be, and that should not be wasted lightly. Second, search engines used to follow these directives strictly. Today that is no longer the case, especially for content that has already been indexed; it can often take months for such entries to take effect. If you want to get a page out of the index, you should choose other methods.
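
Two hedged sketches under the same hypothetical Flask setup: a noindex directive sent as an X-Robots-Tag response header, and a robots.txt route with a disallow rule. The paths are placeholders.

from flask import Flask, Response, make_response

app = Flask(__name__)

@app.route("/wishlist")
def wishlist():
    resp = make_response("Your wishlist")
    # Equivalent of a meta robots noindex tag, sent as an HTTP header.
    resp.headers["X-Robots-Tag"] = "noindex, follow"
    return resp

@app.route("/robots.txt")
def robots_txt():
    # Disallow crawling of print views; note that a disallow rule alone
    # does not remove URLs that are already in the index.
    rules = "User-agent: *\nDisallow: /product/*/print\n"
    return Response(rules, mimetype="text/plain")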

Manual cleanup

Last but not least, there is, of course, manual cleanup. Tools like Google Search Console surface duplicate content very well. Often the cause is poor data entry: duplicate meta titles or meta descriptions frequently result from sloppy imports when products are created in the shop system. Copying existing articles to set up similar ones faster is another common stumbling block. When translating articles, you should also make sure the most important fields get unique texts for each language.
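
As a starting point for such a cleanup, here is a small sketch (not from any particular tool) that scans an exported product feed for duplicate meta titles; the file name and column names are assumptions about your export.

import csv
from collections import defaultdict

# Assumed CSV export with "url" and "meta_title" columns.
titles = defaultdict(list)
with open("product_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        titles[row["meta_title"].strip().lower()].append(row["url"])

# Meta titles shared by more than one URL are cleanup candidates.
for title, urls in titles.items():
    if len(urls) > 1:
        print(f"Duplicate meta title '{title}' ({len(urls)} URLs):")
        for url in urls:
            print(f"  {url}")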

Shop systems are far more complex when it comes to SEO measures, and duplicate content is often created automatically in the background. The measures above can help reduce it. We are happy to help with appropriate optimization measures.

Written by

Christina Cheeseman is a Technology Strategist at Elitech Systems. She enjoys writing about Technology, marketing & industry trends.
