You sit down to read the new book from your favorite author. You open the book and start reading. After a while, you come across two pages that are identical. You’re a bit confused. How did this happen? Was this a misprint? Did the assembly line accidentally put two duplicate pages in the book?
Duplicate content creates the same kind of confusion for users and search engines as finding a duplicate page in a book. Search engines aim to present users with the most relevant information they can find. If your website has duplicate content, such as multiple pages with the same content, search engines have no way of knowing which version of a webpage to show users. As a result, search engines may penalize websites that contain duplicate content. Here are six tips to prevent your website from containing duplicate content.
A robots.txt file tells search engines which pages on your site you don’t want crawled and helps direct web traffic. Think of a treasure map: the map tells you where the treasure is located but also where it is not. You could still explore those areas, but you won’t find what you’re looking for. If you have duplicate content, such as printer-friendly versions of webpages, add rules that block those URLs to your robots.txt file and then resubmit the file to the search engines. Search engine crawlers will no longer crawl those pages, and users will be directed to the most relevant pages on your website.
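As a minimal sketch, assuming your printer-friendly duplicates live under a hypothetical /print/ directory, the relevant robots.txt rules could look like this:

```
# robots.txt — served from the root of your domain (e.g., https://www.example.com/robots.txt)
# The /print/ path is a hypothetical location for printer-friendly duplicate pages.
User-agent: *
Disallow: /print/
```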
Search engines may consider pages that contain similar content to be duplicates. If you have pages on your site with similar content, consider merging them or adding content that highlights each page’s unique aspects. For example, suppose your medical practice’s website has location pages that contain the same information for each of its four clinics. You could either merge these pages into a single locations page or highlight the different services at each clinic to make the pages unique.
When you update a web page, you might remove an older page that contained similar information or move it to a new URL to avoid confusing duplicate content. You want to make sure, however, that you don’t lose website visitors in the process.
301 redirects ensure you don’t lose users as content moves from one URL to another. They do this by automatically taking users to a webpage’s new URL when they visit the old one. In this way, a 301 redirect is like the US Postal Service forwarding your mail after a move: it forwards users to the correct web page after you’ve removed an old web page or moved content to a new location on your site. In addition, 301 redirects let search engines know that previously indexed content, which may have reached a prominent place in search engine results, now resides at a new URL.
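As a sketch of how this might be set up on an Apache server using an .htaccess file (other servers, such as nginx, use their own syntax, and the paths below are hypothetical examples):

```
# .htaccess (Apache) — permanently redirect a retired page to its replacement
# /old-services.html and /services/ are hypothetical example paths.
Redirect 301 /old-services.html /services/
```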
Syndicated content may be the source of duplication you’re most familiar with, but blogs and other long-form writing are not the only content on the internet. If you have an e-commerce website, your product pages are also content. Reusing the same manufacturer descriptions, or copying descriptions from other e-commerce websites, can cause search engines to penalize your site.
A website’s boilerplate is the standard text, such as the copyright notice, that repeats at the bottom of every webpage. Google and other search engines can flag this as duplicate content. Consider consolidating this content: place a short summary on each web page with a link to a separate page that holds all of your website’s copyright information.
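As a rough illustration, assuming a hypothetical /legal/ page that holds the full notice, a consolidated footer might look like this:

```
<!-- Shared footer: a one-line summary plus a link to a single page with the full legal text -->
<footer>
  <p>&copy; Example Clinic. <a href="/legal/">Copyright and legal information</a></p>
</footer>
```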
Internal links are links that go from one webpage to another on the same website. They allow users to navigate your website with ease, help search engines establish page hierarchy, and help spread ranking power around your website. Formatting the URLs in your internal links consistently will help search engines efficiently identify the information most relevant to a user’s query.
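For instance, here is a minimal sketch of consistent internal linking, using a hypothetical example.com domain; linking to several variants of the same page can look like separate URLs to a search engine:

```
<!-- Consistent: every internal link points to the same version of the URL -->
<a href="https://www.example.com/services/">Our services</a>

<!-- Inconsistent: these variants can be treated as different URLs
     https://example.com/services
     http://www.example.com/Services/
-->
```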
Want to learn more tips for your website? Check out our blog and our free ebook library!