Duplicate content is content that appears on the web in more than one location. When identical content exists in many locations, it becomes hard for a search engine to decide which version is most relevant to a given search query. To give users the best search experience, a search engine will rarely show multiple copies of the same content, so it is forced to decide which version is the best or most original.
In search engine optimization, duplicate content is the term for content that appears on more than one website.
Duplicate content can be a substantial block of content, within or across domains, that is either closely similar to or an exact copy of another. When many different pages contain essentially the same content, Google and other search engines may stop displaying the site that copied the original content in relevant search results.
Remember when Google Panda rolled out? There were many sad accounts of websites that had been removed entirely from Google's index. One reason is that Panda was released partly to address duplicate-content problems, and it caused a huge drop in the rankings of some sites. Article directories, content farms, and other sites that carried thousands of duplicated pages saw steep declines in rank. Panda really does frown on duplicate content. On the other hand, some types of duplicate content do not provoke Panda's wrath. For instance, copyright notices, a common form of boilerplate, should not be removed from the content they belong to; they call readers' attention to copyrighted material. When you publish similar documents, such as pages on how SEO works for big companies versus small ones, make each as unique as you can. This way, you steer clear of issues down the road.
Kinds of Duplicate Content
Malicious duplicate content is content intentionally duplicated in an effort to gain more traffic by manipulating search engine results. This is known as search spam. Many tools are available to verify content uniqueness. In some cases, Google, Bing, and other search engines penalize the rankings of offending pages and websites in the search engine results page for spammy duplicate content.
Non-malicious duplicate content can include page variations, such as store items that show up under different URLs. It can also appear as printer-friendly or mobile-optimized versions existing alongside the normal HTML. The same-content issue also arises when a website can be reached through different subdomains, such as with or without the 'www', or when a website fails to handle trailing slashes in URLs consistently. One popular form of duplicated content is syndicated content: if one webpage syndicates another webpage's content, linking back to the original site is important.
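The www/non-www and trailing-slash variants described above can be collapsed into a single preferred URL before publishing or redirecting. Below is a minimal sketch of that idea in Python; the domain and the specific normalization rules (lowercase host, strip `www.`, drop trailing slash and fragment) are illustrative assumptions, not rules from this article:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Map common duplicate-URL variants onto one canonical form."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    netloc = netloc.lower()
    if netloc.startswith("www."):            # fold www/non-www together
        netloc = netloc[len("www."):]
    if path != "/" and path.endswith("/"):   # normalize trailing slashes
        path = path.rstrip("/")
    if not path:
        path = "/"
    return urlunsplit((scheme, netloc, path, query, ""))  # drop fragment

# Two duplicate-looking variants collapse to the same address:
variants = [
    "https://www.example.com/shoes/",
    "https://example.com/shoes",
]
print({canonicalize(u) for u in variants})  # one URL remains
```

A site would typically apply a rule like this in its server's redirect configuration rather than in application code, so that every variant answers with a single address.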
Why Should You Be Concerned About It?
Duplicate content is something you need to be concerned about, right? To understand the issue fully, look at it from the perspective of a search engine trying to give users the best possible experience, rather than from the perspective of a website owner. From that standpoint, it is easier to understand how duplicate content affects a website and its organic traffic.
As mentioned earlier, duplicate content appears in more than one location on the web. When two articles duplicate each other, the search engine must decide which to rank higher. The factors that come into play include how much content is duplicated on the page and across the site as a whole, which version was seen first, and the relative strength of each site.
If it is your site that carries the duplicated content, that page simply won't rank; its weight will be negligible. The site also registers a mark against it as a reliable source of unique, quality content.
What to Expect
As a general rule, do not expect high search engine rankings if your content can be found somewhere else online, particularly if that other location is a more trusted site. Don't expect any rankings at all if all you use are automatically generated pages that add very little value. The exception, of course, is when the duplicated content appears on your own site under the same domain.
Your Best Bet
If you want high rankings in Google, or anywhere on the web, start with plenty of good, original content. Ask yourself how much high-quality versus low-quality content you have, and identify which content is high in quality. This way, the pages that get indexed really are the ones with better quality.
The best thing to do is to have a single, canonical version of each piece of content on your site, with unique, rich text written specifically for that page. Google wants to reward informative, relevant, rich, and remarkable content in its organic listings, and in the last few years it has certainly raised the quality bar.
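A standard way to declare the single canonical version of a page is the `rel="canonical"` link element, which goes in the `<head>` of every duplicate variant. A minimal sketch follows; the helper function and the URL are illustrative assumptions, not part of the article:

```python
# Build a rel="canonical" link element pointing duplicate pages
# at the one preferred URL (the URL here is a made-up example).
def canonical_link(preferred_url: str) -> str:
    return f'<link rel="canonical" href="{preferred_url}" />'

tag = canonical_link("https://example.com/shoes")
print(tag)  # place this in the <head> of each duplicate variant
```

With this element in place, search engines are told which version of the page should receive the ranking signals, so printer-friendly or parameterized variants no longer compete with the original.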
For the best results, stick to original web content and publish it on just one page of your website. This will give you optimized results, literally. It is particularly true if you have a young or new site and are building it one page at a time: you will get not only more traffic but better rankings as well. You can, of course, get creative and repackage existing content for reuse. As a rule of thumb, however, it is always better to use original content.
Peter A. Liefer II