When it comes to website management, controlling the pages indexed by search engines is crucial to maintaining a clean, effective and efficient web presence. The indexation of a page refers to the process by which search engines like Google include that page in their search results. While ensuring that key pages are indexed is vital for visibility and traffic, there are instances where you might want to prevent certain pages or folders from being indexed.
Not all pages on a website are meant to be found via search engines. Pages under construction, administrative pages, duplicate content, or private sections of a site should remain hidden from search engine results to maintain the quality and relevance of your indexed content. By preventing these pages from being indexed, you can protect sensitive information, improve user experience, and enhance your overall SEO strategy.
What does this redirection.io recipe do?
The 'Control pages indexation' recipe is designed to help you control the indexation of specific pages or entire folders on your website without the need for developer intervention. By adding a customizable "X-Robots-Tag" response header, this recipe allows you to easily manage the visibility of your content in search engine results.
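For illustration, here is a minimal sketch of what the result could look like once such a rule is published. It uses Python with the third-party requests library and a hypothetical example.com URL (the path and the "noindex" status are assumptions for the example, not part of the recipe itself):

```python
import requests

# Hypothetical page covered by a rule configured with the "noindex" status
response = requests.get("https://www.example.com/drafts/some-page")

# The recipe adds the header to the HTTP response, so it should appear here
print(response.headers.get("X-Robots-Tag"))  # expected: "noindex"
```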
Understanding Indexation Status
When installing the 'Control pages indexation' recipe, you can choose from various indexation status options to define how search engines should treat the specified pages or folders. Here are some common indexation status options and their implications:
- noindex: This directive tells search engines not to include the page in their index. It is useful for pages that are not meant to be found via search engines, such as temporary pages or staging environments.
- nofollow: This directive instructs search engines not to follow the links on the page. It is useful for preventing search engines from crawling certain pages or sections of your site, thereby controlling the flow of link equity.
- noindex, nofollow: This combination of directives tells search engines not to index the page and not to follow the links on it. It is useful for pages that should remain hidden from search engines and whose links should not pass link equity.
- noarchive: This directive prevents search engines from storing a cached copy of the page. It is useful for pages that contain sensitive or frequently updated content that should not be stored in search engine caches.
- nosnippet: This directive prevents search engines from displaying a snippet (preview) of the page in search results. It is useful for pages that contain sensitive information or are not meant to be previewed, and it helps control how your content is presented in search results.
- noimageindex: This directive prevents search engines from indexing images on the page. It is useful for pages that contain images that should not be included in image search results.
- notranslate: This directive prevents search engines from offering translation services for the page. It is useful for pages that contain content that should not be translated or localized.
If multiple indexation directives are used (with several response headers, for example), the most restrictive directive takes precedence. For example, if a page has both a noindex and a nofollow directive, search engines will honor the noindex directive and not index the page. If you would rather combine directives, you can use a single header value that includes all the desired options separated by commas (e.g., noindex, nofollow, noarchive).
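To make the combination rule concrete, here is a small illustrative Python helper (not part of redirection.io; purely a sketch) that merges one or more X-Robots-Tag header values into the set of directives a crawler would effectively apply:

```python
def effective_directives(header_values):
    """Merge one or more X-Robots-Tag header values into the set of
    directives a crawler would apply."""
    directives = set()
    for value in header_values:
        for token in value.split(","):
            token = token.strip().lower()
            if token:
                directives.add(token)
    return directives

# Two separate headers...
print(sorted(effective_directives(["noindex", "nofollow"])))
# ...are equivalent to one combined, comma-separated header value:
print(sorted(effective_directives(["noindex, nofollow"])))
# Both print: ['nofollow', 'noindex']
```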
Complete documentation of the X-Robots-Tag directives can be found on the Google Developers website.
For more advanced SEO strategies and tools, be sure to explore our other "SEO" recipes designed to help you optimize your website effortlessly.
How to install this recipe on my website with redirection.io?
Installing this recipe on your website requires the following steps:
- Enter the Path of the Folder or Page for Which You Want to Control the Indexation Status: Begin by specifying the path of the folder or the individual page that you want to exclude from search engine indexing. This can be a specific URL or a directory path within your website.
- Choose the Indexation Status: Select the appropriate indexation status for your needs. Available options include:
- noindex: Prevents the page from being indexed.
- nofollow: Ensures that links on the page are not followed.
- noindex, nofollow: Combines both 'noindex' and 'nofollow' directives.
- noarchive: Stops search engines from storing a cached copy of the page.
- nosnippet: Prevents the display of a snippet or preview in search results.
- noimageindex: Prevents images on the page from being indexed.
- notranslate: Stops search engines from offering translation options for the page.
- Click on the "Install on My Website" Button: After specifying the path and selecting the desired indexation status, click the "Install on My Website" button to initiate the process. This action will create the necessary rule to apply the selected X-Robots-Tag header. The rule will remain in "draft" mode until you review and publish it.
- Review the Created Rule: Once the rule is created, review it to ensure it meets your requirements. You can edit the rule if necessary to fine-tune the settings according to your preferences.
- Publish It on Your Website: Finally, publish the rule on your website to implement the changes. This activates the X-Robots-Tag header and configures how search engines handle the indexation of the targeted pages.
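Once the rule is published, you may want to confirm that the header is actually served. Here is a simple verification sketch in Python, again using the third-party requests library and hypothetical URLs (substitute the paths your rule actually covers):

```python
import requests

# Hypothetical URLs covered by the published rule
pages = [
    "https://www.example.com/private/report",
    "https://www.example.com/private/archive",
]

for url in pages:
    response = requests.get(url)
    tag = response.headers.get("X-Robots-Tag", "<missing>")
    print(f"{url} -> X-Robots-Tag: {tag}")
```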