When it comes to website management, controlling the pages indexed by search engines is crucial to maintain a clean and efficient web presence. The indexation of a page refers to the process by which search engines like Google include that page in their search results. While ensuring that key pages are indexed is vital for visibility and traffic, there are instances where you might want to prevent certain pages or folders from being indexed.
Not all pages on a website are meant to be found via search engines. Pages under construction, administrative pages, duplicate content, or private sections of a site should remain hidden from search engine results to maintain the quality and relevance of your indexed content. By preventing these pages from being indexed, you can protect sensitive information, improve user experience, and enhance your overall SEO strategy.
What does this redirection.io recipe do?
The "Control pages indexation" recipe is designed to help you control the indexation of specific pages or entire folders on your website without the need for developer intervention. By adding a customizable "X-Robots-Tag" response header, this recipe allows you to easily manage the visibility of your content in search engine results.
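In effect, the recipe attaches an X-Robots-Tag header to responses whose path matches a rule you define. As a minimal sketch (not redirection.io's actual implementation; the rule paths and directive values below are illustrative only):

```python
# Hypothetical sketch: matching a request path against X-Robots-Tag rules.
# The rule list mimics what the recipe configures for you.
RULES = [
    ("/admin/", "noindex, nofollow"),   # keep the whole admin folder out
    ("/drafts/", "noindex"),            # pages under construction
]

def x_robots_tag_for(path):
    """Return the X-Robots-Tag header value for a path, or None."""
    for prefix, directives in RULES:
        if path.startswith(prefix):
            return directives
    return None

print(x_robots_tag_for("/admin/users"))   # noindex, nofollow
print(x_robots_tag_for("/blog/post-1"))   # None
```

A page served with such a header is treated by crawlers exactly as if the same directives appeared in a robots meta tag, but without touching the page's HTML.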
Understanding indexation status
When installing this recipe, you can choose from various indexation status options to define how search engines should treat the specified pages or folders. Here are some common indexation status options and their implications:
- noindex: This directive tells search engines not to include the page in their index. It is useful for pages that are not meant to be found via search engines, such as temporary pages or staging environments.
- nofollow: This directive instructs search engines not to follow the links on the page. It is useful for preventing search engines from crawling certain pages or sections of your site, thereby controlling the flow of link equity.
- noindex, nofollow: This combination of directives tells search engines not to index the page and not to follow the links on the page. It is useful for pages that should remain hidden from search engines and should not pass link equity.
- noarchive: This directive prevents search engines from storing a cached copy of the page. It is useful for pages that contain sensitive or frequently updated content that should not be stored in search engine caches.
- nosnippet: This directive prevents search engines from displaying a snippet (preview) of the page in search results. It is useful for pages that contain sensitive information or are not meant to be previewed in search results. It helps control how your content is presented in search engines.
- noimageindex: This directive prevents search engines from indexing images on the page. It is useful for pages that contain images that should not be included in image search results.
- notranslate: This directive prevents search engines from offering translation services for the page. It is useful for pages that contain content that should not be translated or localized.
If multiple indexation directives are used (with several response headers, for example), the most restrictive directive takes precedence. For example, if a page has both a noindex and a nofollow directive, search engines will apply the noindex directive and not index the page. If you want to combine directives, you can use a single header value that includes all the desired options separated by commas (e.g., noindex, nofollow, noarchive).
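The merge behavior described above can be sketched as follows; this is an illustrative assumption about how a crawler consolidates several X-Robots-Tag headers, not a documented algorithm:

```python
# Hypothetical sketch: when several X-Robots-Tag headers are present,
# a crawler effectively honors the union of all directives it finds,
# which is why the most restrictive one ends up applying.
def merge_directives(headers):
    """Merge several X-Robots-Tag header values into one directive set."""
    directives = []
    for value in headers:
        for token in value.split(","):
            token = token.strip()
            if token and token not in directives:
                directives.append(token)
    return ", ".join(directives)

print(merge_directives(["noindex", "nofollow"]))      # noindex, nofollow
print(merge_directives(["noindex, noarchive", "noindex"]))  # noindex, noarchive
```

Sending one comma-separated header, as recommended above, produces the same result with less room for confusion.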
A complete documentation of the X-Robots-Tag directive can be found on the Google Developers website.
For more advanced SEO strategies and tools, be sure to explore our other "SEO" recipes designed to help you optimize your website effortlessly.
How to install this recipe on my website with redirection.io?
Installing this recipe on your website requires the following steps:
- Enter the path of the folder or page for which you want to control the indexation status: this can be a specific URL or a directory path within your website that you want to exclude from search engine indexing.
- Choose the indexation status: select the appropriate indexation status for your needs. Available options include:
- noindex: Prevents the page from being indexed.
- nofollow: Ensures that links on the page are not followed.
- noindex, nofollow: Combines both 'noindex' and 'nofollow' directives.
- noarchive: Stops search engines from storing a cached copy of the page.
- nosnippet: Prevents the display of a snippet or preview in search results.
- noimageindex: Prevents images on the page from being indexed.
- notranslate: Stops search engines from offering translation options for the page.
- Click on "Install on My Website": this will create a new rule in "draft" mode to apply the selected X-Robots-Tag header.
- Review the created rule: edit the rule if necessary to fine-tune the settings according to your preferences.
- Publish the rules: publish the rule to make the changes live.
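After publishing, you can confirm that the header is actually being sent. The sketch below uses a throwaway local server from Python's standard library to stand in for your site; when checking a live page, query your own domain instead:

```python
# Sketch: verify that a response carries the expected X-Robots-Tag header.
# The local server here only simulates a site with the recipe installed.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.end_headers()

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/admin/")
header = conn.getresponse().getheader("X-Robots-Tag")
print(header)  # noindex, nofollow
server.shutdown()
```

Equivalently, `curl -I` on the page's URL shows the response headers of the published page directly.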