Schedule a crawl

Scheduling a crawl allows you to run it on a regular basis. The results of each scheduled run appear under the "Crawls" tab, just as if the crawl had been triggered manually.

Running crawls regularly helps you keep track of how your website evolves and stay informed of regressions, new issues, and so on.

[Screenshot: setting up a scheduled crawl]

A scheduled crawl is configured with the same settings as a manual crawl. In addition, you choose the frequency of the crawl (see the sketch after this list), which can be set to:

  • daily
  • weekly
  • or monthly
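
These frequencies are simply recurrence rules. As a purely illustrative sketch (this is not redirection.io's implementation, API, or configuration format), here is how the three options map onto cron-style schedules; the times of day are arbitrary assumptions, since redirection.io handles the actual scheduling for you:

```python
# Illustrative only: the recurrence implied by each crawl frequency,
# expressed as cron schedules. The 03:00 run time is an arbitrary
# assumption; redirection.io manages the real schedule internally.
CRON_BY_FREQUENCY = {
    "daily":   "0 3 * * *",  # every day at 03:00
    "weekly":  "0 3 * * 1",  # every Monday at 03:00
    "monthly": "0 3 1 * *",  # on the 1st of each month at 03:00
}

def describe(frequency: str) -> str:
    """Return a human-readable recurrence for a scheduled crawl frequency."""
    if frequency not in CRON_BY_FREQUENCY:
        raise ValueError(f"unknown frequency: {frequency!r}")
    return f"{frequency} -> cron '{CRON_BY_FREQUENCY[frequency]}'"

for freq in ("daily", "weekly", "monthly"):
    print(describe(freq))
```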

Once a crawl is finished, you can analyze the results.

You can be notified when a crawl finishes by setting up a notification channel. For weekly or monthly crawls, this is a convenient way to receive the results directly in your inbox and keep an eye on how your website evolves.
