Robots.txt is a plain-text file placed at the root of your web server that tells search engine crawlers which URLs they may or may not crawl. You can use robots.txt to keep crawlers out of specific sections of your site, to reduce crawling of duplicate content, or to request that archives not be crawled. Note that robots.txt controls crawling rather than indexing: a page blocked in robots.txt can still appear in search results if other sites link to it.
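As a minimal sketch of what such a file might look like, the example below blocks all crawlers from two illustrative sections and points them at a sitemap. The paths /admin/ and /archive/ and the sitemap URL are placeholders, not part of any standard; substitute the directories and domain of your own site.

```
# Rules for all crawlers
User-agent: *
# Keep crawlers out of a private section and an archive (illustrative paths)
Disallow: /admin/
Disallow: /archive/

# Optional: tell crawlers where the XML sitemap lives (placeholder URL)
Sitemap: https://www.example.com/sitemap.xml
```

Everything not matched by a Disallow rule remains crawlable by default, so you only need to list the paths you want crawlers to skip.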