What is robots.txt?
The robots.txt file, which implements the robots exclusion protocol, is a text file that tells web robots (most often search engine crawlers) which pages on your site to crawl and which pages not to crawl.
In practice, a robots.txt file indicates whether certain user agents (web-crawling software) may crawl particular parts of a website. These crawl instructions "disallow" or "allow" the behavior of certain (or all) user agents.
The basic format is:

User-agent: [user-agent name]
Disallow: [URL string not to be crawled]
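As a concrete illustration of this format, a minimal sketch of a robots.txt file might look like the following (the paths and crawler name are hypothetical examples, not part of the original text). It blocks every crawler from a private directory, blocks one named crawler entirely, and leaves everything else crawlable:

User-agent: *
Disallow: /private/

User-agent: ExampleBot
Disallow: /

Each User-agent line starts a new group of rules, and a Disallow line with just "/" blocks the entire site for that agent, while an empty Disallow value (or no matching rule) permits crawling.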