Robots.txt: Difference between revisions

{{Short description|Filename used to indicate portions for web crawling}}
{{Lowercase title}}
{{Selfref|For Wikipedia's robots.txt file, see https://en.wikipedia.org/robots.txt.}}
{{Pp-pc|small=yes}}
{{Infobox technology standard
| title            = robots.txt
| website          = {{URL|https://robotstxt.org}}, {{URL|https://datatracker.ietf.org/doc/html/rfc9309|RFC 9309}}
}}
'''robots.txt''' is the [[filename]] used for implementing the '''Robots Exclusion Protocol''', a standard used by [[website]]s to indicate to visiting [[web crawler]]s and other [[Internet bot|web robots]] which portions of the website they are allowed to visit.


The "robots.txt" file can be used in conjunction with [[sitemaps]], another robot inclusion standard for websites.
The "robots.txt" file can be used in conjunction with [[sitemaps]], another robot inclusion standard for websites.
Search engines use crawlers (bots) to index website content. Without guidance, these bots may crawl unnecessary or irrelevant pages. The robots.txt file helps control which pages search engines should or should not crawl and index.<ref>{{cite web |url=https://www.techxgurus.com/2025/02/what-is-robotstxt-file-its-importance.html|title=What is RobotsTxt File: Its Importance|work=TechxGurus |access-date=2025-09-16}}</ref>


==History==

==Standard==
A site owner wishing to give instructions to web robots places a text file called {{mono|robots.txt}} in the root of the web site hierarchy (e.g. {{mono|<nowiki>https://www.example.com/robots.txt</nowiki>}}). This text file contains the instructions in a specific format (see examples below). Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the [[website]]. If this file does not exist, web robots assume that the website owner does not wish to place any limitations on crawling the entire site.


A robots.txt file contains instructions for bots indicating which web pages they can and cannot access. Robots.txt files are particularly important for web crawlers from search engines such as Google.
A robots.txt file on a website will function as a request that specified robots ignore specified files or directories when crawling a site. This might be, for example, out of a preference for privacy from search engine results, or the belief that the content of the selected directories might be misleading or irrelevant to the categorization of the site as a whole, or out of a desire that an application only operates on certain data. Links to pages listed in robots.txt can still appear in search results if they are linked to from a page that is crawled.<ref>{{cite web |url=https://www.youtube.com/watch?v=KBdEwpRQRD0#t=196s |title=Uncrawled URLs in search results |publisher=YouTube |date=Oct 5, 2009 |access-date=2013-12-29 |archive-url=https://web.archive.org/web/20140106222500/http://www.youtube.com/watch?v=KBdEwpRQRD0#t=196s |archive-date=2014-01-06 |url-status=live }}</ref>


A robots.txt file covers one [[Same origin policy|origin]]. For websites with multiple [[subdomain]]s, each subdomain must have its own robots.txt file. If {{mono|example.com}} had a robots.txt file but {{mono|foo.example.com}} did not, the rules that would apply for {{mono|example.com}} would not apply to {{mono|foo.example.com}}. In addition, each [[List of URI schemes|URI scheme]] and [[Port (computer networking)|port]] needs its own robots.txt file; {{mono|<nowiki>http://example.com/robots.txt</nowiki>}} does not apply to pages under {{mono|<nowiki>http://example.com:8080/</nowiki>}} or {{mono|<nowiki>https://example.com/</nowiki>}}.


==Compliance==
The robots.txt protocol is widely complied with by bot operators.<ref name="Verge"/>
 
The robots.txt played a role in the 1999 legal case of ''[[eBay v. Bidder's Edge]]'',<ref name=":1">{{Cite news |last= |first= |date=2000-07-31 |title=EBay Fights Spiders on the Web |url=https://www.wired.com/2000/07/ebay-fights-spiders-on-the-web/ |access-date=2024-08-02 |work=[[Wired (magazine)|Wired]] |language=en-US |issn=1059-1028}}</ref> where eBay attempted to block a bot that did not comply with robots.txt, and in May 2000 a court ordered the company operating the bot to stop crawling eBay's servers using any automatic means, by [[Injunction|legal injunction]] on the basis of [[Trespass to chattels|trespassing]].<ref name="case">{{cite court|litigants=eBay v. Bidder's Edge|vol=100|reporter=F. Supp. 2d|opinion=1058|pinpoint=|court=[[N.D. Cal.]]|date=2000|quote=|url=http://www.cand.uscourts.gov/cand/tentrule.nsf/3979517dd11390ce8825690a007c1b9e/d0fc1406324de0cd882568e90081ebf4/$FILE/Ebay.pdf|archive-url=https://web.archive.org/web/20000817173849/http://www.cand.uscourts.gov/cand/tentrule.nsf/3979517dd11390ce8825690a007c1b9e/d0fc1406324de0cd882568e90081ebf4/$FILE/Ebay.pdf|url-status=dead|accessdate=2000-08-17}}</ref><ref>{{Cite web |last=Hoffmann |first=Jay |date=2020-09-15 |title=Chapter 4: Search |url=https://thehistoryoftheweb.com/book/search/ |access-date=2024-08-02 |website=The History of the Web |language=en-US}}</ref><ref name=":1" /> Bidder's Edge appealed the ruling, but agreed in March 2001 to drop the appeal, pay an undisclosed amount to eBay, and stop accessing eBay's auction information.<ref>{{cite web |last=Berry |first=Jahna |date=July 24, 2001 |title=Robots in the Hen House |url=http://www.law.com/regionals/ca/stories/edt0723_ip_robots.shtml |archive-url=https://web.archive.org/web/20110608004415/http://www.law.com/regionals/ca/stories/edt0723_ip_robots.shtml |archive-date=2011-06-08 |accessdate=June 20, 2015 |work=law.com}}</ref><ref>{{cite web |title=EBay, Bidder's Edge Settle Suits on Web Access |url=https://www.latimes.com/archives/la-xpm-2001-mar-02-fi-32241-story.html |access-date=June 20, 2015 |work=latimes}}</ref>
 
In the 2007 case ''Healthcare Advocates v. Harding'', a law firm was sued for accessing protected web pages archived via [[Wayback Machine|The Wayback Machine]], despite robots.txt rules excluding those pages from the archive. A [[Pennsylvania]] court ruled that "in this situation, the robots.txt file qualifies as a technological measure" under the [[Digital Millennium Copyright Act|DMCA]]. Due to a malfunction at the Internet Archive, the Harding firm could temporarily access these pages from the archive, and thus the court found that "the Harding firm did not circumvent the protective measure".<ref>{{Cite news |date=Aug 2, 2007 |title=Use of web archive was not hacking, says US court |url=https://www.theregister.com/2007/08/02/healthcare_advocates_suit/ |access-date=Oct 22, 2025 |work=[[The Register]]}}</ref><ref>{{Cite web |date=July 20, 2007 |title=Memorandum - Healthcare Advocates v Harding at all |url=https://www.govinfo.gov/content/pkg/USCOURTS-paed-2_05-cv-03524/pdf/USCOURTS-paed-2_05-cv-03524-0.pdf |access-date=Oct 22, 2025 |website=govinfo.gov}}</ref><ref>{{Cite web |date=July 20, 2007 |title=Healthcare Advocates, Inc. v. Harding, Earley, Follmer & Frailey |url=https://www.courtlistener.com/opinion/1614840/healthcare-advocates-inc-v-harding-earley-follmer-frailey |access-date=2025-10-23 |website=www.courtlistener.com}}</ref>
 
In the 2013 case ''[[Associated Press v. Meltwater U.S. Holdings, Inc.]]'', the Associated Press sued Meltwater for [[copyright infringement]] and misappropriation over copying of AP news items. Meltwater claimed [[fair use]], arguing that it did not require a license because the content was freely available and not protected by robots.txt. The court decided in March 2013 that "Meltwater’s copying is not protected by the fair use doctrine", mentioning among several factors that "failure […] to employ the robots.txt protocol did not give Meltwater […] license to copy and publish AP content".<ref>{{Cite web |date=March 21, 2013 |title=Associated Press v. Meltwater U.S. Holdings, Inc. |url=https://www.courtlistener.com/opinion/8723643/associated-press-v-meltwater-us-holdings-inc/ |access-date=2025-10-23 |website=www.courtlistener.com}}</ref>


===Search engines===
Some major [[search engine]]s following this standard include Ask,<ref name="ask-webmasters">{{cite web |title=About Ask.com: Webmasters |url=http://about.ask.com/docs/about/webmasters.shtml |website=About.ask.com |access-date=16 February 2013 |archive-date=27 January 2013 |archive-url=https://web.archive.org/web/20130127134025/http://about.ask.com/docs/about/webmasters.shtml |url-status=live }}</ref> AOL,<ref name="about-aol-search">{{cite web |title=About AOL Search |url=http://search.aol.com/aol/about |website=Search.aol.com |access-date=16 February 2013 |archive-date=13 December 2012 |archive-url=https://web.archive.org/web/20121213134546/http://search.aol.com/aol/about |url-status=dead }}</ref> Baidu,<ref name="baidu-spider">{{cite web |title=Baiduspider |url=http://www.baidu.com/search/spider_english.html |website=Baidu.com |access-date=16 February 2013 |archive-date=6 August 2013 |archive-url=https://web.archive.org/web/20130806131031/http://www.baidu.com/search/spider_english.html |url-status=live }}</ref> Bing,<ref name="bing-blog-robots">{{cite web|url=https://blogs.bing.com/webmaster/2008/06/03/robots-exclusion-protocol-joining-together-to-provide-better-documentation/|title=Robots Exclusion Protocol: joining together to provide better documentation|website=Blogs.bing.com|date=3 June 2008 |archive-url=https://web.archive.org/web/20140818025412/http://blogs.bing.com/webmaster/2008/06/03/robots-exclusion-protocol-joining-together-to-provide-better-documentation/|archive-date=2014-08-18|url-status=live|access-date=16 February 2013}}</ref>  DuckDuckGo,<ref name="duckduckgo-bot">{{cite web|url=https://duckduckgo.com/duckduckbot|website=DuckDuckGo.com|title=DuckDuckGo Bot|access-date=25 April 2017|archive-date=16 February 2017|archive-url=https://web.archive.org/web/20170216043103/https://duckduckgo.com/duckduckbot|url-status=live}}</ref> Kagi,<ref name="KagiBot">{{cite web|url=https://kagi.com/bot|website=Kagi Search|title=Kagi Search KagiBot|access-date=20 November 2024|archive-date=12 April 2024|archive-url=https://web.archive.org/web/20240412192855/https://kagi.com/bot|url-status=live}}</ref> Google,<ref name="google-webmasters-spec">{{cite web |url=https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt |title=Webmasters: Robots.txt Specifications |work=Google Developers |access-date=16 February 2013 |archive-url=https://web.archive.org/web/20130115214137/https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt |archive-date=2013-01-15 |url-status=live }}</ref> Yahoo!,<ref name="yahoo-search-is-bing">{{cite web |url=http://help.yahoo.com/kb/index?page=content&y=PROD_SRCH&locale=en_US&id=SLN2217&impressions=true |title=Submitting your website to Yahoo! Search |access-date=16 February 2013 |archive-url=https://web.archive.org/web/20130121035801/http://help.yahoo.com/kb/index?page=content&y=PROD_SRCH&locale=en_US&id=SLN2217&impressions=true |archive-date=2013-01-21 |url-status=live }}</ref> and Yandex.<ref name="yandex-robots">{{cite web |url=http://help.yandex.com/webmaster/?id=1113851 |title=Using robots.txt |website=Help.yandex.com |access-date=16 February 2013 |archive-url=https://web.archive.org/web/20130125040017/http://help.yandex.com/webmaster/?id=1113851 |archive-date=2013-01-25 |url-status=live }}</ref>


===Archival sites===

GPTBot complies with the robots.txt standard and gives advice to web operators about how to disallow it, but ''[[The Verge]]''{{'}}s David Pierce said this only began after "training the underlying models that made it so powerful". Also, some bots are used both for search engines and artificial intelligence, and it may be impossible to block only one of these options.<ref name="Verge"/> ''[[404 Media]]'' reported that companies like [[Anthropic]] and [[Perplexity.ai]] circumvented robots.txt by renaming or spinning up new scrapers to replace the ones that appeared on popular [[blocklist]]s.<ref>{{Cite web |last=Koebler |first=Jason |date=2024-07-29 |title=Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones) |url=https://www.404media.co/websites-are-blocking-the-wrong-ai-scrapers-because-ai-companies-keep-making-new-ones/ |access-date=2024-07-29 |website=404 Media}}</ref>
In 2025, the nonprofit [[RSL Collective]] announced the launch of the [[Really Simple Licensing]] (RSL) open content licensing standard, allowing web publishers to set terms for AI bots in their robots.txt files. Participating companies at launch included Medium, [[Reddit]], and [[Yahoo]].<ref name="tc-10sep2025">{{cite news |last1=Brandom |first1=Russell |title=RSS co-creator launches new protocol for AI data licensing |url=https://techcrunch.com/2025/09/10/rss-co-creator-launches-new-protocol-for-ai-data-licensing/ |access-date=September 10, 2025 |work=[[TechCrunch]] |date=September 10, 2025}}</ref><ref name="verge-10sep2025">{{cite news |last1=Roth |first1=Emma |title=The web has a new system for making AI companies pay up |url=https://www.theverge.com/news/775072/rsl-standard-licensing-ai-publishing-reddit-yahoo-medium |access-date=September 10, 2025 |work=[[The Verge]] |date=September 10, 2025}}</ref><ref name="eg-10sep2025">{{cite news |last1=Shanklin |first1=Will |title=Reddit, Yahoo, Medium and more are adopting a new licensing standard to get compensated for AI scraping |url=https://www.engadget.com/ai/reddit-yahoo-medium-and-more-are-adopting-a-new-licensing-standard-to-get-compensated-for-ai-scraping-180946671.html |access-date=September 10, 2025 |work=[[Engadget]] |date=September 10, 2025}}</ref>


==Security==
Despite the use of the terms ''allow'' and ''disallow'', the protocol is purely advisory and relies on the compliance of the [[web robot]]; it cannot enforce any of what is stated in the file.<ref>{{cite web |title=Block URLs with robots.txt: Learn about robots.txt files |url=https://support.google.com/webmasters/answer/6062608 |access-date=2015-08-10 |archive-url=https://web.archive.org/web/20150814013400/https://support.google.com/webmasters/answer/6062608 |archive-date=2015-08-14 |url-status=live }}</ref> Malicious web robots are unlikely to honor robots.txt; some may even use the robots.txt as a guide to find disallowed links and go straight to them. While this is sometimes claimed to be a security risk,<ref>{{cite web |url=https://www.theregister.co.uk/2015/05/19/robotstxt/ |title=Robots.txt tells hackers the places you don't want them to look |work=The Register |access-date=August 12, 2015 |archive-url=https://web.archive.org/web/20150821063759/http://www.theregister.co.uk/2015/05/19/robotstxt/ |archive-date=2015-08-21 |url-status=live }}</ref> this sort of ''[[security through obscurity]]'' is discouraged by standards bodies. The [[National Institute of Standards and Technology]] (NIST) in the United States specifically recommends against this practice: "System security should not depend on the secrecy of the implementation or its components."<ref>{{cite journal |last1=Scarfone |first1=K. A. |last2=Jansen |first2=W. |last3=Tracy |first3=M. |date=July 2008 |title=Guide to General Server Security |url=http://csrc.nist.gov/publications/nistpubs/800-123/SP800-123.pdf |url-status=live |journal=National Institute of Standards and Technology |doi=10.6028/NIST.SP.800-123 |archive-url=https://web.archive.org/web/20111008115412/http://csrc.nist.gov/publications/nistpubs/800-123/SP800-123.pdf |archive-date=2011-10-08 |access-date=August 12, 2015}}</ref> In the context of robots.txt files, security through obscurity is not recommended as a security technique.<ref>{{cite book |author=Sverre H. Huseby |title=Innocent Code: A Security Wake-Up Call for Web Programmers |publisher=John Wiley & Sons |year=2004 |pages=91–92 |isbn=9780470857472 |url=https://books.google.com/books?id=RjVjgPQsKogC&pg=PA92 |access-date=2015-08-12 |archive-url=https://web.archive.org/web/20160401193437/https://books.google.com/books?id=RjVjgPQsKogC&pg=PA92 |archive-date=2016-04-01 |url-status=live }}</ref>


==Alternatives==

==Examples==
This example tells all robots that they can visit all files because the wildcard <code>*</code> stands for all robots and the <code>Disallow</code> directive has no value, meaning no pages are disallowed. Search engine giant Google open-sourced their robots.txt parser,<ref>{{cite web |url=https://github.com/google/robotstxt |title=Google Robots.txt Parser and Matcher Library |website=[[GitHub]] |access-date=April 13, 2025}}</ref> and recommends testing and validating rules on the robots.txt file using community-built testers such as Tame the Bots <ref>{{cite web |url=https://tamethebots.com/tools/robotstxt-checker |title=Robots.txt Testing & Validator Tool - Tame the Bots |access-date=April 13, 2025}}</ref> and Real Robots Txt.<ref>{{cite web |url=https://www.realrobotstxt.com/ |title=Robots.txt parser based on Google's open source parser from Will Critchlow, CEO of SearchPilot |access-date=April 13, 2025}}</ref>


All other files in the specified directory will be processed.

This example tells one specific robot to stay out of a website:

It would not prevent the crawling of <code>/something/foo/else</code>, as that would not match the pattern.


The wildcard <code>*</code> allows greater flexibility but may not be recognized by all crawlers, although it is part of the Robots Exclusion Protocol RFC.<ref>{{Cite report |url=https://www.rfc-editor.org/rfc/rfc9309.html#name-special-characters |title=Robots Exclusion Protocol |last1=Koster |first1=Martijn |last2=Illyes |first2=Gary |last3=Zeller |first3=Henner |last4=Sassman |first4=Lizzi |date=September 2022 |publisher=Internet Engineering Task Force |issue=RFC 9309}}</ref>


A wildcard at the end of a rule has no practical effect, since matching any trailing characters is already the standard behaviour.

===Sitemap===
Some crawlers support a <code>Sitemap</code> directive, allowing multiple [[Sitemaps]] in the same <samp>robots.txt</samp> in the form <code>Sitemap: ''full-url''</code>:<ref>{{cite web |url=http://ysearchblog.com/2007/04/11/webmasters-can-now-auto-discover-with-sitemaps/ |title=Yahoo! Search Blog - Webmasters can now auto-discover with Sitemaps |access-date=2009-03-23 |archive-url=https://web.archive.org/web/20090305061841/http://ysearchblog.com/2007/04/11/webmasters-can-now-auto-discover-with-sitemaps/ |archive-date=2009-03-05 |url-status=dead }}</ref><ref>{{cite web |url=https://commoncrawl.org/faq|title=FAQ - Common Crawl |access-date=2025-05-26 |quote= How can I ensure the Common Crawl CCBot can crawl my site effectively? The crawler supports the Sitemap Protocol and utilizes any Sitemap announced in the robots.txt file.}}</ref>
<pre>Sitemap: http://www.example.com/sitemap.xml</pre>


===Universal "*" match===
The ''Robot Exclusion Standard'' does not mention the "*" character in the <code>Disallow:</code> statement.<ref>{{cite web |url=https://developers.google.com/search/reference/robots_txt?hl=en |title=Robots.txt Specifications |website=Google Developers |access-date=February 15, 2020 |archive-date=November 2, 2019 |archive-url=https://web.archive.org/web/20191102192623/https://developers.google.com/search/reference/robots_txt?hl=en |url-status=live }}</ref><!-- Please note that Google updated their code to match the standard on On July 1, 2019. References older than that may contain old, obsolete information about how Google behaves -->
===Content-Signal===
[[Cloudflare]] introduced <code>Content-Signal</code><ref>{{cite web |url=https://contentsignals.org/ |title=ContentSignals website |access-date=2025-09-30 |archive-url=https://web.archive.org/web/20250929163116/https://contentsignals.org/ |archive-date=2025-09-29 |url-status=live }}</ref><ref>{{cite web |url=https://searchengineland.com/cloudflare-content-signals-462538 |title=Cloudflare offers way to block AI Overviews – will Google comply? |access-date=2025-09-30 |archive-url=https://web.archive.org/web/20250926162710/https://searchengineland.com/cloudflare-content-signals-462538|archive-date=2025-09-26 |url-status=live }}</ref> as a directive to suggest acceptable crawler behavior by type, <code>ai-train</code>, <code>ai-input</code>, and <code>search</code> with values of <code>yes</code> or <code>no</code> for each.<ref>{{cite web |url=https://blog.cloudflare.com/content-signals-policy/ |title=Giving users choice with Cloudflare's new Content Signals Policy |access-date=2025-09-30 |archive-url=https://web.archive.org/web/20250930090609/https://blog.cloudflare.com/content-signals-policy/ |archive-date=2025-09-30 |url-status=live }}</ref>
<pre>Content-Signal: ai-train=no, search=yes, ai-input=no</pre>


==Meta tags and headers==
==See also==
{{Portal|Internet}}
{{Div col|colwidth=30em}}
* <code>[[ads.txt]]</code>, a standard for listing authorized ad sellers
* <code>[[security.txt]]</code>, a file to describe the process for security researchers to follow in order to report security vulnerabilities
* ''[[eBay v. Bidder's Edge]]''
* ''[[hiQ Labs v. LinkedIn]]''
* [[Automated Content Access Protocol]] – A failed proposal to extend robots.txt
* [[BotSeer]] – Now inactive search engine for robots.txt files
* [[noindex]]
* [[Perma.cc]]
* [[Really Simple Licensing]]
* [[Sitemaps]]
* [[Spider trap]]
* [[Web archiving]]
* [[Web crawler]]
{{Div col end}}

==References==

Latest revision as of 13:25, 16 November 2025

robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.

The standard, developed in 1994, relies on voluntary compliance. Malicious bots can use the file as a directory of which pages to visit, though standards bodies discourage countering this with security through obscurity. Some archival sites ignore robots.txt. The standard was used in the 1990s to mitigate server overload. In the 2020s, websites began denying bots that collect information for generative artificial intelligence.

The "robots.txt" file can be used in conjunction with sitemaps, another robot inclusion standard for websites.

Search engines use crawlers (bots) to index website content. Without guidance, these bots may crawl unnecessary or irrelevant pages. The robots.txt file helps control which pages search engines should or should not crawl and index.[1]

History

The standard was proposed by Martijn Koster,[2][3] when working for Nexor[4] in February 1994[5] on the www-talk mailing list, the main communication channel for WWW-related activities at the time. Charles Stross claims to have provoked Koster to suggest robots.txt, after he wrote a badly behaved web crawler that inadvertently caused a denial-of-service attack on Koster's server.[6]

The standard, initially RobotsNotWanted.txt, allowed web developers to specify which bots should not access their website or which pages bots should not access. The internet was small enough in 1994 to maintain a complete list of all bots; server overload was a primary concern. By June 1994 it had become a de facto standard;[7] most bots complied, including those operated by search engines such as WebCrawler, Lycos, and AltaVista.[8]

On July 1, 2019, Google announced the proposal of the Robots Exclusion Protocol as an official standard under the Internet Engineering Task Force.[9] A proposed standard was published in September 2022 as RFC 9309.

Standard

A site owner wishing to give instructions to web robots places a text file called robots.txt in the root of the web site hierarchy (e.g. https://www.example.com/robots.txt). This text file contains the instructions in a specific format (see examples below). Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the website. If this file does not exist, web robots assume that the website owner does not wish to place any limitations on crawling the entire site.
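
As a rough illustration of this fetch-before-crawl behaviour, the following sketch uses Python's standard urllib.robotparser module; the site, path, and user-agent string are placeholders rather than anything prescribed by the protocol.

# Minimal sketch: consult robots.txt before requesting a page.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the file; a missing file is treated as "no restrictions"

if rp.can_fetch("ExampleBot", "https://www.example.com/some/page.html"):
    print("Allowed to crawl the page")
else:
    print("Disallowed by robots.txt")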

A robots.txt file contains instructions for bots indicating which web pages they can and cannot access. Robots.txt files are particularly important for web crawlers from search engines such as Google.

A robots.txt file on a website will function as a request that specified robots ignore specified files or directories when crawling a site. This might be, for example, out of a preference for privacy from search engine results, or the belief that the content of the selected directories might be misleading or irrelevant to the categorization of the site as a whole, or out of a desire that an application only operates on certain data. Links to pages listed in robots.txt can still appear in search results if they are linked to from a page that is crawled.[10]

A robots.txt file covers one origin. For websites with multiple subdomains, each subdomain must have its own robots.txt file. If example.com had a robots.txt file but foo.example.com did not, the rules that would apply for example.com would not apply to foo.example.com. In addition, each URI scheme and port needs its own robots.txt file; http://example.com/robots.txt does not apply to pages under http://example.com:8080/ or https://example.com/.
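
The origin scoping can be illustrated by deriving the robots.txt location from a page URL's scheme, host, and port; a small sketch using Python's urllib.parse (the URLs are placeholders):

# Each scheme/host/port combination is a separate origin with its own robots.txt.
from urllib.parse import urlsplit, urlunsplit

def robots_txt_url(page_url):
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_txt_url("https://example.com/page"))       # https://example.com/robots.txt
print(robots_txt_url("http://example.com:8080/page"))   # http://example.com:8080/robots.txt
print(robots_txt_url("https://foo.example.com/page"))   # https://foo.example.com/robots.txt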

Compliance

The robots.txt protocol is widely complied with by bot operators.[7]

The robots.txt file played a role in the 1999 legal case eBay v. Bidder's Edge,[11] in which eBay attempted to block a bot that did not comply with robots.txt; in May 2000 a court ordered the company operating the bot, by legal injunction on the basis of trespassing, to stop crawling eBay's servers using any automatic means.[12][13][11] Bidder's Edge appealed the ruling, but agreed in March 2001 to drop the appeal, pay an undisclosed amount to eBay, and stop accessing eBay's auction information.[14][15]

In the 2007 case Healthcare Advocates v. Harding, a law firm was sued for accessing protected web pages archived via The Wayback Machine, despite robots.txt rules excluding those pages from the archive. A Pennsylvania court ruled that "in this situation, the robots.txt file qualifies as a technological measure" under the DMCA. Due to a malfunction at the Internet Archive, the Harding firm could temporarily access these pages from the archive, and thus the court found that "the Harding firm did not circumvent the protective measure".[16][17][18]

In the 2013 case Associated Press v. Meltwater U.S. Holdings, Inc., the Associated Press sued Meltwater for copyright infringement and misappropriation over copying of AP news items. Meltwater claimed fair use, arguing that it did not require a license because the content was freely available and not protected by robots.txt. The court decided in March 2013 that "Meltwater’s copying is not protected by the fair use doctrine", mentioning among several factors that "failure […] to employ the robots.txt protocol did not give Meltwater […] license to copy and publish AP content".[19]

Search engines

Some major search engines following this standard include Ask,[20] AOL,[21] Baidu,[22] Bing,[23] DuckDuckGo,[24] Kagi,[25] Google,[26] Yahoo!,[27] and Yandex.[28]

Archival sites

Some web archiving projects ignore robots.txt. Archive Team uses the file to discover more links, such as sitemaps.[29] Co-founder Jason Scott said that "unchecked, and left alone, the robots.txt file ensures no mirroring or reference for items that may have general use and meaning beyond the website's context."[30] In 2017, the Internet Archive announced that it would stop complying with robots.txt directives.[31][7] According to Digital Trends, this followed widespread use of robots.txt to remove historical sites from search engine results, and contrasted with the nonprofit's aim to archive "snapshots" of the internet as it previously existed.[32]

Artificial intelligence

Starting in the 2020s, web operators began using robots.txt to deny access to bots collecting training data for generative AI. In 2023, Originality.AI found that 306 of the thousand most-visited websites blocked OpenAI's GPTBot in their robots.txt file and 85 blocked Google's Google-Extended. Many robots.txt files named GPTBot as the only bot explicitly disallowed on all pages. Denying access to GPTBot was common among news websites such as the BBC and The New York Times. In 2023, blog host Medium announced it would deny access to all artificial intelligence web crawlers as "AI companies have leached value from writers in order to spam Internet readers".[7]

GPTBot complies with the robots.txt standard and gives advice to web operators about how to disallow it, but The Verge's David Pierce said this only began after "training the underlying models that made it so powerful". Also, some bots are used both for search engines and artificial intelligence, and it may be impossible to block only one of these options.[7] 404 Media reported that companies like Anthropic and Perplexity.ai circumvented robots.txt by renaming or spinning up new scrapers to replace the ones that appeared on popular blocklists.[33]

In 2025, the nonprofit RSL Collective announced the launch of the Really Simple Licensing (RSL) open content licensing standard, allowing web publishers to set terms for AI bots in their robots.txt files. Participating companies at launch included Medium, Reddit, and Yahoo.[34][35][36]

Security

Despite the use of the terms allow and disallow, the protocol is purely advisory and relies on the compliance of the web robot; it cannot enforce any of what is stated in the file.[37] Malicious web robots are unlikely to honor robots.txt; some may even use the robots.txt as a guide to find disallowed links and go straight to them. While this is sometimes claimed to be a security risk,[38] this sort of security through obscurity is discouraged by standards bodies. The National Institute of Standards and Technology (NIST) in the United States specifically recommends against this practice: "System security should not depend on the secrecy of the implementation or its components."[39] In the context of robots.txt files, security through obscurity is not recommended as a security technique.[40]

Alternatives

Many robots also pass a special user-agent to the web server when fetching content.[41] A web administrator could also configure the server to automatically return failure (or pass alternative content) when it detects a connection from one of the robots.[42][43]
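
A hedged sketch of such server-side filtering, written as a minimal Python WSGI application; the blocked user-agent name "BadBot" is hypothetical, and a real deployment would normally do this in the web server or a firewall rule instead:

# Minimal WSGI app that returns a failure response to a named robot user-agent.
from wsgiref.simple_server import make_server

BLOCKED_AGENTS = ("BadBot",)  # hypothetical robot name

def app(environ, start_response):
    agent = environ.get("HTTP_USER_AGENT", "")
    if any(bad in agent for bad in BLOCKED_AGENTS):
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"Automated access denied.\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Regular content.\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()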

Some sites, such as Google, host a humans.txt file that displays information meant for humans to read.[44] Some sites such as GitHub redirect humans.txt to an About page.[45]

Previously, Google had a joke file hosted at /killer-robots.txt instructing the Terminator not to kill the company founders Larry Page and Sergey Brin.[46][47]

Examples

This example tells all robots that they can visit all files because the wildcard * stands for all robots and the Disallow directive has no value, meaning no pages are disallowed. Google has open-sourced its robots.txt parser[48] and recommends testing and validating rules in a robots.txt file using community-built testers such as Tame the Bots[49] and Real Robots Txt.[50]

User-agent: *
Disallow: 

This example has the same effect, allowing all files rather than prohibiting none.

User-agent: *
Allow: /

The same result can be accomplished with an empty or missing robots.txt file.
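
The equivalence can be checked with Python's urllib.robotparser, which treats both an explicit allow-all file and an empty file as permitting every URL; the URL and user-agent below are placeholders:

# Sketch: an allow-all robots.txt and an empty one give the same answer.
from urllib import robotparser

for text in ("User-agent: *\nDisallow:", ""):  # explicit allow-all vs. empty file
    rp = robotparser.RobotFileParser()
    rp.parse(text.splitlines())
    print(rp.can_fetch("AnyBot", "https://www.example.com/any/page"))  # True in both cases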

This example tells all robots to stay out of a website:

User-agent: *
Disallow: /

This example tells all robots not to enter three directories:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/

This example tells all robots to stay away from one specific file:

User-agent: *
Disallow: /directory/file.html

All other files in the specified directory will be processed.

This example tells one specific robot to stay out of a website:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
Disallow: /

This example tells two specific robots not to enter one specific directory:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
User-agent: Googlebot
Disallow: /private/

Example demonstrating how comments can be used:

# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out

It is also possible to list multiple robots with their own rules. The actual robot string is defined by the crawler. A few robot operators, such as Google, support several user-agent strings that allow a website to deny access to a subset of their services by using specific user-agent strings.[26]

Example demonstrating multiple user-agents:

User-agent: googlebot        # all Google services
Disallow: /private/          # disallow this directory

User-agent: googlebot-news   # only the news service
Disallow: /                  # disallow everything

User-agent: *                # any robot
Disallow: /something/        # disallow this directory

The use of the wildcard * in rules

The directive Disallow: /something/ blocks all files and subdirectories starting with /something/.

In contrast, using a wildcard (if supported by the crawler) allows more complex patterns for specifying which paths and files to allow or disallow from crawling. For example, Disallow: /something/*/other blocks URLs such as:

/something/foo/other
/something/bar/other

It would not prevent the crawling of /something/foo/else, as that would not match the pattern.

The wildcard * allows greater flexibility but may not be recognized by all crawlers, although it is part of the Robots Exclusion Protocol RFC.[51]

A wildcard at the end of a rule has no practical effect, since matching any trailing characters is already the standard behaviour.
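
Since wildcard handling differs between crawlers, the pattern semantics are easiest to see when written out explicitly. The sketch below is an illustrative matcher for a single rule only, implementing the "*" and end-of-match "$" behaviour described in RFC 9309; it is not a complete robots.txt parser.

# Illustrative matching of one Disallow rule against a URL path (not a full parser).
import re

def rule_matches(rule_path, url_path):
    # '*' matches any sequence of characters; '$' anchors the end of the path.
    pattern = "".join(".*" if ch == "*" else "$" if ch == "$" else re.escape(ch)
                      for ch in rule_path)
    if not pattern.endswith("$"):
        pattern += ".*"  # without '$', a rule matches any path beginning with it
    return re.match(pattern, url_path) is not None

print(rule_matches("/something/*/other", "/something/foo/other"))  # True
print(rule_matches("/something/*/other", "/something/foo/else"))   # False
print(rule_matches("/something/", "/something/foo/else"))          # True (prefix match)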

Nonstandard extensions

Crawl-delay directive

The crawl-delay value is supported by some crawlers to throttle their visits to the host. Since this value is not part of the standard, its interpretation is dependent on the crawler reading it. It is used when repeated bursts of visits from bots are slowing down the host. Yandex interprets the value as the number of seconds to wait between subsequent visits.[28] Bing defines crawl-delay as the size of a time window (from 1 to 30 seconds) during which BingBot will access a web site only once.[52] Google ignores this directive,[53] but provides an interface in its Search Console for webmasters to control Googlebot's subsequent visits.[54]

User-agent: bingbot
Allow: /
Crawl-delay: 10
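
A crawler that chooses to honour the directive can read the value and pause between requests. A minimal sketch using Python's urllib.robotparser, which exposes the value through crawl_delay(); the URLs are placeholders:

# Sketch: respect a Crawl-delay value between requests.
import time
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse("User-agent: bingbot\nAllow: /\nCrawl-delay: 10".splitlines())

delay = rp.crawl_delay("bingbot") or 0  # None when the directive is absent
for url in ("https://www.example.com/a", "https://www.example.com/b"):
    # ... fetch url here ...
    time.sleep(delay)  # wait the requested number of seconds between visits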

Sitemap

Some crawlers support a Sitemap directive, allowing multiple Sitemaps in the same robots.txt in the form Sitemap: full-url:[55][56]

Sitemap: http://www.example.com/sitemap.xml
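
Crawlers that support the directive can collect the announced sitemap URLs; Python's urllib.robotparser (3.8 and later) exposes them through site_maps(), as in this sketch:

# Sketch: read Sitemap directives announced in a robots.txt file.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse("Sitemap: http://www.example.com/sitemap.xml\nUser-agent: *\nDisallow:".splitlines())
print(rp.site_maps())  # ['http://www.example.com/sitemap.xml'], or None if no Sitemap lines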

Universal "*" match

The Robot Exclusion Standard does not mention the "*" character in the Disallow: statement.[57]

Content-Signal

Cloudflare introduced Content-Signal[58][59] as a directive that signals acceptable crawler behavior for three types of use (ai-train, ai-input, and search), with a value of yes or no for each.[60]

Content-Signal: ai-train=no, search=yes, ai-input=no
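
Because Content-Signal is new and nonstandard, established parsers simply ignore it; a crawler that wanted to honour it would have to read the key-value pairs itself. The parsing below is an illustrative assumption about the comma-separated format shown above, not an official implementation:

# Illustrative parsing of a Content-Signal line into a dictionary (assumed format).
def parse_content_signal(line):
    _, _, value = line.partition(":")
    signals = {}
    for item in value.split(","):
        key, _, val = item.strip().partition("=")
        signals[key] = (val.strip() == "yes")
    return signals

print(parse_content_signal("Content-Signal: ai-train=no, search=yes, ai-input=no"))
# {'ai-train': False, 'search': True, 'ai-input': False}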

Meta tags and headers

In addition to root-level robots.txt files, robots exclusion directives can be applied at a more granular level through the use of Robots meta tags and X-Robots-Tag HTTP headers. The robots meta tag cannot be used for non-HTML files such as images, text files, or PDF documents. On the other hand, the X-Robots-Tag can be added to non-HTML files by using .htaccess and httpd.conf files.[61]

A "noindex" meta tag

<meta name="robots" content="noindex" />

A "noindex" HTTP response header

X-Robots-Tag: noindex

The X-Robots-Tag is only effective after the page has been requested and the server responds, and the robots meta tag is only effective after the page has loaded, whereas robots.txt is effective before the page is requested. Thus if a page is excluded by a robots.txt file, any robots meta tags or X-Robots-Tag headers are effectively ignored because the robot will not see them in the first place.[61]
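
A hedged sketch of checking the header-level directive after a page has been fetched, using only the Python standard library; the URL is a placeholder, and a real crawler would also parse any robots meta tag out of the returned HTML:

# Sketch: look for "noindex" in the X-Robots-Tag response headers of a fetched page.
import urllib.request

req = urllib.request.Request("https://www.example.com/page.html",
                             headers={"User-Agent": "ExampleBot"})
with urllib.request.urlopen(req) as resp:
    tags = resp.headers.get_all("X-Robots-Tag") or []

if any("noindex" in tag.lower() for tag in tags):
    print("The response asks not to be indexed")
else:
    print("No noindex directive in the response headers")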

Maximum size of a robots.txt file

The Robots Exclusion Protocol requires crawlers to parse at least 500 kibibytes (512,000 bytes) of a robots.txt file, and Google enforces a 500 kibibyte file size limit for robots.txt files.[62]
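
A minimal sketch of applying the size limit before parsing, assuming the file has already been downloaded as bytes:

# Sketch: parse at most 500 kibibytes (512,000 bytes) of a fetched robots.txt.
from urllib import robotparser

MAX_ROBOTS_BYTES = 500 * 1024

def parse_limited(raw_bytes):
    rp = robotparser.RobotFileParser()
    truncated = raw_bytes[:MAX_ROBOTS_BYTES]  # ignore anything beyond the limit
    rp.parse(truncated.decode("utf-8", errors="replace").splitlines())
    return rp

rp = parse_limited(b"User-agent: *\nDisallow: /private/\n")
print(rp.can_fetch("AnyBot", "https://www.example.com/private/page"))  # False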

See also

• ads.txt, a standard for listing authorized ad sellers
• security.txt, a file to describe the process for security researchers to follow in order to report security vulnerabilities
• eBay v. Bidder's Edge
• hiQ Labs v. LinkedIn
• Automated Content Access Protocol – A failed proposal to extend robots.txt
• BotSeer – Now inactive search engine for robots.txt files
• noindex
• Perma.cc
• Really Simple Licensing
• Sitemaps
• Spider trap
• Web archiving
• Web crawler
