author    bnewbold <bnewbold@robocracy.org>    2023-05-07 12:19:56 -0700
committer GitHub <noreply@github.com>    2023-05-07 12:19:56 -0700
commit    6d9e23b1be92f00304ca267b49a8339e9b505ee7 (patch)
tree      8696a178be0afa1655439b1fcda0326a82c69f4c /bskyweb/static
parent    0c604ff1c22a7e12be358fbf5eda6d72259632af (diff)
download  voidsky-6d9e23b1be92f00304ca267b49a8339e9b505ee7.tar.zst
bskyweb: update robots.txt (#595)
This makes crawling more explicitly allowed and communicates
expectations.

If we ever end up with "expensive" routes on this service, we will want
to add Crawl-Delay.
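
(For context: Crawl-Delay is a non-standard but widely recognized robots.txt
directive, interpreted as a per-request delay in seconds by some crawlers.
A minimal sketch of what that future change might look like; the delay value
below is illustrative and not part of this commit:

    User-Agent: *
    Crawl-Delay: 5
    Allow: /
)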
Diffstat (limited to 'bskyweb/static')
-rw-r--r--    bskyweb/static/robots.txt    10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/bskyweb/static/robots.txt b/bskyweb/static/robots.txt
index d3475984e..4f8510d18 100644
--- a/bskyweb/static/robots.txt
+++ b/bskyweb/static/robots.txt
@@ -1 +1,9 @@
-# hello friends!
+# Hello Friends!
+# If you are considering bulk or automated crawling, you may want to look in
+# to our protocol (API), including a firehose of updates. See: https://atproto.com/
+
+# By default, may crawl anything on this domain. HTTP 429 ("backoff") status
+# codes are used for rate-limiting. Up to a handful of concurrent requests should
+# be ok.
+User-Agent: *
+Allow: /
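
The new robots.txt comments describe the service's rate-limiting behavior:
crawlers are allowed everywhere, but should back off when they receive HTTP 429.
As a hedged illustration (not part of this commit or the bskyweb codebase), a
polite crawler in Go might honor that like the sketch below; the URL, retry
count, and backoff values are assumptions.

// fetchWithBackoff retries a GET on HTTP 429, honoring Retry-After when present.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func fetchWithBackoff(url string, maxRetries int) (*http.Response, error) {
	for attempt := 0; attempt <= maxRetries; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			// Success or a non-rate-limit error; the caller inspects the status.
			return resp, nil
		}
		// Rate-limited: close this response and wait before retrying.
		resp.Body.Close()
		wait := time.Duration(attempt+1) * 2 * time.Second // simple linear backoff
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, convErr := strconv.Atoi(s); convErr == nil {
				wait = time.Duration(secs) * time.Second
			}
		}
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("giving up on %s after %d retries", url, maxRetries)
}

func main() {
	resp, err := fetchWithBackoff("https://bsky.app/robots.txt", 3)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}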