[[File:Librarianmotoko.jpg|200px|right|thumb|Imagine Motoko Kusanagi as an archivist.]]

'''ArchiveBot''' is an [[IRC]] bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a [[WARC]] file, and then uploads that WARC to ArchiveTeam servers for eventual injection into the [https://archive.org/details/archivebot Internet Archive]'s Wayback Machine (or other archive sites).

== Details ==
To use ArchiveBot, drop by the IRC channel [http://chat.efnet.org:9090/?nick=&channels=%23archivebot&Login=Login '''#archivebot'''] on EFNet. To interact with ArchiveBot, you issue [https://archivebot.readthedocs.io/en/latest/commands.html '''commands'''] by typing them into the channel. Note that you will need channel operator (<code>@</code>) or voice (<code>+</code>) permissions in order to issue archiving jobs; please ask for assistance or leave a message describing the website you want to archive.
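
A typical interaction looks something like this (see the command reference linked above for the exact, current syntax):

<pre>!a http://example.com/       (queue a recursive crawl of everything under that URL)
!ao http://example.com/page  (save just that single page, without recursion)</pre>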


The [http://dashboard.at.ninjawedding.org/3 '''ArchiveBot dashboard'''] publicly shows the sites currently being downloaded. The [http://archivebot.at.ninjawedding.org:4567/pipelines pipeline monitor station] shows the status of deployed instances of crawlers. The [http://archive.fart.website/archivebot/viewer/ viewer] assists in browsing and searching archives.
 
Follow [https://twitter.com/archivebot @ArchiveBot] on [[Twitter]]!<ref>Formerly known as [https://twitter.com/atarchivebot @ATArchiveBot]</ref>
 
== Components ==
IRC interface
:The bot listens for commands in the IRC channel and then reports status back on the same channel. You can ask it to archive a whole website or a single webpage, check whether a URL has been saved, change the delay time between requests, or add ignore rules to avoid crawling certain web cruft. The IRC interface is collaborative, meaning anyone with permission can adjust the parameters of jobs. Note that the bot isn't a chatbot, so it will ignore you if it doesn't understand a command.
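:For example, a running job can be tuned from the channel like this (the job ID and ignore pattern are made up for illustration; exact syntax and units are in the command reference):
<pre>!ig abcdef123456 ^https?://example\.com/calendar/  (add an ignore rule to the job)
!d abcdef123456 500 1500                           (set the delay between requests, in milliseconds)
!abort abcdef123456                                (abort the job entirely)</pre>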


Dashboard
:The [http://dashboard.at.ninjawedding.org/3 '''ArchiveBot dashboard'''] is a web-based front-end displaying the URLs being downloaded by the various web crawls. Each URL line in the dashboard is categorized by its HTTP status code as a success, warning, or error, and highlighted in yellow or red accordingly. The dashboard also provides RSS feeds and [http://dashboard.at.ninjawedding.org/pending a list of pending jobs].


Backend
:The backend holds the database of all jobs and runs several maintenance tasks, such as trimming logs and posting Tweets on Twitter. The backend is the centralized portion of ArchiveBot.


Crawler
:The crawler spiders the website and writes everything it downloads into WARC files. The crawler is the distributed portion of ArchiveBot: volunteers run pipeline nodes connected to the backend, and the backend tells the pipelines what jobs to run. Once a crawl job has finished, the pipeline reports back to the backend and uploads the WARC files to the staging server. This whole process is handled by a supervisor script called a pipeline.


Staging server
:The staging server, known as [[Fortress of Solitude|Fortress of Solitude (FOS)]], is the place where all the WARC files are temporarily uploaded. Once the current batch has been approved, the files will be uploaded to the Internet Archive for consumption by the Wayback Machine.


== Source Code ==

ArchiveBot's source code can be found at https://github.com/ArchiveTeam/ArchiveBot. [[Dev|Contributions welcome]]! Any issues or feature requests may be filed at [https://github.com/ArchiveTeam/ArchiveBot/issues the issue tracker].


== People ==
The main server that controls the IRC bot, pipeline manager backend, and web dashboard is operated by [[User:yipdw|yipdw]], although a few other ArchiveTeam members were given SSH access in late 2017. The staging server, [[Fortress of Solitude|Fortress of Solitude (FOS)]], where the data sits for final checks before being moved over to the Internet Archive servers, is operated by [[User:jscott|SketchCow]]. The pipelines are operated by various volunteers around the world. Each pipeline typically runs two or three web crawl jobs at any given time.


== Volunteer to run a Pipeline ==
As of November 2017, ArchiveBot has again started accepting applications from volunteers who want to set up new pipelines. You'll need to have a machine with:


* lots of disk space (40 GB minimum / 200 GB recommended / 500 GB atypical)
* 512 MB RAM (2 GB recommended, 2 GB swap recommended)
* 10 Mb/s upload/download speeds (100 Mb/s recommended)
* long-term availability (2 months minimum)
* always-on unrestricted internet access (absolutely no firewall/proxies/censorship/ISP-injected-ads/DNS-redirection/free-cafe-wifi)


Suggestion: the $40/month DigitalOcean droplets (4 GB memory / 2 CPUs / 60 GB disk) running Ubuntu work pretty well.


Note that we currently only accept pipelines from people who have been active on ArchiveTeam for a while.

If you have a suitable server available and would like to volunteer, please review the [https://github.com/ArchiveTeam/ArchiveBot/blob/master/INSTALL.pipeline Pipeline Install] instructions. Then contact [[User:Asparagirl|Asparagirl]], [[User:astrid|astrid]], [[User:JustAnotherArchivist|JAA]], [[User:yipdw|yipdw]], or other ArchiveTeam members hanging out in #archivebot, and we can add your machine to the list of approved pipelines so that it starts processing incoming ArchiveBot jobs.
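
As a rough sketch, getting the code onto a fresh machine looks like this (the INSTALL.pipeline document is authoritative; dependencies such as wpull and their exact versions change over time):

<pre># Fetch the pipeline code
git clone https://github.com/ArchiveTeam/ArchiveBot.git
cd ArchiveBot

# Read the install instructions shipped with the repository and follow them
cat INSTALL.pipeline</pre>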
 
=== Caveats ===
As of August 2018, there are a few things you need to be aware of when operating an ArchiveBot pipeline:
 
* '''Never, ever press ^C on the pipeline.''' Use <code>touch STOP</code> in the <code>ArchiveBot/pipeline</code> directory instead to stop the pipeline gracefully.
* Please give someone who's around frequently access to the pipeline for maintenance work when you're away (e.g. holidays, busy IRL). This avoids situations where jobs or pipelines are stuck for weeks or months without anyone being able to intervene.
* Jobs that crash with an error need to be killed manually using <code>kill -9</code> (see the first example after this list).
* The log files of jobs that are aborted or crash are not uploaded to the Internet Archive. Please keep the temporary <code>tmp-wpull-*.log.gz</code> files in the pipeline directory, rename them so the filename follows the same format as the JSON file (with extension <code>.log.gz</code> instead of <code>.json</code>), and upload them to FOS manually (see the second example after this list).
** You can find the job ID for these files in the second line of the log file.
** Finding the correct filename can be a bit tricky. You can use the viewer or the [https://github.com/JustAnotherArchivist/archivebot-archives archivebot-archives] repository. Keep in mind that the timestamp in the filename should approximately match the one at the beginning of the log file, though there is usually a difference of at least a few seconds between the two (the log file timestamps being later than the filename timestamp).
** Be careful with the filename if there were multiple jobs for the same URL (i.e. the same job ID).
** [https://gist.github.com/Asparagirl/155bd3c8ee4b8ad5ed737e45bcad1a5a This public gist on GitHub] explains step by step how to find the proper log file for your crashed or killed job, how to properly rename it, and how to rsync it up to FOS.
** Contact [[User:JustAnotherArchivist|JustAnotherArchivist]] if you need help with this.
* Due to a bug somewhere deep in the network stack, connections get stuck from time to time. This causes jobs to slow down or halt entirely.
** As a workaround, you can use the [https://github.com/JustAnotherArchivist/kill-wpull-connections kill-wpull-connections] script; it requires pgrep, lsof, and gdb. Depending on the machine configuration (specifically, the value of <code>kernel.yama.ptrace_scope</code> in <code>/proc/sys/kernel/yama/ptrace_scope</code>), it may also require root/sudo privileges.
** In very rare cases, you may need to use [http://killcx.sourceforge.net/ killcx] to close the connections.
** [https://github.com/kristrev/tcp_closer tcp_closer] works even when the two methods above fail. It uses the SOCK_DESTROY kernel operation provided by Linux >= 4.5.
* Also due to a bug suspected to be in the network stack, wpull processes sometimes use a lot of RAM (and CPU); a process continuously using more than 300 MB is a likely sign. kill-wpull-connections seems to "fix" this issue too, though it can take a while (minutes, rarely even an hour or more) from running the script until the usage actually drops.
** If wpull pauses itself due to high RAM usage, try creating a swap file and forcing memory pages out to swap, as in the following snippet; wpull only checks RAM usage, not swap usage.
<pre># Create and enable a ~1 GB swap file
dd if=/dev/zero of=swapfile bs=1024 count=1024000
mkswap swapfile
swapon swapfile
# Allocate a large string to force idle memory pages out to swap
perl -e '$tmp = "a" x 999999999'
# Clean up once wpull has resumed
swapoff swapfile
rm swapfile</pre>
* Make sure that you don't have any <code>search</code> or <code>domain</code> line in <code>/etc/resolv.conf</code>; the third example after this list shows a quick check. Such lines, combined with broken <code>http://www/</code> links, have caused us to grab a number of unwanted copies of the websites of OVH and Online.net. (Cf. [https://github.com/ArchiveTeam/ArchiveBot/issues/318 this issue on GitHub].)
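
A minimal sketch of killing a crashed job by hand (the job ID and PID are made up for illustration; make sure you match the right process before killing it):

<pre># List wpull processes with their full command lines and find the crashed job's PID
pgrep -af wpull | grep abcdef123456

# Kill it hard, using the PID found above
kill -9 12345</pre>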
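
For the log files of crashed or aborted jobs, the manual upload boils down to something like this (all filenames and the rsync destination are placeholders; the gist linked above has the authoritative steps):

<pre>cd ArchiveBot/pipeline

# The job ID is on the second line of the log
zcat tmp-wpull-XXXXXX.log.gz | head -n 2

# Rename to the JSON file's naming scheme, with .log.gz in place of .json
mv tmp-wpull-XXXXXX.log.gz example.com-inf-20180815-123456-abcdef.log.gz

# rsync the renamed log up to FOS (host and path are placeholders)
rsync -av example.com-inf-20180815-123456-abcdef.log.gz USER@fos.example.org:/path/to/uploads/</pre>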
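
Checking for the problematic resolver configuration is a one-liner; it should print nothing:

<pre># Any output here means relative hostnames like "www" may silently resolve somewhere unwanted
grep -E '^(search|domain)' /etc/resolv.conf</pre>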
 
== Installation ==
Installing the ArchiveBot can be difficult. The [https://github.com/ArchiveTeam/ArchiveBot/blob/master/INSTALL.pipeline Pipeline Install] instructions are online, but are tricky.

But there is a [https://github.com/ArchiveTeam/ArchiveBot/blob/master/.travis.yml .travis.yml automated install script] for [https://travis-ci.org/ArchiveTeam/ArchiveBot Travis CI] that is designed to test the ArchiveBot.
Since it's good enough for testing... it's good enough for installation, right? There must be a way to convert it into an installer script.


== Disclaimers ==
# Everything is provided on a best-effort basis; nothing is guaranteed to work. (We're volunteers, not a support team.)
# We can decide to stop a job or ban a user if a job is deemed unnecessary. (We don't want to run up operator bandwidth bills and waste Internet Archive donations on costs.)
# We're not Internet Archive. (We do what we want.)
# We're not the Wayback Machine. Specifically, we are not ia_archiver or archive.org_bot. (We don't run crawlers on behalf of other crawlers.)

Occasionally, we have had to ban blocks of IP addresses from the channel. If you think a ban does not apply to you but cannot join the #archivebot channel, please join the main #archiveteam channel instead.


== Bad behavior ==

If you are a website operator and you notice ArchiveBot misbehaving, please contact us on #archivebot or #archiveteam on EFnet (see top of page for links).


ArchiveBot parses [[robots.txt]] (please read the article) but does not obey any of its directives; it only uses the file to discover more links, such as sitemaps.


Also, please remember that '''we are not the [[Internet Archive|Internet Archive]]'''.
== Trivia ==
* One of the ArchiveBot commands, <code>!yahoo</code>, is named after [[Yahoo!]]. It makes the bot archive the page more aggressively to speed up the archival process.
== Usage, Dashboards, and Completed Job Viewer ==
{| class="wikitable sortable"
|-
! Function !! URL
|-
| ArchiveBot documentation, usage guide, manual || http://archivebot.rtfd.io/
|-
| ArchiveBot traditional dashboard - shows currently active jobs || http://dashboard.at.ninjawedding.org
|-
| ArchiveBot newer-style dashboard - shows currently active jobs || http://dashboard.at.ninjawedding.org/3
|-
| ArchiveBot dashboard - shows pending jobs || http://dashboard.at.ninjawedding.org/pending
|-
| ArchiveBot dashboard - shows ignores for a specific job || http://dashboard.at.ninjawedding.org/ignores/<JOBID>
|-
| ArchiveBot Pipeline Monitor Station - shows the status of the deployed crawler pipeline || http://dashboard.at.ninjawedding.org/pipelines
|-
| ArchiveBot Viewer - explore previously archived items || https://archive.fart.website/archivebot/viewer/
|-
| ChromeBot dashboard - shows pending, running, and finished jobs || http://chromebot.6xq.net/
|}
== Suggested things to focus on archiving with ArchiveBot ==
* [[Company acquisitions and mergers]] - companies that are being, or have been, acquired or merged
* [[ISP Hosting]] - user homepages on ISPs that have not previously been saved
== Related Links ==
* http://archivebot.com is a DNS alias for the ArchiveBot dashboard
* [[Chromebot]] is an IRC bot parallel to ArchiveBot that uses Google Chrome and thus is able to archive JavaScript-heavy pages and pages with endless scrolling. It is available in the #archivebot channel.
* https://twitter.com/ArchiveBot - Twitter feed of ArchiveBot activity, dormant since 2018-04-12. Tweets may lag behind the live dashboard.
* https://twitter.com/ATArchiveBot - former Twitter feed of ArchiveBot activity. Last used 2014-07-28, when @ATArchiveBot was replaced by @ArchiveBot.
* https://archive.org/details/archivebot - ArchiveTeam ArchiveBot collection at the Internet Archive


== More ==
Like ArchiveBot? Check out our [[Main_Page|homepage]] and other [[projects]]!


== Notes ==
<references/>


{{archivebot}}
{{navigation_box}}
[[Category:Bots]]
