ArchiveBot

Imagine Motoko Kusanagi as an archivist.

ArchiveBot is an IRC bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a WARC, and then uploads that WARC to ArchiveTeam servers for eventual injection into the Internet Archive (or other archive sites).

== Details ==


To use ArchiveBot, drop by [http://chat.efnet.org:9090/?nick=&channels=%23archivebot&Login=Login '''#archivebot'''] on EFNet. To interact with ArchiveBot, you [https://raw2.github.com/ArchiveTeam/ArchiveBot/master/COMMANDS issue '''commands'''] by typing them into the channel. Note that you will need channel operator permissions in order to issue archiving jobs. The [http://archivebot.at.ninjawedding.org:4567 '''dashboard'''] shows the sites currently being downloaded.
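
For example, a session might look something like the lines below. This is only an illustrative sketch: the exact command names, arguments, and replies are documented in the COMMANDS file linked above, and the job identifier shown here (abcd1e2) is made up. Roughly, the first line queues a site for archiving, while the later lines check on the job, adjust the delay between requests, and add an ignore pattern, as described under "IRC interface" below.

<pre>
you:        !archive http://www.example.com/
ArchiveBot: Archiving http://www.example.com/ (job abcd1e2).
you:        !status abcd1e2
you:        !d abcd1e2 2 5
you:        !ig abcd1e2 calendar\.php
</pre>

Once the job starts, its progress shows up on the dashboard linked above.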
 
Follow [https://twitter.com/atarchivebot @ATArchiveBot] on [[Twitter]]!
 
=== Components ===
 
IRC interface
:The bot listens for commands and reports status back to the IRC channel. You can ask it to archive a website or webpage, check whether a URL has been saved, change the delay time between requests, or add ignore rules. The IRC interface is collaborative, meaning anyone with permission can adjust the parameters of a job.
 
Dashboard
:The dashboard displays the URLs currently being downloaded. Each URL line is categorized as a success, warning, or error, and is highlighted in yellow or red accordingly. The dashboard also provides RSS feeds.
 
Backend
:The backend holds the database of jobs and runs several maintenance tasks, such as trimming logs and posting tweets to Twitter. The backend is the centralized portion of ArchiveBot.
 
Crawler
:The crawler spiders the website and records everything it downloads into WARC files; a minimal sketch of what writing a WARC record looks like follows this list. The crawler is the distributed portion of ArchiveBot: volunteers run nodes connected to the backend, and the backend tells the nodes which jobs to run. Once a node has finished a job, it reports back to the backend and uploads its WARC files to the staging server.
 
Staging server
:The staging server is where all WARC files are uploaded temporarily. Once the current batch has been approved, it is uploaded to the Internet Archive for consumption by the Wayback Machine.
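
To give a concrete sense of what the crawler produces, here is a minimal sketch of fetching a single page and recording the HTTP response in a WARC file. This is not ArchiveBot's own crawler code; it uses the Python requests and warcio libraries purely as stand-ins, and the URL is a placeholder.

<pre>
# Minimal sketch: fetch one page and write the response into a gzipped WARC.
# ArchiveBot's real crawler does this recursively for every URL it spiders.
from io import BytesIO

import requests
from warcio.warcwriter import WARCWriter
from warcio.statusandheaders import StatusAndHeaders

url = 'http://www.example.com/'  # placeholder start URL

resp = requests.get(url)

with open('example.warc.gz', 'wb') as output:
    writer = WARCWriter(output, gzip=True)
    # Reconstruct the HTTP status line and headers for the WARC response record.
    status_line = '{0} {1}'.format(resp.status_code, resp.reason)
    http_headers = StatusAndHeaders(status_line, list(resp.headers.items()),
                                    protocol='HTTP/1.1')
    record = writer.create_warc_record(url, 'response',
                                       payload=BytesIO(resp.content),
                                       http_headers=http_headers)
    writer.write_record(record)
</pre>

A real crawl typically records request, response, and metadata records for every URL it visits, but each record has the same basic shape as the one written here, and the resulting .warc.gz files are what the nodes upload to the staging server.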


ArchiveBot's source code can be found at https://github.com/ArchiveTeam/ArchiveBot. [[Dev|Contributions are welcome]]! Any issues or feature requests can be filed in [https://github.com/ArchiveTeam/ArchiveBot/issues the issue tracker].


== More ==

Like ArchiveBot? Check out our homepage and other projects!