__NOTOC__
[[Image:Ohreally.gif|right|frame|Backing up the Internet Archive]]
:''Please note that we are not the Internet Archive.''
The wonder of the [[Internet Archive]]'s petabytes of stored data is that they are a world treasure, providing access to a mass of information and stored culture, gathered from decades (in some cases, centuries) of history, and available in a (relatively) easy-to-find fashion. And as media and popular websites begin to cover the Archive's mission in earnest, the audience is growing notably.
 
In each wave of interest, questions come forward out of the accolades:  
 
* Why is the Internet Archive in only one (or, at best, a very limited number of) physical location?
* What is the disaster recovery plan?
* What steps are being taken to ensure integrity and access?
 
Some of the answers are essays in themselves, but regarding the protection of the Internet Archive's storehouses of data, a project has been launched to add some real-world information to the theoretical consideration of data protection.
 
The '''INTERNETARCHIVE.BAK project''' (also known as '''IA.BAK''' or '''IABAK''') is a combined experiment and research project to back up the Internet Archive's data stores, utilizing zero infrastructure of the Archive itself (save for the bandwidth used in downloading) and, along the way, gaining real-world knowledge of the issues and considerations involved in such a project. Started in April 2015, the project already has dozens of contributors and partners, and has resulted in a fairly robust environment backing up terabytes of the Archive in multiple locations around the world.
 
''To see the current status of the project's data storage and to learn how you can contribute disk space and bandwidth, click [http://iabak.archiveteam.org here]''.
 
== THE PHILOSOPHY ==
 
The project's main goals include:
 
* Maintain multiple, verified, geographically disparate copies of Internet Archive data
* Use no Internet Archive infrastructure, that is, machines or drives (bandwidth for downloads only)
* Keep at least three external copies in three different physical locations away from the Archive
* Conduct bi-weekly verification that the data is secure in the external locations
* Expire or remove data that has not been verified after 30 days, reassigning that space to fresh copies (a sketch of this policy follows this list)
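
As a minimal sketch of that verification and expiry policy (the bi-weekly and 30-day figures come from the goals above; the names are illustrative, and the real project tracks this state through git-annex rather than through a standalone script like this):

<pre>
from datetime import datetime, timedelta

VERIFY_INTERVAL = timedelta(days=14)   # clients are asked to re-verify bi-weekly
EXPIRY_LIMIT = timedelta(days=30)      # unverified copies expire after 30 days

def copy_status(last_verified, now=None):
    """Classify one client's copy of a shard by the age of its last check-in."""
    age = (now or datetime.utcnow()) - last_verified
    if age <= VERIFY_INTERVAL:
        return "verified"   # counts toward the three-copy minimum
    if age <= EXPIRY_LIMIT:
        return "due"        # still counted, but the client should re-check soon
    return "expired"        # no longer counted; the space can be reassigned

# Example: a copy last verified 21 days ago is due for re-verification.
print(copy_status(datetime.utcnow() - timedelta(days=21)))  # -> due
</pre>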
 
The project's secondary goals include:
 
* Add ease of use to the end-user clients, so that the maximum number of drive-space contributors can participate.
* Offer optional peer-to-peer data provision, both to serve the main site's items and to speed synchronization.
* Publish useful conclusions about how the project is conducted.
* Ultimately provide encrypted versions of some data, so that internal/system data can be backed up too.
 
The project rests on several assumptions:

* Given an opportunity and good feedback, there are petabytes of volunteer disk space available.
* The Internet Archive will gain greater awareness among the general public through this project.
* A non-homogeneous environment for the data makes the data that much stronger and disaster-resistant.
* The project is an educational workshop as well as a good-faith effort, providing hard answers to theoretical questions.


== WHAT WE HAVE LEARNED SO FAR (ABOUT THE DATA) ==


To understand what might be involved in backing up such a large set of data, the project first needed real information about that dataset.


That research, known as the ''[[Internet_Archive_Census|Internet Archive Census]]'', resulted in some solid numbers, as well as a set of facts that helped nail down the scope of the project. They include:


* The Internet Archive has roughly 21 petabytes of unique data at this juncture. (It grows daily.)
* Some data is not available for direct download by the general public, so IA.BAK is not addressing it (yet).
* Some data is, by its nature, more "critical" or "important" than other data: unique acquired material versus, say, universally available datasets or mirrors of prominent, accessible-from-many-locations music videos.
* A lot of the data (the vast, vast majority) is extremely static, and meant to live forever.
* A lot of the files are "derives", and marked as such: files that were derived from other files. (A sketch of filtering them out follows this list.)
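
Skipping derives matters when sizing an item for backup, since only the originals need external copies and derives can be regenerated from them. A minimal sketch using the Internet Archive's public metadata endpoint (the item identifier below is a placeholder, and the field handling is simplified; sizes arrive as strings and may be absent):

<pre>
import json
import urllib.request

def original_files(identifier):
    """Yield (name, size) for an item's files that are not derives."""
    url = "https://archive.org/metadata/" + identifier
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    for f in meta.get("files", []):
        if f.get("source") == "original":   # derives are marked "derivative"
            yield f["name"], int(f.get("size", 0))

# Example: total size of the original (non-derived) files in one item.
total = sum(size for _, size in original_files("SOME-ITEM-IDENTIFIER"))
print(total, "bytes to back up")
</pre>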


The census classified the total store of publicly accessible, unique data as very roughly 14 petabytes. Within that data there is some level of redundancy, and there are collections that might be called "crown jewels" or treasures (the Prelinger Archives, the Bali Culture collection, collections of scanned books from partner libraries), while others are more "nice to have" (universally accessible podcasts or video blogs, multiple revisions of the same website backup).


== WHAT WE HAVE DONE SO FAR (ABOUT THE IMPLEMENTATION) ==


[[Image:gitannex.png|right]]


There is now a channel, '''#internetarchive.bak''', on EFNet, for discussion about the project as well as bot-driven updates about code changes and new contributors. It is occasionally very active and mostly quiet in between new issues being handled. Joining it is not required to contribute disk space, but it is an excellent resource for questions.


After multiple discussions, the back end for this phase of the project uses the [http://git-annex.branchable.com/ git-annex] suite of tools. Reasons include the maturity of the project and the willing contribution of time by git-annex's designer and maintainer, Joey Hess (also a co-founder of Archive Team). ''Other possible software suites are listed at the end of this Wiki entry and might play a part in future revisions of the project.''
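
In rough terms, the cycle a contributor's client runs looks like the following sketch (the shard path is a placeholder; the actual client wraps these same git-annex commands in its own scripts and configuration):

<pre>
import subprocess

SHARD = "/path/to/shard1"   # placeholder: a local clone of one shard's git-annex repo

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=SHARD, check=True)

run("git", "annex", "get", "--auto")   # download files that still need more copies
run("git", "annex", "fsck")            # re-checksum local files: the verification step
run("git", "annex", "sync")            # report location/verification state back upstream
</pre>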


For the git-annex overview and proposal, see '''[[INTERNETARCHIVE.BAK/git-annex_implementation]]'''


For the git-annex implementation documentation, see '''[https://git-annex.branchable.com/design/iabackup/ https://git-annex.branchable.com/design/iabackup/]'''


== CHOOSING WHICH DATA AT THE ARCHIVE TO BACK UP USING IA.BAK ==


As the space available to IA.BAK is relatively limited, the project has had a second line of work: choosing which public-facing data at the Internet Archive to push into IA.BAK. Generally, the group is looking for:


* Data unique enough that it exists predominantly at the Internet Archive
* Data that does not flood the capacity of the project (i.e., collections running to hundreds of terabytes)
* Data that does not generally limit which countries can host it
* Data that is generally smaller in size, yet still unique


There is now a page to nominate sets of data at the Internet Archive that should be added to future shards:


* [[INTERNETARCHIVE.BAK/nominations]]


The project status page has methods for seeing which collections are now in the project, although it lacks a simple search (yet).


== KEEPING TRACK OF THE DOWNLOAD AND BACKUP PROGRESS ==
[[File:iabakpage.png|300px|right]]


The IA.BAK project has a [http://iabak.archiveteam.org/ page] that provides a graphical overview of the progress of the project. Updated frequently, it shows the total amount of space backed up, the status (red to green) of the number of backups of a given set of data, and timelines of how data is being acquired from the original sources. It also includes maps to show the geographical disparity of the clients, as well as how the clients are performing.
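
One plausible reading of that red-to-green coding, using the project's stated three-copy goal as the threshold (the dashboard's exact rules may differ; this is illustrative only):

<pre>
def shard_color(verified_copies):
    """Map a shard's count of verified external copies to a status color."""
    if verified_copies == 0:
        return "red"      # no external copy at all
    if verified_copies < 3:
        return "yellow"   # short of the three-copy goal
    return "green"        # goal met or exceeded

assert shard_color(0) == "red"
assert shard_color(2) == "yellow"
assert shard_color(4) == "green"
</pre>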


It is a continual challenge to make navigation of these pages intuitive and informative. As the dataset grows in size and complexity, major changes to the layout of the pages will continue.


As with everything else, the infrastructure to maintain this page is completely separate from Internet Archive machines and resources.


== CONCERNS ==


Since the beginning, a number of concerns have been voiced about the project, and while most can't be glibly dismissed, it is good to at least acknowledge them (as they tend to take up the first minutes of any conversation about the project).


* '''Bad Actors''' will always be a continued concern, as a person or persons determined to prevent the backing up of data could execute endless actions to slow down, corrupt, or ruin their contributions of space.


* '''The Great Restore''' is the time when data from IA.BAK must return to its home to handle a physical or software-based failure of the Internet Archive's data stores. (This could be anything from an earthquake to the dropping of an EMP.) A restore would certainly involve leasing a large amount of upload bandwidth and some amount of physical transfer.


== See Also and Additional Information ==


Alternate proposed software suites for this project besides git-annex:


* [[INTERNETARCHIVE.BAK/torrents_implementation]]
* [[INTERNETARCHIVE.BAK/ipfs_implementation]]


* [[Valhalla]] - Valhalla was a discussion about the "ultimate home" for uploaded Archive Team data, be it the Internet Archive, elsewhere, or both. IA.BAK is an example of a potential shared home, although Archive Team projects have not generally been backed up using IA.BAK, yet.
 
{{Navigation box}}
