INTERNETARCHIVE.BAK

Backing up the Internet Archive

The wonder of the Internet Archive's petabytes of stored data is that they're a world treasure, providing access to a mass of information and stored culture gathered from decades (in some cases, centuries) of history, and available in a (relatively) easy-to-find fashion. And as media and popular websites begin to cover the Archive's mission in earnest, the audience is growing notably.

In each wave of interest, questions come forward out of the accolades:

  • Why is the Internet Archive held in only one (and, in fact, rather limited) physical space?
  • What is the disaster recovery plan?
  • What steps are being taken to ensure integrity and access?

Some of the answers are essays in themselves, but regarding the protection of the Internet Archive's storehouses of data, a project has been launched to add some real-world information to the theoretical consideration of data protection.

The INTERNETARCHIVE.BAK project (also known as IA.BAK or IABAK) is a combined experiment and research project to back up the Internet Archive's data stores, utilizing zero infrastructure of the Archive itself (save for bandwidth used in download) and, along the way, gain real-world knowledge of what issues and considerations are involved with such a project. Started in April of 2015, the project already has dozens of contributors and partners, and has resulted in a fairly robust environment backing up terabytes of the Archive in multiple locations around the world.

The current status of the project's data storage can be seen on the IA.BAK status page.

THE PHILOSOPHY

The project's main goals include:

  • Maintaining multiple, verified, geographically-disparate copies of Internet Archive data
  • Use no Internet Archive infrastructure, that is, machines or drives (the Archive's bandwidth is used only for downloads)
  • Keep at least three external copies, each in a physical location different from the Archive's.
  • Conduct bi-weekly verification that the data remains intact in the external locations.
  • Expire or remove data that has not been verified within 30 days, so that its space can be put back to use.

The project's secondary goals include:

  • Make the end-user clients easy to use, so that the maximum number of drive-space contributors can participate.
  • Offer optional peer-to-peer data provision, both to serve the main site's items and to speed synchronization.
  • Publish useful conclusions about how the project is conducted.
  • Ultimately provide encrypted versions of some data so that internal/system data can be backed up too.

The project rests on several assumptions:

  • Given an opportunity and good feedback, there are petabytes of volunteer disk space available
  • The Internet Archive will gain greater awareness among the general public through this project
  • A non-homogeneous environment for the data makes the data that much stronger and more disaster-resistant
  • The project is an educational workshop as well as a good-faith effort, providing hard answers to theoretical questions.

WHAT WE HAVE LEARNED SO FAR (ABOUT THE DATA)

To understand what would be involved in backing up such a large set of data, information about the dataset itself first had to be researched.

The project, known as the Internet Archive Census, resulted in some solid numbers, as well as a set of facts that helped nail down the scope of the project. They include:

  • The Internet Archive has roughly 21 petabytes of unique data at this juncture. (It grows daily.)
  • Some is not available for direct download by the general public, so IA.BAK is not addressing it (yet).
  • Some data is, by its nature, more "critical" or "important" than the rest: unique acquired data versus, say, universally available datasets or mirrors of prominent music videos that are accessible from many locations.
  • A lot of it, the vast, vast majority, is extremely static, and meant to live forever.
  • A lot of it consists of "derives", and is marked as such - files that were derived from other files.

The census classified the total store of publicly accessible, unique data as very roughly 14 petabytes. Of that data, there is some level of redundancy, and there are collections that might be called "crown jewels" or treasures (the Prelinger Archives, the Bali Culture collection, collections of scanned books from partner libraries), while others are more "nice to have" (universally accessible podcasts or video blogs, multiple revisions of the same website backup).
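
For a sense of scale, here is a quick back-of-the-envelope calculation. It uses the census figure of roughly 14 petabytes, the 500 GB sector size proposed in the implementation section below, and the three-copy goal listed above; the decimal petabyte-to-gigabyte conversion is an assumption for illustration only.

    # Illustrative arithmetic only: how many 500 GB sectors would a full
    # backup of ~14 PB of public data take, and how many at three copies?
    PUBLIC_DATA_PB = 14
    SECTOR_GB = 500
    COPIES = 3

    sectors_per_copy = PUBLIC_DATA_PB * 1_000_000 / SECTOR_GB   # PB -> GB, decimal units
    total_sectors = sectors_per_copy * COPIES

    print(f"{sectors_per_copy:,.0f} sectors per full copy")       # 28,000
    print(f"{total_sectors:,.0f} sectors for {COPIES} copies")    # 84,000
    print(f"{PUBLIC_DATA_PB * COPIES} PB of volunteer space in total")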

THE IMPLEMENTATION

In this design, an ArchiveBot-like service keeps track of "The Drive", the backup of the Internet Archive. This "Drive" is divided into sectors of a certain size, preferably 500 GB, though smaller amounts might make sense. The "items" being backed up are Internet Archive items, with the derivations not included (so it will keep the uploaded .wav file but not the derived .mp3 and .ogg files). These "sectors" are then checked into the virtual drive, and based on whether there are zero, one, two, or more than two copies of the item in "The Drive", a color code (Red, Yellow, Green) is assigned; one possible reading of that color coding is sketched below.
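
As a very rough illustration (not project code), the color coding might look something like the following Python sketch. The exact mapping of copy counts to colors - zero copies Red, one copy Yellow, two or more Green - is an assumption drawn from the description above, not a documented rule.

    # Hypothetical sketch of the sector color coding described above.
    SECTOR_SIZE_BYTES = 500 * 10**9  # preferred sector size: 500 GB

    def sector_color(copy_count: int) -> str:
        """Map the number of registered copies of a sector to a status color."""
        if copy_count == 0:
            return "Red"      # no external copy exists yet
        if copy_count == 1:
            return "Yellow"   # a single copy: still vulnerable
        return "Green"        # two or more copies in The Drive

    if __name__ == "__main__":
        drive = {"sector-0001": 0, "sector-0002": 1, "sector-0003": 3}
        for name, copies in sorted(drive.items()):
            print(f"{name}: {copies} copies -> {sector_color(copies)}")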

For an end user, this involves two basic functions: copy and verification.

In copy, you shove your drive into a USB dock, or point to some filespace on your RAID, or maybe even some space on your laptop's hard drive, and say "I contribute this to The Drive". The client will then back up your assigned items onto that drive. It will do so in a manner that maintains data integrity, but still allows the files on your local drive or directory to be accessed (it should not be encrypted, in other words). Once the copy is done and verified, it is checked into The Drive as a known copy; a sketch of this flow follows.
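
To make that flow concrete, here is a minimal, hypothetical Python sketch. The tracker URL, the list of assigned item URLs, and the checksum-based check-in are all assumptions for illustration; they are not the project's actual interface.

    # Sketch of the "copy" step, under assumed names: claim local space,
    # fetch the assigned items unencrypted, checksum them, report back.
    import hashlib
    import pathlib
    import urllib.request

    TRACKER = "https://iabak.example.org"   # hypothetical tracker URL

    def sha1_of(path: pathlib.Path) -> str:
        """Checksum a local file so the tracker can later verify it."""
        h = hashlib.sha1()
        with path.open("rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    def copy_sector(assignments: list[str], dest: pathlib.Path) -> dict[str, str]:
        """Download assigned item files into plain, readable local storage."""
        dest.mkdir(parents=True, exist_ok=True)
        checksums = {}
        for url in assignments:
            target = dest / url.rsplit("/", 1)[-1]
            urllib.request.urlretrieve(url, target)   # files stay unencrypted
            checksums[target.name] = sha1_of(target)
        return checksums   # reported to the tracker as the check-in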

In verification, you will need to run the client once every so often - after a while, say three months, your copy will be considered out of date by The Drive. If you do not check in after, say, six months, your copy will be considered stale and forgotten, and The Drive will lose a copy. (This is part of why you want at least two copies out there.) A sketch of this staleness rule follows.
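
Expressed as a small Python sketch, with the "three months" and "six months" figures taken literally as 90 and 180 days (the real cut-offs are the project's to choose):

    # Sketch of the staleness rule: copies age out if they stop checking in.
    from datetime import datetime, timedelta

    OUT_OF_DATE_AFTER = timedelta(days=90)    # ~three months without a check-in
    FORGOTTEN_AFTER = timedelta(days=180)     # ~six months: The Drive drops the copy

    def copy_status(last_checkin: datetime, now: datetime | None = None) -> str:
        now = now or datetime.utcnow()
        age = now - last_checkin
        if age >= FORGOTTEN_AFTER:
            return "stale"         # The Drive forgets this copy entirely
        if age >= OUT_OF_DATE_AFTER:
            return "out-of-date"   # owner should re-run verification soon
        return "current"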

Copy and Verification are end-user experiences, so they will initially have bumps, but over time, The Drive will have copies everywhere, around the world, on laptops and in stacks of drives in closets; and in the case of what will be considered "high availability" items, the number of copies could be in the dozens or hundreds, ensuring a fast restore if a disaster hits.

CONCERNS

More people can add concerns, but my main one is preparing against Bad Actors, where someone might tamper with the copy of the sector of The Drive that they hold. Protections and checks will have to be put in place to make sure the given backups are in good shape; one possible spot check is sketched below. There will always be continued risk, however, and hence the "high availability" items, where there will be lots of copies to "vote". NOTE: Lots of thoughts on bad actors are on the discussion page.
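
One common defence, sketched here as an assumption rather than as the project's chosen mechanism, is to challenge a holder to hash a randomly chosen byte range of a file and compare the answer against a trusted reference - the Archive's own copy, or a majority vote among the other holders of the same sector.

    # Sketch of a random byte-range spot check against tampering.
    import hashlib
    import os
    import random

    def make_challenge(file_size: int, span: int = 1 << 20) -> tuple[int, int]:
        """Pick a random byte range inside the file to be hashed."""
        start = random.randrange(0, max(1, file_size - span))
        return start, min(span, file_size - start)

    def answer_challenge(path: str, start: int, length: int) -> str:
        """Hash only the requested range of a local copy of the file."""
        with open(path, "rb") as f:
            f.seek(start)
            return hashlib.sha256(f.read(length)).hexdigest()

    def verify_holder(trusted_path: str, holder_path: str) -> bool:
        """Compare the holder's answer with the trusted reference;
        a mismatch means that copy has been altered or corrupted."""
        size = os.path.getsize(trusted_path)
        start, length = make_challenge(size)
        return (answer_challenge(trusted_path, start, length)
                == answer_challenge(holder_path, start, length))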

There is also the matter of recovery - we want to be able to pull the data back if it is ever needed, and that will mean a recovery system of some sort.

STEPS TOWARDS IMPLEMENTATION

I'd like to see us try some prototypes, with a given item set that is limited to, say, 100 GB.

There is now a channel, #internetarchive.bak, on EFNet, for discussion about the implementations and testing. Of course, the discussion tab of this page is where in-process tests can be put, so people do not re-do investigations ("Hey, what about...") to the point of fatigue.

To understand what group of data we are dealing with, a Census has been conducted to show public, downloadable items that could be easily added to a group backup. Here is the information page about that sub-project: Internet_Archive_Census. Naturally, in an ideal situation, all data should be backed up, but at 14+ petabytes, there's plenty of data from this Census to go around.

PROPOSALS

We are soliciting proposals for ways to implement this using your favorite distributed system. So far, we have:


See Also