Revision as of 17:27, 28 May 2015

Please note that we're not the Internet Archive. Also, intense technical discussion is on the discussion page.

INTERNETARCHIVE.BAK

The wonder of the Internet Archive's petabytes of stored data is that they're a world treasure: a mass of information and stored culture, gathered from decades (in some cases, centuries) of history and available in a (relatively) easy-to-find fashion. And as media and popular websites begin to cover the Archive's mission in earnest, the audience is growing notably.

In each wave of interest, two questions come forward out of the accolades: What is the disaster recovery plan? And why is the data in only one place?

The disaster recovery plan varies, but it generally relies on multiple locations for the data; and the data is in one place because the fundraising and support methods so far can only fund a certain amount of disaster/backup planning.

Therefore, it is time for Archive Team to launch its most audacious project yet: Backing up the Internet Archive.


WELL THAT IS SERIOUSLY FUCKING IMPOSSIBLE

That is a very natural and understandable reaction. Before we go further, let us quickly cover some facts about the Archive's datastores.

  • Internet Archive has roughly 21 petabytes of unique data at this juncture. (It grows daily.)
  • Some of that is more critical and disaster-worrisome than others. (Web crawls versus TV.)
  • Some of that is disconnected/redundant data.
  • A lot of it, the vast, vast majority, is extremely static, and meant to live forever.
  • A lot of it consists of "derives", and is marked as such - files that were derived from other files.

Obviously, the numbers still need to be run, but the critical set is less than 20 petabytes, and ultimately, 20 petabytes isn't that much.

Ultimately, the Archive's roughly 21 petabytes works out to about 42,000 500 GB chunks. 500 GB drives are not expensive - people are throwing a lot of them out. And, obviously, we have drives of 1 TB, 2 TB, even 6 TB, each of which holds multiple chunks.
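As a sanity check on the chunk arithmetic (assuming decimal units, i.e. 1 PB = 1,000,000 GB, and using the ~21 PB figure quoted above):

```python
# Rough chunk arithmetic for "The Drive".
# Assumes decimal units: 1 PB = 1,000,000 GB.
total_pb = 21          # approximate unique data held by the Archive
chunk_gb = 500         # one contributed "sector" of The Drive

chunks_per_copy = total_pb * 1_000_000 // chunk_gb
print(chunks_per_copy)       # 42000 chunks for one full copy
print(chunks_per_copy * 2)   # 84000 chunks for two full copies
```

With 1 TB or 6 TB drives, one volunteer can of course cover 2 or 12 chunks at a time.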

The vision I have is this: a reversal, of sorts, of ArchiveBot - a service that lets people provide hard drive space and hard drives, volunteering to be part of a massive Internet Archive "virtual drive" that will hold multiple (yes, multiple) copies of the Internet Archive.

THE PHILOSOPHY

There is an effort called LOCKSS (Lots of Copies Keeps Stuff Safe) at Stanford [1] which is meant to provide as many tools and opportunities to save digital data as easily and quickly as possible. At Google, I've been told, they try for at least five copies of data stored in at least three physical locations. This is meant to provide a similar situation for the Internet Archive.

While this is kind of nutty and can be considered a strange ad-hoc situation, I believe that, given the opportunity to play a part in non-geographically-dependent copies of the Internet Archive, many folks will step forward, and we will have a good solution until the expensive "good" solution comes along. Also, it is a very nice statement of support.

THE IMPLEMENTATION

In this, there is an ArchiveBot-like service that keeps track of "The Drive", the backup of the Internet Archive. This "Drive" has sectors of a certain size, preferably 500 GB, though smaller amounts might make sense. The "items" being backed up are Internet Archive items, with the derivations not included (so it will keep the uploaded .wav file but not the derived .mp3 and .ogg files). These "sectors" are then checked into the virtual drive, and based on whether there are zero, one, two, or more than two copies of the item in "The Drive", a color-coding is assigned (Red, Yellow, Green).
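One possible mapping of copy counts to the Red/Yellow/Green codes described above could look like this - a sketch only, since the text doesn't pin down the exact thresholds, and the function name is made up:

```python
# Illustrative sketch of the Red/Yellow/Green status for a sector of "The Drive".
# The exact thresholds are an assumption; the text only says the color depends
# on whether there are zero, one, two, or more than two copies.
def sector_status(copy_count: int) -> str:
    """Map the number of known good copies of a sector to a color code."""
    if copy_count == 0:
        return "Red"      # no copies anywhere: data at risk
    if copy_count == 1:
        return "Yellow"   # a single copy: backed up, but thin
    return "Green"        # two or more copies: comfortably replicated
```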

For an end user, this includes two basic functions: copy, and verification.

In copy, you shove your drive into a USB dock, or point to some filespace on your RAID, or maybe even some space on your laptop's hard drive, and say "I contribute this to The Drive". The client will then back up your assigned items onto the drive. It will do so in a manner that maintains data integrity, while still allowing the files on your local drive or directory to be accessed (it should not be encrypted, in other words). Once it's done and verified, it is checked into The Drive as a known copy.
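A minimal sketch of how that copy step could keep integrity while leaving files readable: store the items as plain files and record a checksum manifest alongside them. All names here are hypothetical; this is not actual client code.

```python
# Hypothetical sketch of the "copy" step: plain, unencrypted files on the
# contributed disk, plus a SHA-256 manifest so the copy can be verified later.
import hashlib
import json
import os

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large items don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def write_manifest(item_dir: str, manifest_path: str) -> dict:
    """Record a checksum for every file in the contributed directory.

    The files themselves stay as plain, directly readable copies; only the
    manifest is added, so the local drive remains usable as normal storage.
    """
    manifest = {}
    for root, _dirs, files in os.walk(item_dir):
        for name in files:
            path = os.path.join(root, name)
            manifest[os.path.relpath(path, item_dir)] = sha256_of(path)
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```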

In verification, you will need to run the client once every so often - after a while, say three months, your copy will be considered out of date by The Drive. If you do not check in after, say, six months, your copy will be considered stale and forgotten, and The Drive will lose a copy. (This is part of why you want at least two copies out there.)
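The check-in freshness rules above could be expressed like this (the three- and six-month thresholds come from the text; the function name and exact day counts are assumptions):

```python
# Sketch of the check-in freshness rules: ~3 months = out of date,
# ~6 months = stale and forgotten. Exact day counts are assumptions.
from datetime import datetime, timedelta

OUT_OF_DATE = timedelta(days=90)   # roughly three months
STALE = timedelta(days=180)        # roughly six months

def copy_status(last_checkin: datetime, now: datetime) -> str:
    """Classify a contributed copy by how recently its owner ran verification."""
    age = now - last_checkin
    if age >= STALE:
        return "stale"        # forgotten: The Drive loses this copy
    if age >= OUT_OF_DATE:
        return "out-of-date"  # owner should re-run the verification client
    return "current"
```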

Copy and verification are end-user experiences, so they will initially have bumps. But over time, The Drive will have copies everywhere, around the world - in laptops, and in stacks of drives in closets - and in the case of what will be considered "high availability" items, the number of copies could be in the dozens or hundreds, ensuring fast recovery if a disaster hits.

CONCERNS

More people can add concerns, but my main one is preparing against bad actors, where someone might mess with their copy of the sector of The Drive that they hold. Protections and checks will have to be put in place to make sure the given backups are in good shape. There will always be continued risk, however - hence the "high availability" items, where there will be lots of copies to "vote". NOTE: Lots of thoughts on bad actors are on the discussion page.
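The "voting" idea above can be sketched very simply: collect the checksum each independent copy reports for a sector and trust only a strict majority. This is an illustration of the concept, not the actual protection scheme:

```python
# Sketch of checksum "voting" across independent copies of a sector.
# A copy whose checksum disagrees with the majority would be flagged
# as possibly tampered with and scheduled for re-copying.
from collections import Counter
from typing import List, Optional

def majority_checksum(reported: List[str]) -> Optional[str]:
    """Return the checksum reported by a strict majority of copies, if any."""
    if not reported:
        return None
    value, count = Counter(reported).most_common(1)[0]
    return value if count > len(reported) // 2 else None
```

With only two copies and one bad actor there is no majority, which is another argument for keeping the copy count of sensitive items well above two.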

There is also a thought about recovery - we want to be able to pull the data back, and that will mean a recovery system of some sort.

STEPS TOWARDS IMPLEMENTATION

I'd like to see us try some prototypes, with a given item set limited to, say, 100 GB.

There is now a channel, #internetarchive.bak, on EFNet, for discussion about the implementations and testing. Of course, the discussion tab of this page is where in-process tests can be put, so people do not re-do investigations ("Hey, what about...") to the point of fatigue.

To understand what group of data we are dealing with, a Census has been conducted to show public, downloadable items that could be easily added to a group backup. Here is the information page about that sub-project: Internet_Archive_Census. Naturally, in an ideal situation, all data should be backed up, but at 14+ petabytes, there's plenty of data from this Census to go around.

PROPOSALS

We are soliciting proposals for ways to implement this using your favorite distributed system. So far, we have:

* INTERNETARCHIVE.BAK/torrents_implementation
* INTERNETARCHIVE.BAK/ipfs_implementation
* INTERNETARCHIVE.BAK/nominations

See Also

* Valhalla