INTERNETARCHIVE.BAK

From Archiveteam
[[Image:Ohreally.gif|right|frame|Backing up the Internet Archive]]
:''Please note that we're not Internet Archive. Also, intense technical discussion is [[Talk:INTERNETARCHIVE.BAK|here]].''

The wonder of the [[Internet Archive]]'s petabytes of stored data is that they're a world treasure, providing access to a mass of information and stored culture, gathered from decades of history (in some cases, centuries), and available in a (relatively) easy-to-find fashion. And as media and popular websites begin to cover the Archive's mission in earnest, the audience is growing notably.

In each wave of interest, questions come forward out of the accolades:

* Why is the Internet Archive only in one (actually, limited) physical space?
* What is the disaster recovery plan?
* What steps are being taken to ensure integrity and access?

Some of the answers are essays in themselves, but regarding the protection of the Internet Archive's storehouses of data, a project has been launched to add some real-world information to the theoretical consideration of data protection.

The '''INTERNETARCHIVE.BAK project''' (also known as '''IA.BAK''' or '''IABAK''') is a combined experiment and research project to back up the Internet Archive's data stores, using zero infrastructure of the Archive itself (save for bandwidth used in download) and, along the way, to gain real-world knowledge of the issues and considerations involved in such a project. Started in April 2015, the project already has dozens of contributors and partners, and has resulted in a fairly robust environment backing up terabytes of the Archive in multiple locations around the world.

''Visit http://iabak.archiveteam.org to see the current status of the project's data storage and to learn how you can contribute disk space and bandwidth.''

'''IA.BAK has been broken and unmaintained since about December 2016. The above status page is not accurate as of December 2019.'''
== THE PHILOSOPHY ==

The project's main goals include:

* Maintaining multiple, verified, geographically-disparate copies of Internet Archive data
* Using no Internet Archive infrastructure, that is, no Archive machines or drives (bandwidth for download only)
* Keeping at least three external copies, in three physical locations separate from the Archive
* Conducting bi-weekly verification that the data is secure in the external locations
* Expiring or removing data that has not been verified after 30 days, replacing it with functional space
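The verification-and-expiry rule above can be sketched in a few lines. This is a minimal illustration only; the shard names and the shape of the client registry are invented for the example and are not the project's actual code.

```python
from datetime import date, timedelta

# Illustrative registry: shard name -> date its local copy was last verified.
last_verified = {
    "shard1": date(2015, 6, 1),
    "shard2": date(2015, 4, 10),
}

def expired_shards(registry, today, max_age_days=30):
    """Return shards whose copies have gone unverified past the limit
    and should be expired and replaced with functional space."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(s for s, seen in registry.items() if seen < cutoff)

print(expired_shards(last_verified, today=date(2015, 6, 15)))  # → ['shard2']
```

A real client would record verification times per copy and per contributor, but the 30-day cutoff logic is the same.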
The project's secondary goals include:

* Making the end-user clients easy to use, so that the maximum number of drive-space contributors can participate
* Offering optional peer-to-peer data provision, both for the main site's items and to speed synchronization
* Publishing useful conclusions about how the project is conducted
* Ultimately providing encrypted versions of some data, so that internal/system data can be backed up too

The project rests on several assumptions:

* Given an opportunity and good feedback, there are petabytes of volunteer disk space available
* The Internet Archive will gain more awareness among the general public through this project
* A non-homogeneous environment for the data makes the data that much stronger and disaster-resistant
* The project is an educational workshop as well as a good-faith effort, providing hard answers to theoretical questions
== WHAT WE HAVE LEARNED SO FAR (ABOUT THE DATA) ==

To understand what might be involved in backing up a large set of data, information about that dataset needed to be researched.

The project, known as the ''[[Internet_Archive_Census|Internet Archive Census]]'', resulted in some solid numbers, as well as a set of facts that helped nail down the scope of the project. They include:

* The Internet Archive has roughly 21 petabytes of unique data at this juncture. (Growing daily.)
* Some data is not available for direct download by the general public, so IA.BAK is not addressing it (yet).
* Some data is, by its nature, more "critical" or "important" than other data: unique acquired data versus, say, universally available datasets or mirrors of prominent, accessible-from-many-locations music videos.
* A lot of the data, the vast, vast majority, is extremely static, and meant to live forever.
* A lot of the files are "derives", and marked as such: files that were derived from other files.
* The Internet Archive reserves the right to delete uploaded content.

The census classified the total store of publicly accessible, unique data as very roughly 14 petabytes. Within that data there is some level of redundancy, and there are collections that might be called "crown jewels" or treasures (the Prelinger Archives, the Bali Culture collection, collections of scanned books from partner libraries), while others are more "nice to have" (universally accessible podcasts or video blogs, multiple revisions of the same website backup).
== WHAT WE HAVE DONE SO FAR (ABOUT THE IMPLEMENTATION) ==

[[Image:gitannex.png|right]]

There is a channel, '''#internetarchive.bak''', on EFNet, for discussion about the project as well as bot-driven updates about code changes and new contributors. It is occasionally very active, and mostly quiet between new issues being handled. Contributors of disk space are not required to use it, but it is an excellent resource for questions.

After multiple discussions, the back-end for this phase of the project is the [http://git-annex.branchable.com/ git-annex] suite of tools. Reasons include the maturity of that project and the willing contribution of time by the git-annex designer and maintainer, Joey Hess (also a co-founder of Archive Team). ''Other possible software suites are listed at the end of this Wiki entry and might play a part in future revisions of the project.''

To read the git-annex overview and proposal, go to: '''[[INTERNETARCHIVE.BAK/git-annex_implementation]]'''

To read the git-annex implementation documentation, go to: '''[https://git-annex.branchable.com/design/iabackup/ https://git-annex.branchable.com/design/iabackup/]'''
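A core idea git-annex brings to the project is that file content is tracked by a content-addressed key rather than by name, which is what lets any client independently verify that its copy of a shard is intact. The sketch below illustrates that idea in miniature; it mimics the general shape of git-annex's SHA256-backend keys but is not git-annex code.

```python
import hashlib

def annex_style_key(data: bytes) -> str:
    """Build a content-addressed key roughly in the shape git-annex
    uses for its SHA256 backend: SHA256-s<size>--<hexdigest>."""
    return f"SHA256-s{len(data)}--{hashlib.sha256(data).hexdigest()}"

def copy_is_intact(data: bytes, key: str) -> bool:
    """A client re-derives the key from the bytes it actually holds;
    any corruption or truncation changes the key and fails the check."""
    return annex_style_key(data) == key

original = b"an Internet Archive item"
key = annex_style_key(original)

print(copy_is_intact(original, key))       # True
print(copy_is_intact(original[:-1], key))  # False: truncated copy
```

Because the key is derived from the bytes themselves, the central repository only has to distribute keys; verification of a remote copy needs no trusted channel back to the original data.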
  
== CHOOSING WHICH DATA AT THE ARCHIVE TO BACK UP USING IA.BAK ==

As the space available to IA.BAK is relatively limited, the project has had a second task line of choosing which public-facing data at the Internet Archive to push into IA.BAK. Generally, the group is looking for:

* Data unique enough that it resides predominantly at the Internet Archive
* Data that does not flood the capacity of the project, i.e. hundreds of terabytes
* Data that does not generally limit which countries can host it
* Data that is generally smaller in size, yet still unique

There is now a page to nominate sets of data at the Internet Archive that should be added to future shards:

* [[INTERNETARCHIVE.BAK/nominations]]

The project status page has methods for seeing which collections are now in the project, although it lacks simple search (yet).
  
== KEEPING TRACK OF THE DOWNLOAD AND BACKUP PROGRESS ==

[[File:iabakpage.png|300px|right]]

The IA.BAK project page, http://iabak.archiveteam.org/, provides a graphical overview of the progress of the project. Updated frequently, it shows the total amount of space backed up, the status (red to green) of the number of backups of a given set of data, and timelines of how data is being acquired from the original sources. It also includes maps showing the geographical disparity of the clients, as well as how the clients are performing.

It is a continual, ongoing challenge to make navigation of these pages intuitive and informative. As the dataset grows in size and complexity, major changes to the layout of the pages will continue.

As with everything else, the infrastructure maintaining this page is completely separate from Internet Archive machines and resources.
  
== CONCERNS ==

Since the beginning, a number of concerns have been voiced about the project, and while most can't be glibly dismissed, it is good to at least acknowledge them (as they tend to take up the first minutes of a conversation about the project).

* '''Bad Actors''' will always be a continuing concern, as a person or persons determined to prevent the backing up of data could execute endless actions to slow down, corrupt, or ruin their contributions of space.
* '''The Great Restore''' is the moment when data from IA.BAK must return to its home to handle a physical or software-based failure of the Internet Archive's data stores. (This could be anything from an earthquake to a fibre line cut to the dropping of an EMP.) A restore would definitely involve leasing a large amount of upload bandwidth and some amount of physical transfer.
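A standard first line of defense against bad actors is to compare the checksum each copy reports against the checksum recorded when the shard was first registered, and to drop any mismatching copy from the count of good copies. The sketch below shows only that bookkeeping; the host names and digests are invented for illustration.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def good_copies(reported, registered_digest):
    """Keep only the copies whose reported digest matches the digest
    recorded at registration time; a mismatch may be a bad actor or
    plain disk corruption, and either way the copy doesn't count."""
    return [host for host, digest in reported.items() if digest == registered_digest]

# Illustrative scenario: three hosts claim to hold the same shard.
shard = b"shard contents"
registered = sha256_hex(shard)
reported = {
    "host-a": sha256_hex(shard),
    "host-b": sha256_hex(b"tampered contents"),
    "host-c": sha256_hex(shard),
}

print(sorted(good_copies(reported, registered)))  # ['host-a', 'host-c']
```

A self-reported digest can of course be forged by a host that no longer holds the data, so a real verifier would also challenge hosts to hash unpredictable byte ranges rather than trust a stored answer.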
  
== See Also and Additional Information ==

Alternate proposed software suites for this project besides git-annex:

* [[INTERNETARCHIVE.BAK/torrents_implementation]]
* [[INTERNETARCHIVE.BAK/ipfs_implementation]]
* [[INTERNETARCHIVE.BAK/iabak-sharp_implementation]]

* [[Valhalla]] - Valhalla was a discussion about the "ultimate home" for uploaded Archive Team data, be it the Internet Archive, elsewhere, or both. IA.BAK is an example of a potential shared home, although Archive Team projects have not generally been backed up using IA.BAK, yet.

__NOTOC__

{{Navigation box}}

Latest revision as of 11:42, 5 July 2020

