Software

From Archiveteam
__NOTOC__
== WARC Tools ==
[[The WARC Ecosystem]] has information on tools to create, read and process WARC files.
== General Tools ==


* [[Wget|GNU WGET]]
** Backing up a Wordpress site: "wget --no-parent --no-clobber --html-extension --recursive --convert-links --page-requisites --user=<username> --password=<password> <path>"
* [https://curl.haxx.se/ cURL]
* [https://www.httrack.com/ HTTrack] - [[HTTrack options]]
* [http://pavuk.sourceforge.net/ Pavuk] -- a bit flaky, but very flexible
* [https://github.com/oduwsdl/warrick Warrick] - Tool to recover lost websites using various online archives and caches.
* [https://www.crummy.com/software/BeautifulSoup/ Beautiful Soup] - Python library for web scraping
* [https://scrapy.org/ Scrapy] - Fast Python web scraping and crawling framework
* [https://github.com/JustAnotherArchivist/snscrape snscrape] - Tool to scrape social networking services.
* [https://splinter.readthedocs.io/ Splinter] - Web app acceptance testing library for Python -- could be used along with a scraping lib to extract data from hard-to-reach places
* [https://sourceforge.net/projects/wilise/ WiLiSe] '''Wi'''ki'''Li'''nk '''Se'''arch - Python script that collects links to specific pages of a site by searching a wiki ([[wikipedia:MediaWiki|MediaWiki]]-type) that has [http://www.mediawiki.org/wiki/Api.php api.php] accessible or the [http://www.mediawiki.org/wiki/Extension:LinkSearch LinkSearch extension] enabled (the project is still very immature; at the moment the code is only available in [http://sourceforge.net/p/wilise/code/1/tree/code/trunk/ this SVN repository]).
* [[Mobile Phone Applications]] -- some notes on preserving old versions of mobile apps
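The MediaWiki api.php interface that WiLiSe relies on can also be queried directly. As a minimal sketch, a search query against a wiki's api.php can be assembled with the standard MediaWiki API parameters (<code>action=query</code>, <code>list=search</code>); the endpoint URL below is a hypothetical example, not a real wiki:

```python
from urllib.parse import urlencode

def build_search_url(api_base, term, limit=50):
    """Build a MediaWiki api.php search query URL (action=query, list=search)."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": term,   # search term
        "srlimit": limit,   # max results per request
        "format": "json",
    }
    return api_base + "?" + urlencode(params)

# Hypothetical wiki endpoint, for illustration only:
url = build_search_url("https://wiki.example.org/w/api.php", "archive")
print(url)
```

Fetching the resulting URL (e.g. with urllib or cURL above) returns a JSON list of matching page titles, which is the raw material a link-harvesting script works from.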
== Hosted tools ==
* [https://pinboard.in/ Pinboard] is a convenient social bookmarking service that will [http://pinboard.in/blog/153/ archive copies of all your bookmarks] for online viewing. The catch: it costs $11/year ($25/year if you want the archival feature), and you can only download archives of your 25 most recent bookmarks in a particular category. This may pose problems if you ever need to get your data out in a hurry.
* [https://freeyourstuff.cc/ freeyourstuff.cc] -- Extensible open-source ([https://github.com/eloquence/freeyourstuff.cc source]) Chrome plugin allowing users to export their own content (reviews, posts, etc.). Exports to JSON format, optionally publish to freeyourstuff.cc & mirrors under Creative Commons CC0 license. Supports Yelp, [[IMDB]], TripAdvisor, [[Amazon]], GoodReads, and [[Quora]] as of July 2019.


== Site-Specific ==


* [[Google]]
* [[Livejournal]]
* [[Twitter]]
* [http://code.google.com/p/somaseek/ SomaFM]
* [https://www.allmytweets.net/ All My Tweets] - Download the last 3,200 tweets from any user.


== Format Specific ==
* [http://www.shlock.co.uk/Utils/OmniFlop/OmniFlop.htm OmniFlop]


== Proposed ==


* [https://solidproject.org/ Solid project] attempts to make data portability a reality
* [https://datatransferproject.dev/ Data transfer project] is (a promise of) a quick implementation of [[wikipedia:GDPR|GDPR]] data portability by the [[wikipedia:GAFA|GAFA]] + Twitter


== Web scraping ==


* See [[Site exploration]]
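At its core, recursive web scraping is just extracting the links from each fetched page and queueing them for the next fetch. A minimal sketch of the link-extraction step, using only the Python standard library (real jobs would use Wget, HTTrack, or Scrapy from the lists above; the HTML snippet and URLs here are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute URLs from <a href="..."> attributes."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's base URL
                    self.links.append(urljoin(self.base_url, value))

# Illustrative page content; a crawler would fetch this over HTTP
page = '<a href="/about">About</a> <a href="http://example.org/x">X</a>'
parser = LinkExtractor("http://example.com/")
parser.feed(page)
print(parser.links)
```

A crawler repeats this on every fetched page, skipping URLs it has already seen, which is essentially what <code>wget --recursive</code> does under the hood.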


{{Navigation pager
| previous = Why Back Up?
| next = Formats
}}
{{Navigation box}}


[[Category:Tools| ]]

Revision as of 05:20, 4 December 2019
