Revision as of 13:30, 14 May 2020

Photobucket
The Photobucket home page as seen on 2011-01-12
URL: http://photobucket.com
Status: Online!
Archiving status: Not saved yet
Archiving type: Unknown
IRC channel: #archiveteam-bs (on hackint)

Photobucket is an image hosting service. It also hosts Twitter images.

Vital signs

Seems stable, though it suffered a major outage around the turn of the year from 2019 to 2020: https://thedeadpixelssociety.com/power-outage-impacts-photobucket-over-key-holiday-period/

Archiving Photobucket

PB_Shovel can be used to download all images from an album and all subalbums (recursively).

This version of PB_Shovel has also been modified with a `--links-only` option that crawls a list of every URL in the album and its subalbums instead of downloading them. This makes it possible to grab all the files yourself with wget, or to archive them as WARC with grab-site.

Usage: obtain all the URLs of a Photobucket album, including subalbums (using the -r parameter), and write them to links-<datetime>.txt. This URL file can then be given to wget or grab-site to download.

   python pb_shovel.py 'http://s160.photobucket.com/user/Spinningfox/library/Internet Fads' -r --links-only
   wget -i links-2016-03-20_02-03-01.txt
   grab-site -i links-2016-03-20_02-03-01.txt
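
Before feeding the links file to wget or grab-site, it can help to deduplicate it and drop stray non-Photobucket entries. The following is a minimal sketch, not part of PB_Shovel; the `clean_links` helper and the simple `photobucket.com` substring check are assumptions for illustration.

```python
def clean_links(lines):
    """Return unique Photobucket URLs from raw lines, preserving order."""
    seen = set()
    cleaned = []
    for line in lines:
        url = line.strip()
        # Skip blank lines and anything that is not a Photobucket URL
        # (substring check is a simplification for illustration).
        if not url or "photobucket.com" not in url:
            continue
        if url not in seen:
            seen.add(url)
            cleaned.append(url)
    return cleaned


# Example: clean a links file in place before passing it to wget -i / grab-site -i.
# with open("links-2016-03-20_02-03-01.txt") as f:
#     urls = clean_links(f)
```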


External links

* http://photobucket.com
* https://www.thephotoforum.com/threads/how-to-get-around-photobuckets-watermarks-on-your-images.438126/ - how to modify URLs in order to archive images without watermarks.