High-quality photo sharing & selling site
|URL||• • •|
|Project status||Online, but Creative Commons-licensed images Endangered|
|Archiving status||CC-licensed images Saved!|
500px is a photo sharing site that caters to high-quality photos. It provides ways for photographers to sell their images, as well as a large collection of images to view.
Creative Commons image massacre
500px announced that it would no longer directly license images through 500px Marketplace, instead outsourcing distribution duties to Getty Images (and Visual China Group within China). One consequence of this is that all Creative Commons-licensed images, as well as images whose uploaders have opted out of distribution, will disappear by June 30, 2018.
Start Your Warriors
Some images have already been archived, and a Warrior project is underway to find and save more. Check your settings to ensure that the 'maximum number of simultaneous items' is set to 1; if you exceed this, you run the risk of 500px banning your IP for a period of time.
If you want to run more Warriors (or more concurrent scripts), you'll need more IP addresses. If you can help, please do! Discussion is in.
Example of one of the responses: https://pastebin.com/TygNSTSu
Unfortunately, the official API is dead - but fear not, fellow archivists! I have found an (admittedly bodged-together) method of getting API info: using the Burp Suite Pro network security tools, I set up a MITM proxy between a VM (with a custom SSL CA certificate installed) and the server. (It seems that 500px uses a combination of browser cookies, a device UUID, and a few other keys to block widespread use of their API.) After intercepting a request to api.500px.com, I cloned the request and sent it to the "Intruder" tool, where I set the page string in the GET request as a 'payload', then had it auto-increment the page number while processing the requests and saving the responses. I set the limit to 1000, although I ended up stopping at around 900 because I noticed the responses were turning empty (and there's a total-pages number in the API info). I 7-zipped all of the responses and uploaded them to the Internet Archive.
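The auto-incrementing page trick described above can be sketched in Python. The response field names (`photos`, `total_pages`) are assumptions based on the description, not the documented 500px API; a real run would replay the cloned cookies and device UUID with an HTTP client, which is stubbed out here:

```python
from typing import Callable, List

def fetch_all_pages(fetch: Callable[[int], dict], max_pages: int = 1000) -> List[dict]:
    """Call fetch(page) with auto-incrementing page numbers, collecting
    responses until a page comes back empty or max_pages is reached."""
    responses = []
    for page in range(1, max_pages + 1):
        data = fetch(page)
        if not data.get("photos"):      # responses "turn empty" past the last page
            break
        responses.append(data)
        # the API info also reports a total-pages count we can respect
        total = data.get("total_pages")
        if total is not None and page >= total:
            break
    return responses

# Fake fetcher standing in for the intercepted, replayed request:
def fake_fetch(page: int) -> dict:
    if page > 3:
        return {"photos": []}
    return {"photos": [{"id": page}], "total_pages": 3}
```

Injecting the fetch function keeps the paging logic testable without touching the network; `fetch_all_pages(fake_fetch)` collects exactly three pages and stops.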
I also had a go at writing a Python script that, given a list of URLs, parses and downloads all of the metadata and photos from those URLs: https://github.com/adinbied/500pxBU
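The parse-then-download loop such a script performs might look like the sketch below. The JSON field names (`photos`, `image_url`, `license_type`) are assumptions about the saved response shape, not verified against the real data:

```python
import json
import urllib.request
from typing import Iterator, Tuple

def extract_photos(api_response: str) -> Iterator[Tuple[int, str, int]]:
    """Yield (photo id, image URL, license code) tuples from one saved
    API response body. Field names are guesses at the response shape."""
    data = json.loads(api_response)
    for photo in data.get("photos", []):
        yield photo["id"], photo["image_url"], photo.get("license_type", 0)

def download(url: str, dest: str) -> None:
    """Fetch one image to disk (a real run would add retries/rate limits)."""
    urllib.request.urlretrieve(url, dest)

# A minimal response body in the assumed shape:
sample = '{"photos": [{"id": 42, "image_url": "https://example.com/42.jpg", "license_type": 4}]}'
```

Each archived API response would be fed through `extract_photos`, with the resulting URLs handed to `download`.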
We found a sitemap via robots.txt that was initially ignored due to its small size. When decompressed, however, the XML document points at ~10k other compressed XML sitemap documents, enumerating all photos on the site with limited metadata attached, including the license. We filtered for CC licenses and used this to generate new items.
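The sitemap pipeline can be sketched as: decompress the index, pull the per-chunk sitemap URLs out of its `<loc>` elements, then filter the extracted entries by license. The XML namespace is the standard sitemaps.org one; the `license` key and the exact license strings are assumptions about the extracted metadata, not 500px's actual tagging:

```python
import gzip
import xml.etree.ElementTree as ET

SM = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_locs(xml_bytes: bytes) -> list:
    """Extract every <loc> URL from a sitemap or sitemap-index document."""
    root = ET.fromstring(xml_bytes)
    return [loc.text for loc in root.iter(SM + "loc")]

def decompress(gz_bytes: bytes) -> bytes:
    """Sitemap chunks are gzip-compressed; inflate before parsing."""
    return gzip.decompress(gz_bytes)

# Keep only entries whose metadata marks a Creative Commons license.
CC_LICENSES = {"CC-BY", "CC-BY-SA", "CC-BY-ND", "CC-BY-NC",
               "CC-BY-NC-SA", "CC-BY-NC-ND", "CC0"}

def is_cc(entry: dict) -> bool:
    return entry.get("license") in CC_LICENSES

# A tiny index in the standard sitemap-index format:
index = b"""<?xml version="1.0"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://500px.com/sitemap1.xml.gz</loc></sitemap>
  <sitemap><loc>https://500px.com/sitemap2.xml.gz</loc></sitemap>
</sitemapindex>"""
```

In a real run, each URL returned by `sitemap_locs(index)` would be fetched, run through `decompress`, parsed again with `sitemap_locs`, and its entries filtered with `is_cc` to generate new items.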