| URL = http://www.blogger.com/
| project_status = {{online}}
| archiving_status = {{notsavedyet}}
| source = [https://github.com/ArchiveTeam/blogger-discovery blogger-discovery]
| tracker = [http://tracker.archiveteam.org/bloggerdisco/ bloggerdisco]
| irc = frogger
}}


'''Blogger''' is a blog hosting service. On February 23, 2015, it announced that "sexually explicit" blogs would be restricted from public access within a month, but it soon withdrew the plan and said its existing policies would not change.<ref>https://support.google.com/blogger/answer/6170671?p=policy_update&hl=en&rd=1</ref> With the exception of removing [[Google+]] comments, there has not been an update on the [https://blogger.googleblog.com/ official blog] in around a year, and Google has also moved away from Blogger for its own company blogs. For these reasons, Blogger is at risk of shutting down.
 
'''ArchiveTeam ran a discovery project between February and May 2015, but the actual content has not been downloaded yet.'''


== Strategy ==


Find as many http://foobar.blogspot.com domains as possible and download them. [https://archive.org/details/all_blogger_subdomains A full list is available] (as of February 2020), and a new list can be generated with [[User:Trumad|these instructions]]; otherwise, manual discovery can be attempted. Blogs often link to other blogs, so each blog saved helps discover others. A small-scale crawl of Blogger profiles (e.g. <nowiki>http://www.blogger.com/profile/{random number up to 35217655}</nowiki>) will also provide links to the blogs authored by each user (e.g. https://www.blogger.com/profile/5618947 links to http://hintergedanke.blogspot.com/). Note that this does not cover all bloggers or all blogs; it is merely a starting point for further discovery.
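For illustration, the link-extraction step of the profile crawl can be sketched in Python. The regular expression and the <code>extract_blogspot_domains</code> helper are assumptions for this sketch, not part of the actual discovery scripts:

```python
import re

# Match links such as http://hintergedanke.blogspot.com and capture the
# subdomain. Country-specific variants (.blogspot.co.uk etc.) are not
# covered by this simplified pattern.
BLOGSPOT_RE = re.compile(r'https?://([a-z0-9-]+)\.blogspot\.com', re.IGNORECASE)

def extract_blogspot_domains(html):
    """Return the unique blogspot subdomains linked from a fetched page."""
    return sorted({m.lower() for m in BLOGSPOT_RE.findall(html)})
```

Feeding it the HTML of a profile page would yield the subdomains to queue for download.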


== Country Redirect ==

Accessing http://whatever.blogspot.com will usually redirect to a country-specific subdomain depending on your IP address (e.g. whatever.blogspot.co.uk, whatever.blogspot.in, etc.), which in some cases may be censored or edited to meet local laws and standards. This can be bypassed by requesting http://whatever.blogspot.com/ncr as the root URL.[2] [3]
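As a sketch of the bypass, a hypothetical helper can rewrite a country-specific root URL back to its <code>.com/ncr</code> form. The function name and the simplified ccTLD pattern are assumptions for this example:

```python
import re

# Capture everything up to ".blogspot", then swallow whatever ccTLD
# follows (".co.uk", ".in", ".com", ...). Only root URLs are handled;
# any path after the hostname is dropped.
CC_RE = re.compile(r'^(https?://[a-z0-9-]+\.blogspot)\.[a-z.]+?(/|$)', re.IGNORECASE)

def no_country_redirect(url):
    """Force the .com/ncr (no country redirect) form of a blogspot root URL."""
    m = CC_RE.match(url)
    if not m:
        return url
    return m.group(1) + '.com/ncr'
```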
== Downloading a single blog with Wget ==

These Wget parameters can download a Blogspot blog, including comments and any on-site dependencies. They should also reject redundant pages such as the /search/ directory and multiple occurrences of the same page with different query strings. This has only been tested on blogs using a Blogspot subdomain (e.g. http://foobar.blogspot.com), not custom domains (e.g. http://foobar.com). Both instances of [URL] should be replaced with the same URL. A simple Perl wrapper is available here.

<pre>
wget --recursive --level=2 --no-clobber --no-parent --page-requisites --continue --convert-links --user-agent="" -e robots=off --reject "*\?*,*@*" --exclude-directories="/search,/feeds" --referer="[URL]" --wait 1 [URL]
</pre>

'''Update:''' use this improved bash script instead, in order to bypass the adult-content confirmation. BLOGURL should be in http://someblog.blogspot.com format.

<pre>
#!/bin/bash
blogspoturl="BLOGURL"
wget -O - "blogger.com/blogin.g?blogspotURL=$blogspoturl" | grep guestAuth | cut -d'"' -f 4 | wget -i - --save-cookies cookies.txt --keep-session-cookies
wget --load-cookies cookies.txt --recursive --level=2 --no-clobber --no-parent --page-requisites --continue --convert-links --user-agent="" -e robots=off --reject "*\?*,*@*" --exclude-directories="/search,/feeds" --referer="$blogspoturl" --wait 1 $blogspoturl
</pre>

== Export XML trick ==

Add this to a blog URL and it will download the most recent 499 posts (that is the limit): <code>/atom.xml?redirect=false&max-results=</code>
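A minimal sketch of building that feed URL; the helper name is an assumption, and the default of 499 comes from the limit stated above:

```python
def atom_export_url(blog_url, max_results=499):
    """Return the atom.xml URL that lists up to `max_results` posts."""
    return blog_url.rstrip('/') + '/atom.xml?redirect=false&max-results=%d' % max_results
```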


== How can I help? ==
 
=== Running the Warrior ===
Start up the [[Warrior]] and select the ''Blogger Discovery'' project. '''Do not''' increase the default concurrency of 2, because Google limits requests aggressively (you can get blocked for around 45 minutes, maybe less). If you see "503 Service Unavailable" messages, decrease the concurrency to 1.
 
=== Running the script manually ===
See details here: http://github.com/ArchiveTeam/blogger-discovery
 
'''Do not''' increase the concurrency above 2, because Google limits requests aggressively (you can get blocked for around 45 minutes, maybe less). If you see "503 Service Unavailable" messages, decrease the concurrency to 1.
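The back-off behaviour suggested above can be sketched as follows. The delay values are illustrative assumptions, apart from the roughly 45-minute block duration mentioned above, which is used as the cap:

```python
# Sketch: on repeated 503 responses, wait with an exponentially
# increasing delay instead of retrying immediately, capped at the
# ~45-minute block duration observed for Google's rate limiting.
def backoff_delays(base=60, cap=45 * 60, retries=6):
    """Return the sleep durations (in seconds) for successive 503 responses."""
    delays = []
    delay = base
    for _ in range(retries):
        delays.append(min(delay, cap))
        delay *= 2
    return delays
```

A worker would sleep for each value in turn before retrying, then give up (or wait for a fresh cookie) once the list is exhausted.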
 
=== Speeding things up ===
 
==== Disclaimer ====
'''The following method is not ArchiveTeam's official recommendation. ''You'' are solely responsible for any consequences of using this abusive method.'''
 
Once you solve Google's captcha and use the resulting cookie, the request limit does not apply for three hours; during that window you can hammer Blogger as intensely as you like.
 
A modified <code>discover.py</code> script is available [http://paste.archivingyoursh.it/foleloboya.py here]. It uses a cookies file, shortens the sleep after hitting a captcha (so you don't have to wait 45 minutes if you have the cookie), and removes the sleep between requests entirely. Replace the original <code>discover.py</code> script with it.
 
In addition, every three hours you have to:
* When the script runs into a captcha, go to http://blogger.com/ and solve it.
* Export your cookies with some tool; for Firefox there is [https://addons.mozilla.org/hu/firefox/addon/export-cookies/?src=api this] extension. Save the file as <code>cookies.txt</code> in the folder where <code>discover.py</code> resides.
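The exported <code>cookies.txt</code> is in the Netscape cookie format: one cookie per line with seven tab-separated fields (domain, subdomain flag, path, secure flag, expiry, name, value). A sketch of locating the exemption cookie in such a file; the <code>find_cookie</code> helper is hypothetical, not part of the modified script:

```python
def find_cookie(cookies_txt, name='GOOGLE_ABUSE_EXEMPTION'):
    """Return the named cookie from Netscape-format cookies.txt text, or None."""
    for line in cookies_txt.splitlines():
        # Skip the "# Netscape HTTP Cookie File" header and comments.
        if not line or line.startswith('#'):
            continue
        fields = line.split('\t')
        if len(fields) == 7 and fields[5] == name:
            return {'domain': fields[0], 'expires': int(fields[4]), 'value': fields[6]}
    return None
```

Comparing the <code>expires</code> field against the current Unix time tells you whether the three-hour window is still open.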


In case you want to renew the cookie before the three hours expire, find and delete the cookie named <code>GOOGLE_ABUSE_EXEMPTION</code> in your browser, and then repeat the steps above. '''Note:''' changing the cookie's expiry date has no effect.

'''Do not''' leave the script unattended without solving the captcha promptly after the cookie expires; otherwise items will be continuously wasted. If you have to leave the script alone, schedule its stop by issuing <code>sleep 10400; touch STOP</code> in its folder right when renewing the cookie. (This stops the script after roughly three hours; to restart the script, run <code>rm STOP</code> first.)

== Your own blogs ==

Download them at https://takeout.google.com/settings/takeout

We have not tested whether the exported output is suitable for importing into any other software such as WordPress.
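The STOP-file convention used above can be mimicked in a worker loop with a sketch like this; <code>should_stop</code> and its <code>deadline</code> parameter (the equivalent of <code>sleep 10400; touch STOP</code>) are hypothetical, not part of the actual scripts:

```python
import os
import time

def should_stop(workdir='.', deadline=None):
    """True once a STOP file exists in workdir or the cookie deadline has passed.

    A worker would call this between items and exit cleanly when it
    returns True, matching the tracker scripts' STOP-file behaviour.
    """
    if os.path.exists(os.path.join(workdir, 'STOP')):
        return True
    return deadline is not None and time.time() >= deadline
```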


== External links ==

== References ==
<references/>


[[Category:Google]]
[[Category:Blogging]]

Revision as of 13:46, 20 August 2020
