NewsGrabber

This project has been superseded by URLs; the details below are kept for historical purposes.
NewsGrabber - Archiving all the world's news!
URL: https://wiki.newsbuddy.net/Main_Page
Status: Special case
Archiving status: Project superseded by URLs
Archiving type: DPoS
Project source: NewsGrabber-Warrior
Project tracker: newsgrabber
IRC channel: #newsgrabber (on hackint)
Data: archiveteam_newssites

NewsGrabber is a project to save as many news articles as possible from as many websites as possible. This project has been superseded by URLs.

Why

A lot of news articles are saved in the Wayback Machine by the Internet Archive. They're mostly saved via the Top News collection (part of the Focused Crawls), crawls of GDELT URLs, the International News Crawls, and general Wide Crawls.

We think these crawls are far from complete coverage of the world's news articles:

  • Crawls on websites from Top News in the Focused Crawls follow each website from its seed URL up to 5 layers deep, once a day. Not many websites are covered, and those that are covered are mostly in English.
  • GDELT does a very good job of covering news from around the world, but sometimes misses more local websites and many non-English websites.
  • Wide Crawls are focused on the whole World Wide Web, not on news articles.

NewsGrabber

NewsGrabber was written to solve the problem of missing news articles. It contains a database of URLs to be checked for articles, and anyone can add new websites to the database. Each website entry can have multiple seed URLs, all of which are crawled periodically to look for new article URLs. yt-dlp can be used to download article URLs, so news published as video can be preserved just as well as news published as text.
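
As a rough sketch of the flow (not the project's actual code; the names here are hypothetical), the periodic discovery step amounts to fetching each seed URL and collecting the links that match the website's article pattern:

  import re
  import urllib.request

  # Hypothetical website entry; NewsGrabber keeps many of these in its database.
  site = {
      'seed_urls': ['https://example.com/news', 'https://example.com/news/feed'],
      'article_regex': re.compile(r'https?://example\.com/news/\d{4}/[^"<>\s]+'),
  }

  def discover_articles(site):
      """Fetch each seed URL and return the article URLs found on it."""
      found = set()
      for seed in site['seed_urls']:
          with urllib.request.urlopen(seed) as response:
              html = response.read().decode('utf-8', errors='replace')
          found.update(site['article_regex'].findall(html))
      return found

Every URL returned here that has not been seen before would then be queued for grabbing.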

How can I help?

There are two ways you can help:

  • There are countless news websites in the world. Make our list more complete! Add news sites to our database; see the next section or here for how.
  • Donate server power for the grabbing process. Join #newsgrabber (on hackint) and find user HCross.

Options

NewsGrabber supports several options for discovering and processing URLs. More details on using these options to add new websites can be found in the project's README [1].
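
For illustration, a website entry could look something like the sketch below. The field names are assumptions made for readability here; the real format is documented in the README [1].

  # Hypothetical website definition; consult the README for the actual format.
  refresh = 300          # refreshtime: seconds between crawls of the seed URLs
  urls = [               # seed URLs checked for new article links
      'https://example.com/news',
      'https://example.com/news/feed',
  ]
  regex = [r'^https?://example\.com/news/']        # article URLs to grab
  videoregex = [r'^https?://example\.com/video/']  # URLs handed to yt-dlp
  liveregex = [r'^https?://example\.com/live/']    # live pages, re-grabbed continuously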

Refreshtime

The refreshtime is how long NewsGrabber waits between crawls of a website's seed URLs. It can be as low as 5 seconds.
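
In scheduling terms the refreshtime is simply the sleep interval between passes over a website's seed URLs, as in this sketch (reusing the hypothetical discover_articles from the sketch above):

  import time

  def crawl_forever(site, refresh=300):
      """Re-crawl the website's seed URLs every `refresh` seconds."""
      refresh = max(refresh, 5)    # 5 seconds is the lowest allowed refreshtime
      while True:
          discover_articles(site)  # hypothetical discovery step sketched earlier
          time.sleep(refresh)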

SeedURLs

SeedURLs are used by NewsGrabber to discover new articles. Often not all news articles are shown on the front page of a website; they may be spread over several sections or RSS feeds. A list of these URLs can be given to NewsGrabber so that articles from all of them are found, not only those on the front page.
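
RSS feeds make particularly good seed URLs because they list new articles directly. As a sketch, assuming a standard RSS 2.0 feed with <item><link> entries, the Python standard library is enough to pull the article links out:

  import urllib.request
  import xml.etree.ElementTree as ET

  def articles_from_rss(feed_url):
      """Return the article links listed in an RSS 2.0 feed."""
      with urllib.request.urlopen(feed_url) as response:
          tree = ET.parse(response)
      return [item.findtext('link') for item in tree.iter('item')]

  # e.g. articles_from_rss('https://example.com/news/feed')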

Videos

NewsGrabber supports videos. URLs containing videos are downloaded with yt-dlp if they match the regex given for videoURLs.
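
Conceptually the routing looks like the sketch below. The pattern and the stand-in for the regular page grab are assumptions; only the yt-dlp invocation reflects what the text above describes.

  import re
  import subprocess

  videoregex = re.compile(r'^https?://example\.com/video/')  # hypothetical pattern

  def grab(url):
      """Download a discovered URL, routing video pages to yt-dlp."""
      if videoregex.match(url):
          subprocess.run(['yt-dlp', url], check=True)      # yt-dlp fetches the video itself
      else:
          subprocess.run(['wget', '-p', url], check=True)  # stand-in for the regular page grab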

Livepages

In case of big events, a live page is often created with the latest news on the event. NewsGrabber grabs URLs matching the regex given for liveURLs over and over, capturing every new bit of information on them.
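
A sketch of that behaviour, with hypothetical names and polling interval: fetch the live page over and over, and keep a new snapshot whenever its content changes.

  import hashlib
  import time
  import urllib.request

  def watch_live_page(url, interval=30):
      """Re-grab a live page repeatedly, yielding each new version of it."""
      last_hash = None
      while True:
          with urllib.request.urlopen(url) as response:
              body = response.read()
          digest = hashlib.sha256(body).hexdigest()
          if digest != last_hash:  # the page changed, so this snapshot is new
              last_hash = digest
              yield body
          time.sleep(interval)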

Immediate grab

When emergencies happen, news sites try to cover as much as they can about what is happening. False information is often published and later removed. To catch articles that are later removed, NewsGrabber can be told to grab newly found articles immediately instead of adding them to the regular list, which is grabbed once every hour.
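
In effect this is a routing decision between two paths, sketched below with hypothetical names (reusing grab() from the video sketch above): newly found articles normally wait for the hourly batch, while flagged websites skip the queue entirely.

  import queue

  hourly_batch = queue.Queue()   # drained once every hour by the grabbers

  def enqueue_article(url, immediate=False):
      """Route a newly discovered article URL."""
      if immediate:
          grab(url)              # grab right away, before the article can be taken down
      else:
          hourly_batch.put(url)  # wait for the next hourly pass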

Lists to include