archive.pl – a Perl script for archiving URL sets in the Internet Archive

Introduction

archive.pl reads a set of URLs from a text file and saves them in the Internet Archive. It fetches the documents itself and scrapes some metadata in order to generate an HTML link list that is suitable for posting to a blog, or as an Atom feed. Windows users who lack Perl on their machine can obtain it as an exe file.

Table of contents

Requirements

Perl 5.24 (earlier versions are not tested, but it is likely to work with every build that is capable of installing the required modules). If there are issues with installing the XMLRPC::Lite module, install it with CPAN's notest command.

Usage

Collect the URLs you want to archive in the file urls.txt, separated by line breaks and UTF-8-encoded, and call perl archive.pl without arguments. The script fetches the URLs and extracts some metadata (works with HTML and PDF), then submits them to the Internet Archive by opening them in a browser or via wget or PowerShell. This is necessary because the Internet Archive blocks robots globally. It then generates an HTML file with a link list that you may post to your blog. Alternatively, you can get the link list as an Atom feed. Additionally, you can post the links on Twitter. Regardless of the format, you can upload the output file to a server via FTP. If an archived URL points to an image, a thumbnail is shown in the output file. The following optional parameters are available:

-a                              output as Atom feed instead of HTML
-c <creator>                    name of feed creator (feed only)
-d <path>                       FTP path
-D                              Debug mode - don't save to Internet Archive
-f <filename>                   name of input file if other than `urls.txt`
-h                              Show commands
-i <title>                      Feed or HTML title
-k <consumer key>               Twitter consumer key
-n <username>                   FTP or WordPress user
-o <host>                       FTP host
-p <password>                   FTP or WordPress password
-r                              Obey robots.txt
-s                              Save feed in Wayback machine (feed only)
-t <access token>               Twitter access token
-T <seconds>                    delay per URL in seconds to respect IA's request limit
-u <URL>                        Feed or WordPress (xmlrpc.php) URL
-v                              version info
-w                              *deprecated*
-x <secret consumer key>        Twitter secret consumer key
-y <secret access token>        Twitter secret access token
-z <time zone>                  Time zone (WordPress only)
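A minimal session might look like this (the URLs are placeholders; the flags are taken from the table above, and the invocation is shown commented out because it contacts the Internet Archive):

```shell
# urls.txt: one URL per line, UTF-8-encoded (the script's default input file name)
printf '%s\n' \
  'https://example.com/' \
  'https://example.org/paper.pdf' > urls.txt

# Archive the list and emit the link list as an Atom feed titled "Links",
# waiting 20 seconds between URLs to respect the request limit:
#   perl archive.pl -a -i "Links" -T 20
```

The resulting outfile can then be posted to a blog or, with -o/-n/-p/-d, uploaded via FTP.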

Changelog

v2.0

  • Script can save all linked URLs, too (IA has restricted this service to logged-in users running JavaScript).
  • Debug mode (does not save to IA).
  • WordPress bug fixed (8-bit ASCII in text led to a database error).
  • Ampersand bug in ‘URL available’ request fixed.
  • Trim metadata.
  • Disregard robots.txt by default.

v1.8

  • Post HTML outfile to WordPress.
  • Wayback Machine saves all documents linked from the URL if it is HTML (Windows only).
  • Time delay between processing of URLs because the Internet Archive set up a request limit.
  • Version and help switches.

v1.7

  • Tweet URLs.
  • Enhanced handling of PDF metadata.
  • Always save biggest Twitter image.

v1.6

Not published.

v1.5

  • Supports wget and PowerShell (-w flag).
  • Displays the closest Wayback copy date.
  • Better URL parsing.
  • Windows executable is 64-bit only, since not all modules install properly on 32-bit.

v1.4

  • Enhanced metadata scraping.
  • Archive images from Twitter in different sizes.
  • Added project page link to outfile.
  • Remove UTF-8 BOM from infile.
  • User agent avoids the strings ‘archiv’ and ‘wayback’.
  • Internet Archive via TLS URL.
  • Thumbnail if URL points to an image.

v1.3

  • Debugging messages removed.
  • Archive.Org URL changed.

v1.2

  • Internationalized domain names (IDN) allowed in URLs.
  • Blank spaces allowed in URLs.
  • URL list must be in UTF-8 now!
  • Only line breaks allowed as list separator in URL list.

v1.1

  • Added workaround for Windows ampersand bug in Browser::Open (ticket on CPAN).

License

Copyright © 2015–2020 Ingram Braun
GPL 3 or higher.

Download

Clone the repository from GitHub:

$ git clone https://github.com/CarlOrff/archive.git
