archive.pl lets you collect URLs in a text file and stores them in the Internet Archive. It fetches the documents itself and scrapes some metadata in order to generate an HTML link list suitable for posting to a blog, or as an Atom feed. Windows users who lack Perl on their machine can obtain it as an EXE file.
Perl 5.24 (earlier versions are untested, but the script is likely to work with every build that is capable of installing the required modules).
Collect the URLs you want to archive in the file urls.txt, UTF-8-encoded and separated by line breaks, and call perl archive.pl without arguments. The script does two things: it fetches the URLs and extracts some metadata (this works with HTML and PDF), and it submits them to the Internet Archive by opening them in a browser. This is necessary because the Internet Archive blocks robots globally. It then generates an HTML file with a link list that you can post to your blog. Alternatively, you can get the link list as an Atom feed. Regardless of the format, you can upload the file to a server via FTP. Optional features:
- Enhanced metadata scraping.
- Archive images from Twitter in different sizes.
- Added project page link to outfile.
- Remove UTF-8 BOM from infile.
- User agent avoids the strings archiv and wayback.
- Internet Archive via TLS URL.
- Thumbnail if URL points to an image.
- Debugging messages removed.
- Archive.Org URL changed.
- Internationalized domain names (IDN) allowed in URLs.
- Blank spaces allowed in URLs.
- URL list must be in UTF-8 now!
- Only line breaks allowed as list separator in URL list.
- Added workaround for Windows ampersand bug in Browser::Open (ticket on CPAN).
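Basic usage can be sketched as follows. The example URLs are placeholders; urls.txt, the line-break separator, the UTF-8 requirement, and the argument-less invocation are as described above:

```shell
# Build the UTF-8 URL list, one URL per line (line breaks are the
# only separator the script accepts in its URL list).
printf '%s\n' \
  'https://example.org/article.html' \
  'https://example.org/paper.pdf' > urls.txt

# Then run without arguments: the script fetches each URL, scrapes
# metadata, opens the pages in a browser for Internet Archive
# submission, and writes the HTML link list (or an Atom feed).
# perl archive.pl
```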
Copyright © – Ingram Braun
GPLv3 or later.
or clone it from GitHub: