I saw this post and I was curious what was out there.
https://neuromatch.social/@jonny/113444325077647843
I'd like to put my lab servers to work archiving US federal data that's likely to get pulled - climate and biomed data seem most likely. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Is anyone compiling a list of at-risk data yet?
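One small piece of that mirroring job can be sketched in Python: before seeding a mirrored dataset as a torrent, verify the local copy against a published checksum manifest so you don't propagate corrupted files. This is a minimal sketch under my own assumptions (file names and the manifest format are illustrative, not from any real agency's publishing scheme):

```python
# Sketch: verify a local dataset mirror against a SHA-256 manifest
# before seeding it. Manifest format ({filename: hex digest}) is an
# assumption for illustration.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large datasets don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the names of files that are missing or don't match the manifest."""
    return [
        name
        for name, expected in manifest.items()
        if not (root / name).exists() or sha256_of(root / name) != expected
    ]
```

An empty return list means the mirror is clean and safe to seed; anything else needs re-downloading first.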
I archive YouTube videos that I like with TubeArchivist. I have a playlist for random videos I'd like to keep, and I also subscribe to some of my favourite creators so I can keep their videos, even when I'm offline.
I'll add Pinchflat as an alternative with the same aim.
Seems nice, but you need an external player to watch the content, which can be good for some people. I like the web UI of TubeArchivist, though (even if it could certainly be improved).
They have an automated VM that downloads stuff in a distributed manner and uploads it to archive.org.
archive.org is hosted in the US and could end up being a valid target. It doesn’t strike me as being a very good place to securely store anything nowadays. I’d consider anything hosted in the US to be out.
One option that I've heard of in the past:
ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.
This seems pretty cool. I might actually host this.
That looks useful, I might host that. Does anyone have an RSS feed of at risk data?
I am using ArchiveBox; it is pretty straightforward to self-host and use.
However, it is very difficult to archive most news sites (and many other sites) with it. Cookie pop-ups and the like will often render the archived page unusable, and frequently archiving won't work at all because some bot protection (Cloudflare etc.) kicks in when ArchiveBox tries to access the site.
If anyone else has more success using it, please let me know if I am doing something wrong…
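One thing that helped me was checking snapshots after the fact, so I at least know which archives are silently broken. A minimal sketch, assuming the failure pages contain the usual Cloudflare challenge text (the marker strings are assumptions; adjust them to whatever your failed snapshots actually contain):

```python
# Sketch: a cheap post-archive sanity check for bot-protection pages.
# The marker strings are assumptions based on common Cloudflare
# challenge pages; tune them to your own failed snapshots.
FAILURE_MARKERS = (
    "Just a moment...",
    "Enable JavaScript and cookies to continue",
    "Verifying you are human",
)


def looks_blocked(html: str) -> bool:
    """Return True if an archived page looks like a bot-protection interstitial."""
    return any(marker in html for marker in FAILURE_MARKERS)
```

Running this over the saved HTML lets you flag snapshots to retry (e.g. with different user agents or cookies) instead of discovering dead archives years later.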
Eyy, I want that!
Going to check that out because…yeah. Just gotta figure out what and where to archive.
Flash drives and periodic transfers.
I don’t self-host it, I just use archive.org. That makes it available to others too.
It’s a single point of failure though.
In that they’re a single organization, yes, but I’m a single person with significantly fewer resources. Non-availability is a significantly higher risk for things I host personally.
Yes. This isn’t something you want your own machines to be doing if something else is already doing it.
But then who backs up the backups?
Realize how much they are supporting and storing.
Come back to the comments after.
I guess they back each other up. For example, archive.is is able to take archives from archive.org, and the saved page reflects the original URL and the original archiving time from the Wayback Machine (though it also notes the Wayback URL it pulled from and when that Wayback snapshot was taken).
Your argument is that a single backup is sufficient? I disagree, and I think most in the selfhosted and datahoarder communities would too.
There was the attack on the Internet Archive recently. Are there any good options out there to help mirror some of the data or otherwise provide redundancy?
I use M-Discs for long-term archival.
I heard news recently that some companies have started shipping non-M-Disc media labelled as M-Discs. You may want to have a look.
NOAA is at risk I think.
Everything is at risk.
Linkding/Linkwarden
I have a script that archives to:
- Internet Archive (Wayback Machine)
- archive.today
- Ghostarchive
- Self-hosted https://archivebox.io/
I used to solely depend on archive.org, but after the recent attacks, I expanded my options.
Script: https://gist.github.com/YasserKa/9a02bc50e75e7239f6f0c8f04fe4cfb1
EDIT: Added script. Note that the script doesn't include archiving to ArchiveBox, since its API isn't available in a stable version yet. You can add a function depending on your setup. Personally, I depend on Caddy and Docker, so I am using the caddy exec module [1] to execute commands, with this in my
Caddyfile:

```
:route /add {
	@params query url=*
	exec docker exec --user=archivebox archivebox archivebox add {http.request.uri.query.url} {
		timeout 0
	}
}
```
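The fan-out pattern in a script like this can be sketched in a few lines of Python. Only the Wayback Machine save endpoint below is a real, documented URL scheme; the extra targets are deliberately left as caller-supplied hooks, since the exact submission endpoints for archive.today, Ghostarchive, or a local ArchiveBox vary by setup:

```python
# Sketch: fan one URL out to several archiving services.
# Only the Wayback Machine save URL is concrete; extra_targets holds
# caller-supplied builders for other services (hypothetical hooks).
WAYBACK_SAVE = "https://web.archive.org/save/"


def wayback_save_url(url: str) -> str:
    """Build the Wayback Machine save URL for a page."""
    return WAYBACK_SAVE + url


def archive_everywhere(url: str, extra_targets=()):
    """Return the list of archive-request URLs to fire for one page."""
    targets = [wayback_save_url(url)]
    for build in extra_targets:  # e.g. your Ghostarchive or ArchiveBox hook
        targets.append(build(url))
    return targets
```

Keeping each service behind its own small builder function makes it easy to drop one target (say, when archive.org is under attack) without touching the rest of the script.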
Would you be willing to share it?
Sure.