I download lots of media files. Once I am done with them, I have been storing them on a 2TB hard disk, copying them over with rsync. This has worked fairly well so far, but the disk is getting close to full, so I now need a way to span my files across multiple disks. If I kept doing things the way I do now, I would end up copying files that are already on the other disk. Does the datahoarding community have any solutions to this?

For more information: my system runs Linux, and the 2TB drive is formatted with ext4. When I back up to the drive I use 'rsync -rutp'. I don't use multiple disks at the same time because I only have one USB SATA enclosure for 3.5-inch disks, and I don't keep the drive connected all the time since I don't need it often. I keep local copies until I am done with the files (and they are backed up).
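
For reference, that backup step amounts to something like this (the mount point /mnt/archive and the source directory are hypothetical; the flags are the ones quoted above):

    # -r recurse, -u skip files that are newer on the destination,
    # -t preserve modification times, -p preserve permissions
    rsync -rutp ~/media/ /mnt/archive/media/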

  • Dragonish@lemmy.dbzer0.com · 7 days ago

    Welcome! Without buying more enclosures and increasing the number of drives you can access at one time, you will need to partition your files based on your own use case and maintain an index so that you can easily retrieve the right drive when you need the data. Perhaps you get a drive for each year. Perhaps images go to one drive and video to another. Perhaps you split on the file name. The index can be as simple as labeling the drives and putting them on a shelf. As mentioned by others, there are software solutions for indexing file metadata as well.
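
    A minimal index can be nothing more than a file listing per drive, e.g. (the paths and the drive label are hypothetical):

        # generate a searchable listing for the drive labeled "video-2023"
        find /mnt/archive -type f > ~/drive-indexes/video-2023.txt
        # later, find which drive holds a file without plugging anything in
        grep -i "some title" ~/drive-indexes/*.txt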

    If you buy more enclosures, you can use mergerfs or another union filesystem to bring the disks together into a single view while keeping ext4 on each drive. This lets you easily remove a single drive and plug it into any basic Linux distro, but you will not get any data striping or other protection: if one drive dies, you lose whatever data was stored on that disk. Because of that, I advise you to still think about how you partition your files even if you union them, so that you understand your failure scope.
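
    A minimal sketch of what that looks like (the mount points are hypothetical; the option names are from the mergerfs documentation):

        # pool two independent ext4 drives into one view
        mkdir -p /mnt/pool
        mergerfs -o defaults,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool
        # category.create=mfs sends new files to the branch with the most free space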

  • some_guy@lemmy.sdf.org · 8 days ago

    Can you not just modify the paths to sync one dir to one disk and another dir to the other? Maybe there's a sorting system you don't want to break up, such as a Plex library, but I think you can point a system like that at a higher-level dir and have it sort things out. Sounds like you need to silo your data.
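
    In rsync terms, that siloing could be as simple as this (the mount points and directory names are hypothetical; the flags are the ones from the post):

        # movies and TV go to one drive, everything else to the other
        rsync -rutp ~/media/movies/ /mnt/disk1/movies/
        rsync -rutp ~/media/tv/     /mnt/disk1/tv/
        rsync -rutp ~/media/books/  /mnt/disk2/books/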

  • GeorgimusPrime@lemmy.world · 8 days ago

    You will need a way of connecting both your current 2TB disk and a new one at the same time. A USB hub (if you don't have free USB ports) and a second enclosure, or a 2-bay disk dock (much cheaper than a NAS device, and no networking required), will do.

    You can then combine their storage with mergerfs (available in most distros' repositories). Both disks will still work independently, and you can use indexing software like gwhere, cdcat or gcstar to scan each drive so you can tell where a particular file ended up.
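
    If you want the pool to come up automatically, mergerfs can also be mounted from /etc/fstab; a sketch with hypothetical mount points:

        # /etc/fstab -- pool /mnt/disk1 and /mnt/disk2 at /mnt/pool
        /mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,category.create=mfs,nofail  0 0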

    You might also be able to buy yourself some more space by using jdupes or rdfind to hardlink duplicate files.
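
    A sketch of that (the flags are from the tools' man pages; the path is hypothetical, and hardlinking only saves space for duplicates on the same filesystem):

        # dry run first: just report duplicate sets
        jdupes -r /mnt/pool
        # replace duplicates with hardlinks (-L), recursing (-r)
        jdupes -rL /mnt/pool
        # rdfind equivalent
        rdfind -makehardlinks true /mnt/pool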

  • digdilem@lemmy.ml · 7 days ago

    "storing these files after I am done with them"

    If you're done with them, why not move them onto an archive disk rather than keeping them live and backing them up as well?

    I've been doing this for a long time. I move files locally into a "To-Archive" directory and, once in a while, move them out to several disks based on content: films, TV, apps, games, books, that sort of thing.
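
    A sketch of that move step (the directory names are hypothetical; --remove-source-files is a standard rsync flag that deletes each source file after a verified transfer):

        # stage locally, then move (copy + delete source) to the archive disk
        rsync -rutp --remove-source-files ~/To-Archive/books/ /mnt/books2/
        # clean up the now-empty directories left behind
        find ~/To-Archive/books -type d -empty -delete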

    Once one disk is full, I put another old HDD in a disk caddy and label it "Books #2", and so on.

    I use a Windows program called Cathy, which indexes the files, making it easy to locate a file whatever disk it's on. Looks like there's a Linux version available too.

    This works okay for me, and gives a use for old spinny hard drives. It's not infallible, but for stuff I could replace (i.e. I downloaded it), I consider that an acceptable risk. All media has a risk of becoming unreadable, but do be realistic about how much bother it would be to replace stuff.

    For data that's unique (i.e. I made it, plus OS backups), I use an offline grandfather/father/son rotation: a new backup once a month, and once a year the oldest is turned into an annual backup. (Full explanation of my setup is here if you're interested.)
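
    A hedged sketch of one way to script that kind of monthly rotation (the paths and names are hypothetical, not my actual setup; --link-dest makes unchanged files hardlinks into the previous snapshot so each month costs little extra space):

        #!/bin/sh
        DEST=/mnt/backup
        NOW=$(date +%Y-%m)
        # find the most recent monthly snapshot, if any
        PREV=$(ls -1d "$DEST"/monthly-* 2>/dev/null | tail -n 1)
        if [ -n "$PREV" ]; then
            rsync -a --link-dest="$PREV" ~/unique-data/ "$DEST/monthly-$NOW/"
        else
            rsync -a ~/unique-data/ "$DEST/monthly-$NOW/"
        fi
        # once a year, keep one snapshot as the annual backup (cp -al hardlinks, so it's cheap)
        if [ "$(date +%m)" = "01" ]; then
            cp -al "$DEST/monthly-$NOW" "$DEST/annual-$(date +%Y)"
        fi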