What do you use for cloud file encryption?


Comments

  • williewillie OG
    edited February 2022

    Thanks, will take a look. I see that it wants to compress my data, which is bad because any large data collection will usually already be compressed, e.g. Linux ISOs are typically already gzipped (a quick compressibility check is sketched below).

    Added: at first glance Kopia looks very similar to borgbackup, in both features and style. I wonder what is going on. I do like that it supports sftp and other backends such as s3 directly.
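
    Since the worry above is re-compressing already-compressed data, here is a minimal sketch of how you could check whether a data set would actually benefit from compression before picking a setting. This assumes Python 3 and the standard zlib module; the 4MB sample size and the use of zlib are my own arbitrary choices, not anything borg or Kopia does internally.

    ```python
    import zlib
    from pathlib import Path

    def compression_ratio(path: Path, sample_bytes: int = 4 * 1024 * 1024) -> float:
        """Compress the first few MB of a file and return compressed/original size."""
        with path.open("rb") as fh:
            data = fh.read(sample_bytes)
        if not data:
            return 1.0
        return len(zlib.compress(data, 6)) / len(data)

    # Already-compressed files (ISOs, media, archives) come back close to 1.0,
    # meaning compression in the backup tool would buy almost nothing.
    for f in sorted(Path(".").glob("*.iso")):
        print(f.name, round(compression_ratio(f), 3))
    ```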

  • I decided to go with borg for now since it's supported by Hetzner. It looks like they have a borg server (not just ssh) on the remote side. I have a backup running with most options left at default, getting 35-40MB/s from a local dedi. Not terrible, but I think I may have been getting 2x that from plain sftp a while back; I'll do another benchmark when this current run finishes. I have around 3TB to back up from this box, so that's around 24h of transfer at this speed (rough math sketched below), ugh.

    It is doing deduping, compression, and encryption by default. There are 2 processes running, borg itself at around 75% CPU and ssh at around 60% (this is a 4-core box, so 400% is available). Since neither process is maxing out a full core, I'm probably limited by the speed of the ssh transfer.

    Anyway I guess I can live with this. It is mostly cold data so once the transfer finishes I won't have to think about it for a while.
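
    For the 24h figure above, here is the back-of-the-envelope math as a tiny sketch; the sizes and speeds are just the numbers quoted in this comment, nothing reported by borg itself:

    ```python
    def transfer_hours(total_gb: float, mb_per_s: float) -> float:
        """Hours to move total_gb gigabytes at a sustained mb_per_s megabytes per second."""
        return (total_gb * 1000) / mb_per_s / 3600

    # ~3TB at the observed 35-40MB/s:
    print(round(transfer_hours(3000, 35), 1))  # ~23.8h
    print(round(transfer_hours(3000, 40), 1))  # ~20.8h
    ```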

  • Added to above: my Hetzner dedi to storage box transfer slowed to maybe 15MB/s(?) for part of a day but then returned to the earlier 35MB/s or so. My backup of that dedi is now finished and it feels nice to have a complete copy in one place.

    I'm getting around 11MB/s from my BHS Kimsufi to the storage box, which is about the practical limit of its 100Mbit network port (quick conversion sketched below), so pretty good. I'm also getting around 8MB/s from BuyVM Las Vegas to the storage box, which is somewhat disappointing, but the total amount of data there is lower, so it will be OK. This is with both backups (BHS and BuyVM) running at the same time, to separate borg repos on the same storage box; I don't know if that matters. I have a 5TB storage box and a 10TB costs almost 2x as much, so if the 5TB gets full I might add a second one rather than upgrade to a 10TB, with the idea of getting more bandwidth. But the 5TB costs only about 2x what a 1TB costs.
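
    As a sanity check on the 11MB/s figure, a 100Mbit port tops out at 12.5MB/s before protocol overhead, so 11MB/s really is close to line rate. A rough conversion sketch; the ~7% overhead figure is my assumption, not a measurement:

    ```python
    def usable_mb_per_s(link_mbit: float, overhead: float = 0.07) -> float:
        """Approximate usable MB/s on a link, assuming ~7% TCP/SSH overhead."""
        return link_mbit / 8 * (1 - overhead)

    print(round(usable_mb_per_s(100), 1))   # ~11.6MB/s, matches the Kimsufi speed
    print(round(usable_mb_per_s(1000), 1))  # ~116MB/s on a gigabit port
    ```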

  • Ympker OG Content Writer

    I have started using Cryptomator (after Herr @Amitz recommended it) and I have been really happy with it so far :)
    I have also looked a bit into rclone, which seemed very cool too (if you want to use a CLI).

    Thanked by (1) Amitz
  • I've been watching this thread and wondering about longevity. What happens when we want to decrypt and view our backed-up files twenty years from now?

    How will the solutions here work to decrypt and view twenty years from now?

    Thanks! Friendly greetings from Mexico! :)

    Thanked by (1) Ympker

    Tom. 穆坦然. Not Oles. Happy New York City guy visiting Mexico! How is your Classical Chinese (文言文)?
    The MetalVPS.com website runs very speedily on MicroLXC.net! Thanks to @Neoon!

  • @Not_Oles said: I've been watching this thread and wondering about longevity. What happens when we want to decrypt and view our backed up files twenty years from now?

    For this reason I would not even think of using anything that is closed source. If we have the source code, we ought to be able to get it working on a 2042 machine somehow. I kept using a 1980s text formatter until not that long ago.

    Thanked by (1) Not_Oles
  • @Not_Oles said:
    How will the solutions here work to decrypt and view twenty years from now?

    VeraCrypt has a portable version which I store in the same cloud storage as the encrypted drive, so it gets backed up too. Thus decrypting 20 years later should be simple (a small integrity-manifest sketch is below).

    I would be concerned if an app required any external dependencies like a network call to decrypt.
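
    One thing that helps with the 20-year scenario is keeping checksums of both the portable tool and the encrypted container next to them, so a future you can verify nothing rotted at rest before trying to decrypt. A minimal sketch; the file names are placeholders I made up, not anything VeraCrypt produces:

    ```python
    import hashlib
    import json
    from pathlib import Path

    # Hypothetical layout: the portable tool and the encrypted container
    # sit side by side in the same cloud storage folder.
    FILES = ["VeraCrypt-Portable.exe", "backup-container.hc"]

    def sha256(path: Path) -> str:
        """Stream the file through SHA-256 so a huge container never has to fit in RAM."""
        h = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Write a manifest that travels with the backup; re-run and diff it in 2042.
    manifest = {name: sha256(Path(name)) for name in FILES}
    Path("MANIFEST.sha256.json").write_text(json.dumps(manifest, indent=2))
    ```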

    Thanked by (2) Not_Oles, Ympker
  • In my experience, the solution is to migrate backups as the technology shifts. I've used URBackup, BackupPC, burp, etc., each for many years, and each time I redid full backups when migrating; I didn't keep the history with incremental diffs. Important historical data comes with me as part of my current backup set. Twenty years from now, if I want to find an important document from 2022, it'll be indexed as part of the 2042 backup set, using whatever backup software I'm using then.
