Rick wrote:

Has anyone had issues with setting their TKLBAM volume sizes to 100MB-250MB?  I have a server backing up 160GB+ of data, and at 25MB volumes it creates around 6400 files.  I recently changed it to 100MB volumes, but I would prefer something more like 250MB.  Bandwidth speeds are not an issue.  I just want to speed up the backup/restore if possible.  A full backup took 53+ hours the last time I ran it.
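In case it matters, this is roughly how I bumped the volume size; I'm assuming the volsize setting in /etc/tklbam/conf is the right knob for this (it's what I edited):

    # /etc/tklbam/conf
    volsize 100    # MB per volume; I'd like to push this to 250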

Jeremy Davis wrote:

AFAIK the volume size shouldn't affect the backup time that much. Actually, in some cases a smaller volume size can increase the upload speed (it very much depends on the quality of your internet connection).

Depending on your scenario, a larger volume size may make a marginal improvement, but it's most likely your real-world upload bandwidth to AWS S3 that will be the bottleneck. That will depend on your ISP and how much traffic S3 is getting from your region. Just because you have a "fat pipe" doesn't necessarily mean you'll get superfast upload.

TBH I'm not sure how many threads TKLBAM is capable of using, but to saturate your bandwidth, more threads = faster. Obviously that will depend on how many threads your CPU is capable of handling (generally how many cores it has). Keep in mind though, if you saturate your upload, most other things will grind to a halt (e.g. web browsing etc).

I'm not sure what is in your backup, but 160GB is massive! FWIW most of mine are less than a GB (compressed). Having said that they are all fairly low traffic sites without a ton of customisations. Regardless, perhaps it's worth trying to tune your backup a bit?

An alternative (or additional) approach is to split your backups and have multiple smaller ones. If you go down that path, I'd be inclined to have a "master" one that includes all the server config and the "main" app you are serving, then another which contains all the massive files (that hopefully don't change that often?). It may take a bit of trial and error (and make sure you test everything thoroughly!) to get something that works how you want...
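As a rough sketch (the paths here are made-up examples, so adjust them to your own layout), keeping the bulk data out of the "master" backup should just be a matter of minus-prefixed lines in /etc/tklbam/overrides:

    # /etc/tklbam/overrides -- master backup: config + main app, minus the bulk data
    -/srv/data
    -/var/www/big-uploads

The data-only backup(s) would then do the inverse. As I say though, test the restores carefully before you rely on it.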

Rick wrote:

I had seen that article on creating multiple backups and I was going to give that a try.  I would have a main backup with an override to exclude my data directories, and then a second (possibly even a third and fourth) with a "- /" override that just includes the data directories, to break down the sizes.  I will need to take a look at the output of a du command to really tell which ones need to be backed up and how.
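Roughly what I'm picturing for the data-only backup (completely untested, and /srv/data is just a placeholder until I've checked where the space actually is):

    # overrides for the data-only backup (plan only, not tested)
    - /
    /srv/data

And to figure out which directories are worth splitting out:

    du -xh --max-depth=2 / | sort -rh | head -25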

Any plans to add a multi-backup option to the Webmin interface at all?  I know that 160GB is huge and I am probably a rare case.

Jeremy Davis wrote:

IMO ncdu is a much better option than old school du. You'll need to install it, but it's not very big and is only an apt-get away! :)
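Something like this should be all it takes (ncdu is in the standard Debian repos; the -x flag stops it wandering into /proc and other mounted filesystems):

    apt-get update && apt-get install ncdu
    ncdu -x /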

Your suggested implementation sounds like a great way to go IMO.

As for plans to extend TKLBAM Webmin config, ideally we'd really love to implement something like that. Unfortunately we have a serious labour bottleneck and just trying to keep on top of bug fixing and updating appliances is currently (more than) a full time job for us.

Having said that, I've added it to our issue tracker as a feature request! Whilst TKLBAM has had quite a few bugfixes, IMO it's well overdue for a development sprint. If it's on the tracker then who knows, perhaps Liraz will include it when he next does one?
