
I have said it before but here it is again: I have a file server with about 100 GB of data that I want to use TKLBAM with. The problem is that my uplink (and everyone else's at the moment) is maxed at 1 Mbps (that's bits, not bytes!), so a full backup will take about 14 days. Of course if I want to perform daily backups this won't work. (As a side note, I have found an ISP that allows unlimited uploads.)
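Rough numbers behind that 14-day estimate:

    1 Mbps ≈ 0.125 MB/s ≈ 10.5 GB per day at perfect utilisation
    100 GB / 10.5 GB per day ≈ 9.5 days flat out, so ~14 days is
    realistic once protocol overhead and daytime use are factored in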

Here is what I did to work around the issue:

  1. Create two file servers (or two shares): one is an archive, the other is a working directory
  2. The working directory is backed up daily, but changes must be limited to about 3 GB due to the maximum of 7 GB that can be uploaded in 24 hours (I don't want it backing up during the day; see the cron sketch after this list)
  3. If the archive needs to be changed (which happens), it is backed up incrementally over a weekend.
  4. Upon a complete server failure the file server can be launched in the cloud easily (albeit slower than a local file server); potentially the only delay in accessing files will be the time it takes to transfer the working directory to the cloud server.
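For reference, this is roughly how the schedules could be wired up with cron. A minimal sketch, assuming each server runs its own TKLBAM and that tklbam-backup lives in /usr/bin as on a stock TurnKey appliance; the times are just what suits my upload window (3 GB at 1 Mbps is roughly 7 hours, so it fits overnight):

    # /etc/cron.d/tklbam on the working-directory server:
    # nightly backup at 10pm, finished well before business hours
    0 22 * * * root /usr/bin/tklbam-backup

    # /etc/cron.d/tklbam on the archive server:
    # run only Saturday night so a large incremental
    # has the whole weekend to upload
    0 22 * * 6 root /usr/bin/tklbam-backup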

Of course this is not ideal and takes away from the autonomous nature of TKLBAM.

Question regarding this process:

Can a backup be manually forced to run incrementally after a (also manual) full backup is done? E.g. in the situation above, the archive folder is backed up in full initially (by typing tklbam-backup), and it only needs to be backed up again when data is added to the archive. Possibly this will only happen once a month, so if a full backup is performed once a year there will only be 12 increments, which is quite acceptable.
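From what I can tell from the docs, TKLBAM (which drives duplicity underneath) decides between full and incremental automatically based on a full-backup frequency rather than a manual switch, so something like this might achieve it. A sketch, assuming /etc/tklbam/conf accepts a full-backup line with month (M) units as the FAQ seems to suggest:

    # /etc/tklbam/conf
    # How often to start a new full backup; every tklbam-backup run
    # in between should then be an incremental against the last full.
    # 12M = every 12 months (assuming M is a valid month unit)
    full-backup 12M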

If anyone else has a suggestion I am all ears!
