Hi, 

I've been using a number of TurnKey appliances for a while now and have been very happy with them. TKLBAM has been especially useful on a number of occasions.

I ran into a problem with an overnight backup.

When tklbam-backup ran it created a /TKLBAM directory to build the backup in before sending it off to our backup server. This managed to fill the root filesystem. I tried setting up /TKLBAM as a symbolic link, however that failed with an error:

Traceback (most recent call last):
  File "/usr/bin/tklbam-backup", line 510, in <module>
    main()
  File "/usr/bin/tklbam-backup", line 472, in main
    shutil.rmtree(b.extras_paths.path)
  File "/usr/lib/python2.7/shutil.py", line 232, in rmtree
    onerror(os.path.islink, path, sys.exc_info())
  File "/usr/lib/python2.7/shutil.py", line 230, in rmtree
    raise OSError("Cannot call rmtree on a symbolic link")
OSError: Cannot call rmtree on a symbolic link

I had a poke around in backup.py and found where the directory is originally created; there is some code that sets backup_root to /.

Is it possible to configure backup_root to be somewhere else with more disk space? I tried adding the option to /etc/tklbam/conf, but that threw an unknown option error.

Thanks

Jon


Hi, 

I have done some more digging. I am using the scp options rather than the Hub to back up to our local backup server, which has worked fine.

It looks like /var/cache/duplicity was large. Duplicity doesn't seem to be cleaning up after itself and has full and incremental backups going back 4 months.

I did try deleting some older files from the cache but the next time I ran a backup they were recreated.

Can I use the copy of duplicity in /usr/lib/tklbam/deps/bin to clean up the cache and the remote files? I only really need to keep a month's worth.

cheers

Jon


TBH I'm not that familiar with the inner workings of TKLBAM so am probably not going to be that helpful. However, the TKLBAM source code is on GitHub, so if you can work out how to improve it, please feel free to provide a pull request.

As for the duplicity cache, AFAIK it should only be keeping the last full backup and all incrementals since then, although it sounds like that isn't actually what it is doing...

I don't see why you couldn't use the included version of duplicity to clean up the cache but I can't be sure of the implications. I suggest that you give it a go and see what happens. So long as you have a current backup, worst case scenario you can always restore! :)
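
I haven't tried it myself, but if the bundled duplicity behaves like a stock install, something along these lines might trim the old chains (the scp URL is just a placeholder for your backup server, and it may prompt for the backup passphrase):

/usr/lib/tklbam/deps/bin/duplicity remove-older-than 1M --force scp://user@backuphost//path/to/backup

The --force is what actually deletes; without it duplicity only lists what it would remove. AFAIK it only drops complete chains older than the cutoff, so the current full and its incrementals should be left alone.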

Another possibility: instead of using a symlink, perhaps you could use a bind mount for /TKLBAM? I haven't tested it, but perhaps it's possible. I.e. if you have a HDD with plenty of space mounted at /media/big-volume then you could bind it to /TKLBAM like this:

mount --bind /media/big-volume /TKLBAM
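
If that does work and you want it to survive a reboot, an /etc/fstab entry along these lines should do it (again, untested on my end):

/media/big-volume  /TKLBAM  none  bind  0  0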

Hi Jeremy, 

I have been having a look at the code, however my Python is not great. I'm not sure I can manage production-grade code :)

I have a temporary workaround in place by not backing up a couple of big database tables we don't really need to recover. This should give a bit of time to get to the bottom of it. 

I did think of using a bind mount but it looks like TKLBAM creates and destroys the directory as part of the backup run. I believe that is why the symbolic link attempt failed. I'm not sure how it would deal with a mount point.

There is some partial good news. I have managed to use the duplicity client to get a collection-status of the remote store. Other operations require a passphrase which I thought I knew, but duplicity is not accepting it. The TKLBAM escrow file worked fine for my restore testing, but it looks like the passphrase I used for that is not what duplicity is after.
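
For anyone following along, the collection-status invocation was roughly this (the URL is a placeholder for our real backup path, and --archive-dir points duplicity at TKLBAM's cache rather than its default):

/usr/lib/tklbam/deps/bin/duplicity collection-status --archive-dir /var/cache/duplicity scp://user@backuphost//path/to/backup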

Using the information from duplicity collection-status, I have moved some of the old backup chains on the destination server, and it doesn't throw any errors when I try the collection-status command again.

I have tried a backup and the removal of the files from the target gets reflected in the local cache.

So it does look like I need to clear old backup files from the target URL, and then the cache will get cleared the next time a backup runs.

thanks for your help

Jon
