Rino Razzi:

Hi.

I've spent a full day trying a restore from a backup made with TKLBAM on the TurnKey Hub.

I tried a restore just to be sure it works.

There was no way to get it working.

I tried the restore on two different instances.

The first is a TYPO3 appliance based on TurnKey 12.0 running on an M1.Medium instance on the TurnKey Hub (M1.medium EBS-backed PVM - 3.75 GB RAM, 1 vCPU, 80 GB rootfs, 410 GB tmp).

The second is like the first except for the instance type, which is an M1.Small (M1.small EBS-backed PVM - 1.7 GB RAM, 1 vCPU, 80 GB rootfs, 160 GB tmp).

The TKLBAM automatic backup procedure is active on both instances.

Here is what I tried.

 

=== Restore attempt on the M1.Medium instance ===

Here is what I did:

1. Created a snapshot of the instance

2. Created a copy of the instance with "Clone a new server from latest snapshot"

3. Accessed Webmin on the new instance and:

- stopped mysql, postfix, apache

- went to the TKLBAM section of Webmin and ran a restore

 

After 1 hour at 90% CPU, TKLBAM stopped with an error.

Below is the log of the restore procedure.

The TKLBAM version is 4.7.1.

It looks like a filename with strange characters causes the error (see the last line of the log).

Any idea?

 

> tklbam-restore 6 --noninteractive --time='2016-01-25' --skip-packages
Executing Duplicity to download s3://s3-eu-west-1.amazonaws.com/tklbam-xyuyxuyuyuyuy to /tmp/tklbam-3FCXZd 
============================================================================================================== 

// started squid: caching downloaded backup archives to /var/cache/tklbam/restore 

# duplicity --restore-time=2016-01-25 --archive-dir=/var/cache/duplicity --s3-unencrypted-connection s3://s3-eu-west-1.amazonaws.com/tklbam-xyxuyxyxyxyxyxy /tmp/tklbam-3FCXZd 
Synchronizing remote metadata to local cache... 
Copying duplicity-inc.20160203T011637Z.to.20160204T010114Z.manifest.gpg to local cache. 
Copying duplicity-new-signatures.20160203T011637Z.to.20160204T010114Z.sigtar.gpg to local cache. 
Last full backup date: Mon Jan 25 00:58:28 2016 

// stopping squid: download complete so caching no longer required 

Restoring system from backup extract at /tmp/tklbam-3FCXZd 
========================================================== 

FILES - restoring files, ownership and permissions 
-------------------------------------------------- 

Traceback (most recent call last): 
File "/usr/bin/tklbam-restore", line 553, in <module> 
main() 
File "/usr/bin/tklbam-restore", line 526, in main 
restore.files() 
File "/usr/lib/tklbam/restore.py", line 202, in files 
changes = Changes.fromfile(extras.fsdelta, limits) 
File "/usr/lib/tklbam/changes.py", line 184, in fromfile 
changes = [ Change.parse(line) for line in fh.readlines() ] 
File "/usr/lib/tklbam/changes.py", line 127, in parse 
return op2class[op].fromline(line[2:]) 
File "/usr/lib/tklbam/changes.py", line 81, in fromline 
return cls(*args) 
File "/usr/lib/tklbam/changes.py", line 92, in __init__ 
self.uid = self.stat.st_uid 
File "/usr/lib/tklbam/changes.py", line 67, in stat 
self._stat = os.lstat(self.path) 
OSError: [Errno 2] No such file or directory: '/var/www/shop.public_html/var/cache/lang_cache/C_GET_"\'");|]*{' 

 

=== Restore attempt on the M1.Small instance ===

Here is what I did:

1. Created a snapshot of the instance

2. Created a copy of the instance with "Clone a new server from latest snapshot"

3. Accessed Webmin on the new instance and:

- stopped mysql, postfix, apache

- went to the TKLBAM section of Webmin and ran a restore

After about 5 hours at 100% CPU, the restore stopped because there was no disk space left on the root filesystem.

Before the restore started, the root filesystem had 5 GB free out of 80 GB.

Any idea what caused this?

 

Thanks

Rino Razzi:

Still trying to get the tklbam restore working...

I also tried using "limits" to exclude the path that showed up in the error message.

Here is the command I ran from the command line:

tklbam-restore 6 --noninteractive --time='2016-01-25' --limits="/var/www/shop.public_html/var/cache/lang_cache" --skip-packages
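(For reference: if I read the tklbam-restore help right, a plain path in --limits restricts the restore to that path only, while a leading "-" marks the path as excluded. So the exclusion form would presumably look like this, though I haven't confirmed it:)

```shell
# Presumed exclusion form: the leading "-" inside --limits excludes the path
# rather than restricting the restore to it (double-check against the docs)
tklbam-restore 6 --noninteractive --time='2016-01-25' \
    --limits="-/var/www/shop.public_html/var/cache/lang_cache" --skip-packages
```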

It always fails with the same error:

> tklbam-restore 6 --noninteractive --time='2016-01-25' --skip-packages
Executing Duplicity to download s3://s3-eu-west-1.amazonaws.com/tklbam-xyuyxuyuyuyuy to /tmp/tklbam-3FCXZd 
============================================================================================================== 

// started squid: caching downloaded backup archives to /var/cache/tklbam/restore 

# duplicity --restore-time=2016-01-25 --archive-dir=/var/cache/duplicity --s3-unencrypted-connection s3://s3-eu-west-1.amazonaws.com/tklbam-xyxuyxyxyxyxyxy /tmp/tklbam-3FCXZd 
Synchronizing remote metadata to local cache... 
Copying duplicity-inc.20160203T011637Z.to.20160204T010114Z.manifest.gpg to local cache. 
Copying duplicity-new-signatures.20160203T011637Z.to.20160204T010114Z.sigtar.gpg to local cache. 
Last full backup date: Mon Jan 25 00:58:28 2016 

// stopping squid: download complete so caching no longer required 

Restoring system from backup extract at /tmp/tklbam-3FCXZd 
========================================================== 

FILES - restoring files, ownership and permissions 
-------------------------------------------------- 

Traceback (most recent call last): 
File "/usr/bin/tklbam-restore", line 553, in <module> 
main() 
File "/usr/bin/tklbam-restore", line 526, in main 
restore.files() 
File "/usr/lib/tklbam/restore.py", line 202, in files 
changes = Changes.fromfile(extras.fsdelta, limits) 
File "/usr/lib/tklbam/changes.py", line 184, in fromfile 
changes = [ Change.parse(line) for line in fh.readlines() ] 
File "/usr/lib/tklbam/changes.py", line 127, in parse 
return op2class[op].fromline(line[2:]) 
File "/usr/lib/tklbam/changes.py", line 81, in fromline 
return cls(*args) 
File "/usr/lib/tklbam/changes.py", line 92, in __init__ 
self.uid = self.stat.st_uid 
File "/usr/lib/tklbam/changes.py", line 67, in stat 
self._stat = os.lstat(self.path) 
OSError: [Errno 2] No such file or directory: '/var/www/shop.public_html/var/cache/lang_cache/C_GET_"\'");|]*{' 

 

Can anybody help me?

At the moment I am relying on a backup that doesn't seem to work.

Do I have to use AWS snapshots for my backups instead? Or something else?

 

Thanks in advance for any help.

Rino Razzi

Archimede Informatica

Pisa, Italy

Jeremy Davis:

Deep apologies for the radio silence. I have been madly trying to push v14.1 out the door and, as always, these things take longer than I think they should...

Anyway, I am not at all sure what the issue is and TBH am surprised that using the limits didn't work around it. It makes me wonder whether the file in question is actually missing from your backup and that's what is causing the issue.

Looking at the path, it is possible that you could exclude that cache location anyway. Most webapps do not need their cache backed up, as it can be regenerated. It may be worth excluding the /var/www/shop.public_html/var/cache/ path from your backup.
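If it helps, something like this in TKLBAM's overrides file should do it (path and syntax from memory, so double-check against the docs):

```shell
# /etc/tklbam/overrides -- one pattern per line;
# a leading "-" excludes the path from future backups
-/var/www/shop.public_html/var/cache
```

After editing it, run a new full backup so the exclusion takes effect.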

Assuming that this is a test, perhaps it's worth doing a clean new full backup (with the cache excluded) and trying the restore again? Obviously, if you are trying to restore because you need the backup, then that is a different matter. In that case, I suggest you try downloading the backup first using the --raw-download switch, then restoring from the downloaded files. Like this:

tklbam-restore 1 --raw-download=/tmp/mybackup
tklbam-restore /tmp/mybackup

Also, you mention in your first post that you ran out of room. You do need a fair bit of space: essentially it downloads the compressed backup, then extracts the archive, then copies it across while keeping track of any overwritten files (so they can be restored if you choose to roll back).
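As a rough rule of thumb (my assumption, not an official figure), I'd budget around three times the compressed backup size in free space before starting a restore: the download, the extracted archive, and the rollback copies all land on disk at once. A quick pre-flight check might look like:

```shell
#!/bin/sh
# Rough pre-restore space check. The 3x multiplier is an assumption:
# compressed download + extracted archive + overwritten-file rollback copies.
backup_size_gb=20                       # example: your backup's compressed size
needed_gb=$((backup_size_gb * 3))

# Free space on the root filesystem, in whole GB (GNU df)
free_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

if [ "$free_gb" -lt "$needed_gb" ]; then
    echo "Not enough space: need ~${needed_gb}G, have ${free_gb}G free"
else
    echo "Looks OK: ${free_gb}G free, ~${needed_gb}G needed"
fi
```

If the check fails, freeing space (or restoring on a bigger instance) before starting beats discovering the problem five hours in.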
