Peter's picture

Just spent 4 days patiently waiting for a 39GB backup to restore, and on the fourth day the webpage monitoring the restore "froze", and so did the system the data was being restored to... so I crossed my fingers and rebooted.  Zip (nothing) was restored.  No log errors that I found even indicate a restore was underway.

Really, there MUST be a better way to migrate large sites.  I have a Drupal site with 40GB of media running locally on a bare metal server with TK 13 (no hypervisor), and I'm trying to migrate it to a TK VM on a new server's ESXi host.

Frankly, this is the third large migration I have tried to perform and TKLBAM always lets me down.  It's fine on a small data set, or for backing up a fresh install, but when I get several gigs of data things get awfully cumbersome and I always end up resorting to some local "sneaker net" to get my data across.

Are there any plans to improve the restore feature?  Gurus, any advice on how to verify that a large data set which takes days to restore IS actually BEING RESTORED?

Jeremy Davis's picture

But I think that your suggestion is a good one. Probably along with a verbosity setting which would allow users to set how verbose the output was...

And I have just put up a feature request against TKLBAM on the Issue tracker. Please check that I have maintained the spirit of what you are saying/suggesting.

But back to your problem at hand: the restore happens in stages: first, download all data; second, unarchive all data; third, restore all data. So if the previous stage has not completed, the next stage won't start. If it did not complete the final stage then it will not look like it has restored anything (because it hasn't - although it may have downloaded a heap of stuff).

TKLBAM is designed to resume from where it was up to if it was interrupted, so perhaps that's worth a try? If you check in the config where it is saving the download data, you can see how much is there (IIRC by default it creates a new dir, /TKLBAM). Also make sure that you have LOTS of free space, as TKLBAM uses a minimum of 2x the size of the backup (usually lots more), plus it also needs room left over to store rollback info.
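To keep an eye on a long-running restore, a few stock commands give a rough progress picture. A minimal sketch, assuming the default /TKLBAM cache dir and the /var/log/tklbam-restore log mentioned below - adjust the paths if your config differs:

```shell
# Rough progress check while a restore is running.
# CACHE_DIR and LOG are assumptions based on TKLBAM defaults.
CACHE_DIR=/TKLBAM
LOG=/var/log/tklbam-restore

du -sh "$CACHE_DIR" 2>/dev/null   # downloaded backup data so far
df -h /                           # free space left on the root filesystem
tail -n 20 "$LOG" 2>/dev/null     # most recent restore log output
```

Running `du -sh` on the cache dir every few minutes (e.g. via `watch`) should show the number growing during the download stage; if it stays flat for hours, that's a hint something has stalled.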

To be on the safe side, personally I'd want ~110GB free to restore a 30GB backup: 30GB for the backup download; 75GB (2.5x) for the uncompressed data; and 5GB (probably massive overkill) for the rollback data.
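That estimate is easy to redo for other backup sizes. A back-of-envelope sketch (the 2.5x multiplier and 5GB rollback allowance are the rough guesses from above, not hard numbers):

```shell
# Free-space estimate for a restore, in GB:
# 1x for the downloaded backup, ~2.5x for the unpacked data,
# plus a flat 5GB allowance for rollback info (a generous guess).
BACKUP_GB=30
NEEDED=$(( BACKUP_GB + (BACKUP_GB * 5 / 2) + 5 ))
echo "Estimated free space needed: ${NEEDED}GB"
# -> Estimated free space needed: 110GB
```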

Logs should be in the usual spot (/var/log) and IIRC the restore log should be called 'tklbam-restore'.

Jeremy Davis's picture

But doesn't do any restore until it has downloaded everything and uncompressed it all.

When you say "the target VM booted to black", that sounds like some whole other issue... Either some VM issue completely unrelated to TKLBAM (which perhaps was the cause of TKLBAM freezing in the first place), or perhaps, even though TKLBAM appeared to have frozen, it was doing something very important (not sure what...) and being interrupted by a reboot left something a bit broken...? When I say "TKLBAM is designed to resume from where it was up to if it was interrupted" I mean when you relaunch TKLBAM, not when you reboot the server...

FWIW you can also use TKLBAM to do manual backup/restore/migration and/or do the restore in stages. Have a dig through the TKLBAM man[ual] pages for options available.

Jeremy Davis's picture

I don't know much about VMware products, nor anything about your network, but 40Mbps sounds ridiculous...

Surely there is something wrong going on there...

As a workaround, could you perhaps create a new vHDD; attach it to the server you're backing up; do a backup dump, then reattach the vHDD to the new server and do a restore...?
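The vHDD workaround above might look something like the following sketch. The device name and mount point are assumptions; the `--dump` option to tklbam-backup and restoring from a local backup extract are how I recall the man pages describing manual operation, so verify with `man tklbam-backup` and `man tklbam-restore` before relying on this:

```shell
# Sketch of the vHDD "sneaker net" migration (paths/devices assumed).

# On the old server, with the new vHDD attached, partitioned and formatted:
mkdir -p /mnt/migrate
mount /dev/sdb1 /mnt/migrate
tklbam-backup --dump=/mnt/migrate/backup-extract/   # dump locally, no upload
umount /mnt/migrate

# Then detach the vHDD, attach it to the new server, and restore locally:
mkdir -p /mnt/migrate
mount /dev/sdb1 /mnt/migrate
tklbam-restore /mnt/migrate/backup-extract/         # restore from local extract
```

This sidesteps the network transfer entirely, which also makes the slow-throughput question moot for the migration itself.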

Jeremy Davis's picture

Glad to hear that you're up and running...

As for loading new volumes on ESXi, surely there must be a better way... Although I've never used it, so I have no idea. Personally I swear by ProxmoxVE as my hypervisor of choice! :)
