Example usage scenario

Alon is developing a new web site. He starts by deploying TurnKey LAMP to a virtual machine running on his laptop. This will serve as his local development server. He names it DevBox.

Customizing DevBox

  • creating user 'alon'
  • extracting an archive of his web application to /var/www
  • tweaking Apache configuration directives in /etc/apache2/httpd.conf until his web application works
  • installing php5-xcache via the package manager
  • enabling xcache by editing a section in /etc/php5/apache2/php.ini
  • creating a new database user with reduced privileges for his web application
  • configuring and installing the web application, which creates a new MySQL database
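Assembled as shell commands, the steps above might look roughly like this. This is only a sketch: the archive name, database name, user names and password are hypothetical placeholders, not details from the scenario.

```shell
# Hypothetical sketch of the customization steps; names and
# passwords below are placeholders.
adduser alon                                # create user 'alon'
tar -xzf webapp.tar.gz -C /var/www          # extract the web application
editor /etc/apache2/httpd.conf              # tweak Apache directives
apt-get install php5-xcache                 # install the opcode cache
editor /etc/php5/apache2/php.ini            # enable the xcache section
mysql -u root -p <<'EOF'
CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON webapp.* TO 'webapp'@'localhost';
EOF
# finally, run the web application's own installer,
# which creates its MySQL database
```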

After a few days of hacking on the web application, Alon is ready to show off a prototype of his creation to some friends from out of town.

Migrating DevBox to CloudBox

He logs into the TurnKey Hub and launches a new TurnKey LAMP server in the Amazon EC2 cloud. He names it CloudBox.

If Alon is using the new TurnKey Linux 11.0 appliances, TKLBAM comes pre-installed. With older versions he will need to install it first on both DevBox and CloudBox:

apt-get update
apt-get install tklbam

Alon provides the API Key from his TurnKey Hub account's user profile to link TKLBAM to the TurnKey Hub. He can do that via the new Webmin module, or on the command line:

tklbam-init QPINK3GD7HHT3A

On DevBox Alon runs a backup:

root@DevBox:~# tklbam-backup

How the backup works behind the scenes

TKLBAM downloads a profile from the Hub for the version of TurnKey LAMP Alon is using. The profile describes the state of DevBox right after installation, before Alon customized it. This allows TKLBAM to detect all the files and directories that Alon has added or edited since. Any new packages Alon installed are detected the same way.
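The idea can be illustrated with a toy sketch. This is not TKLBAM's actual implementation (and unlike the real thing it only catches added files, not edited ones or new packages): record the post-install file list as a "profile", then diff it against the current filesystem.

```shell
# Toy illustration of profile-based delta detection
# (not TKLBAM's real code).
set -e
root=$(mktemp -d)                     # stand-in for the root filesystem
mkdir -p "$root/etc" "$root/var/www"
echo 'stock config' > "$root/etc/app.conf"

# The "profile": every file present right after installation.
(cd "$root" && find . -type f | sort) > "$root.profile"

# Alon's customization: one new file added after install.
echo '<?php phpinfo();' > "$root/var/www/index.php"

# The delta: paths present now that are absent from the profile.
(cd "$root" && find . -type f | sort) | comm -13 "$root.profile" - > "$root.delta"
cat "$root.delta"                     # prints ./var/www/index.php
```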

As for his MySQL databases, they're taken care of transparently, but if Alon dug deeper he would discover that their full contents are serialized and encoded into a special file structure optimized for efficiency on subsequent incremental backups. Between backups Alon usually updates only a handful of tables and rows, so the incremental backups that follow are very small: just a few KBs!

When TKLBAM is done calculating the delta and serializing database contents, it invokes Duplicity to encode backup contents into a chain of encrypted backup volumes which are uploaded to Amazon S3.
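Invoked by hand against a local target, that Duplicity layer looks something like the following. The source and target paths are illustrative placeholders, and these are plain Duplicity commands, not TKLBAM's actual invocation:

```shell
# Illustrative Duplicity usage, not what TKLBAM actually passes.
duplicity full /srv/data file:///mnt/backups/devbox   # first run: new full chain
duplicity /srv/data file:///mnt/backups/devbox        # later runs: incrementals
duplicity collection-status file:///mnt/backups/devbox  # inspect the volume chain
```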

Restoring to CloudBox

When Alon's first backup is complete, a new record shows up in the Backups section of his TurnKey Hub account.

Now to restore the DevBox backup on CloudBox:

root@CloudBox:~# tklbam-list
# ID  SKPP  Created     Updated     Size (GB)  Label
   1  No    2010-09-01  2010-09-01  0.02       TurnKey LAMP

root@CloudBox:~# tklbam-restore 1

When the restore is done Alon points his browser to CloudBox's IP address and is delighted to see his web application running there, exactly the same as it does on DevBox.

What happened?

Alon, a tinkerer at heart, is curious to learn more about how the backup and restore process works. By default, the restore process reports what it's doing verbosely to the screen. But Alon had a hard time following the output in real time, because everything happened so fast! Thankfully, all the output is also saved to a log file at /var/log/tklbam-restore.

Alon consults the log file and can see that only the files he added or changed on DevBox were restored to CloudBox. Database state was unserialized. The xcache package was installed via the package manager. User alon was recreated. Its uid didn't conflict with any other existing user on CloudBox, so the restore process didn't need to remap it to another uid and fix ownership of Alon's files. Not that it would matter to Alon either way.
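The remapping step that Alon's restore got to skip can be sketched as a toy illustration (not TKLBAM's actual algorithm): if the backed-up uid is already taken on the target, allocate a free one; ownership of the restored files would then be fixed up afterwards.

```shell
# Toy uid-remap sketch (not TKLBAM's real logic). A fake passwd
# table stands in for CloudBox's /etc/passwd.
passwd='root:x:0:0:root:/root:/bin/bash
webadmin:x:1000:1000::/home/webadmin:/bin/bash'

want=1000    # uid that user 'alon' had on DevBox
new=$(printf '%s\n' "$passwd" | awk -F: -v w="$want" '
    $3 == w     { taken = 1 }
    $3+0 > max  { max = $3+0 }
    END         { print (taken ? max + 1 : w) }')
echo "alon -> uid $new"   # 1000 is taken, so alon is remapped
# a real remap would then fix file ownership, e.g. with chown
```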

It's all automagic.


Comment from Liraz Siri:

If you read the documentation carefully you'll notice you can back up and restore to any storage backend supported by Duplicity, including local files, ftp, ssh, etc. That's what the --address command line option is for. Granted, it isn't as easy to use as the default scenario, since you need to handle authentication and key management yourself, but even that is still vastly easier than a conventional backup.
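For example, pointing TKLBAM at an SSH host instead of the Hub's S3 storage might look roughly like this. The host and path are made up, and you would handle SSH keys and the backup passphrase yourself:

```shell
# Illustrative only; host and path are placeholders.
tklbam-backup  --address ssh://backup@backuphost/mnt/backups/devbox
tklbam-restore --address ssh://backup@backuphost/mnt/backups/devbox
```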