TurnKey Linux Virtual Appliance Library

TKLBAM fails with error

Hi,

This is my first time using TK Linux and I'm generally very impressed. I'm using TK Linux 12.0 for Rails.

However, a couple of issues:

1. DHCP failed on my home setup using a WRT54G with Tomato 1.19 (WiFi access point firmware) - static was fine though.

2. TKLBAM fails to complete - this is more serious, as the main reason I'm using TK Linux is the TKLBAM migration scenario.

I'm on a DSL connection with about 1 Mbps upstream, and it was working fine while the backup was running.

Is there a log file somewhere?  Seems like there should be by default ... if not, how do I enable one?

Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121203T121306Z.vol12.difftar.gpg' failed (attempt #5, reason: error: [Errno 32] Broken pipe)
Giving up trying to upload s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121203T121306Z.vol12.difftar.gpg after 5 attempts
Backend error detail: Traceback (most recent call last):
  File "/usr/lib/tklbam/deps/bin/duplicity", line 1405, in <module>
    with_tempdir(main)
  File "/usr/lib/tklbam/deps/bin/duplicity", line 1398, in with_tempdir
    fn()
  File "/usr/lib/tklbam/deps/bin/duplicity", line 1368, in main
    full_backup(col_stats)
  File "/usr/lib/tklbam/deps/bin/duplicity", line 513, in full_backup
    globals.backend)
  File "/usr/lib/tklbam/deps/bin/duplicity", line 412, in write_multivol
    (tdp, dest_filename, vol_num)))
  File "/usr/lib/tklbam/deps/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 145, in schedule_task
    return self.__run_synchronously(fn, params)
  File "/usr/lib/tklbam/deps/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 172, in __run_synchronously
    ret = fn(*params)
  File "/usr/lib/tklbam/deps/bin/duplicity", line 411, in <lambda>
    async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num),
  File "/usr/lib/tklbam/deps/bin/duplicity", line 309, in put
    backend.put(tdp, dest_filename)
  File "/usr/lib/tklbam/deps/lib/python2.6/dist-packages/duplicity/backends/_boto_single.py", line 230, in put
    raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
BackendException: Error uploading s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121203T121306Z.vol12.difftar.gpg
 
BackendException: Error uploading s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121203T121306Z.vol12.difftar.gpg
Traceback (most recent call last):
  File "/usr/bin/tklbam-backup", line 366, in <module>
    main()
  File "/usr/bin/tklbam-backup", line 321, in main
    b.run(opt_debug)
  File "/usr/lib/tklbam/backup.py", line 237, in run
    backup_command.run(passphrase, self.credentials, debug=debug)
  File "/usr/lib/tklbam/duplicity.py", line 78, in run
    raise Error("non-zero exitcode (%d) from backup command: %s" % (exitcode, str(self)))
duplicity.Error: non-zero exitcode (23) from backup command: duplicity --verbosity=5 --archive-dir=/var/cache/duplicity --volsize=25 --full-if-older-than=1M --include=/TKLBAM --gpg-options=--cipher-algo=aes --include-filelist=/TKLBAM/fsdelta-olist --exclude=** --archive-dir=/var/cache/duplicity --s3-unencrypted-connection --allow-source-mismatch / s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq
 

Bump

Any comments on this? I'm not the only one with TKLBAM problems, which is a shame: I signed up for a plan based on TKLBAM, but I haven't been able to use it to migrate from a local VM to EC2, which was the main draw of using TK Linux.

Is there a log file I'm missing here?

Same / similar problem here

Hello

I'm afraid I won't be of much help, but I've had a very similar problem with my TKL-based server for about a month. I just realised the backups are failing (there is no obvious warning - you really have to dig into the logs).

I'm going to investigate and report my findings.


more log info

Hi Alexandre,

Thanks for the comment - I am getting some successful backups, but about 50% fail. It must be something to do with TKLBAM or S3 (I'm using eu-west, since that's where my EC2 instances are).

From /var/log/tklbam-backup:


################################
### Sat Dec 29 08:09:32 2012 ###
################################

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Dec 15 06:57:44 2012
No extraneous files found, nothing deleted in cleanup.
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Dec 15 06:57:44 2012
Last full backup is too old, forcing full backup
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121229T071530Z.vol2.difftar.gpg' failed (attempt #1, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121229T071530Z.vol2.difftar.gpg' failed (attempt #2, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121229T071530Z.vol2.difftar.gpg' failed (attempt #3, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121229T071530Z.vol2.difftar.gpg' failed (attempt #4, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121229T071530Z.vol2.difftar.gpg' failed (attempt #5, reason: timeout: timed out)
Giving up trying to upload s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121229T071530Z.vol2.difftar.gpg after 5 attempts
BackendException: Error uploading s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121229T071530Z.vol2.difftar.gpg

Traceback (most recent call last):
  File "/usr/bin/tklbam-backup", line 321, in main
    b.run(opt_debug)
  File "/usr/lib/tklbam/backup.py", line 237, in run
    backup_command.run(passphrase, self.credentials, debug=debug)
  File "/usr/lib/tklbam/duplicity.py", line 78, in run
    raise Error("non-zero exitcode (%d) from backup command: %s" % (exitcode, str(self)))
Error: non-zero exitcode (23) from backup command: duplicity --archive-dir=/var/cache/duplicity --volsize=25 --full-if-older-than=1W --include=/TKLBAM --gpg-options=--cipher-algo=aes --include-filelist=/TKLBAM/fsdelta-olist --exclude=** --archive-dir=/var/cache/duplicity --s3-unencrypted-connection --allow-source-mismatch / s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq

################################
### Sun Dec 30 08:38:42 2012 ###
################################

Local and Remote metadata are synchronized, no sync needed.
Last full backup left a partial set, restarting.
Last full backup date: Sat Dec 29 07:15:30 2012
RESTART: The first volume failed to upload before termination.
         Restart is impossible...starting backup from beginning.
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Dec 15 06:57:44 2012
Last full backup is too old, forcing full backup
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol1.difftar.gpg' failed (attempt #1, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol4.difftar.gpg' failed (attempt #1, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol4.difftar.gpg' failed (attempt #2, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol4.difftar.gpg' failed (attempt #3, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol4.difftar.gpg' failed (attempt #4, reason: timeout: timed out)
Upload 's3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol4.difftar.gpg' failed (attempt #5, reason: timeout: timed out)
Giving up trying to upload s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol4.difftar.gpg after 5 attempts
BackendException: Error uploading s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq/duplicity-full.20121230T070834Z.vol4.difftar.gpg

Traceback (most recent call last):
  File "/usr/bin/tklbam-backup", line 321, in main
    b.run(opt_debug)
  File "/usr/lib/tklbam/backup.py", line 237, in run
    backup_command.run(passphrase, self.credentials, debug=debug)
  File "/usr/lib/tklbam/duplicity.py", line 78, in run
    raise Error("non-zero exitcode (%d) from backup command: %s" % (exitcode, str(self)))
Error: non-zero exitcode (23) from backup command: duplicity --volsize=25 --full-if-older-than=1W --include=/TKLBAM --gpg-options=--cipher-algo=aes --include-filelist=/TKLBAM/fsdelta-olist --exclude=** --archive-dir=/var/cache/duplicity --s3-unencrypted-connection --allow-source-mismatch / s3://s3-eu-west-1.amazonaws.com/tklbam-x3gl2zp27g4taseq

Hmmm....

Your logs clearly show S3 timing out, though why that would be occurring I have no idea. Have you tried contacting the devs directly using Hub Feedback (the blue button on the left-hand side)? It might be worth a shot - maybe they have some ideas about where your problem lies.

I have contacted the

I have contacted the developers, and they are looking into this.  

What might help in diagnosing this is a simple ping test to the S3 host, though I suspect this is a service-level issue.

My TKLBAM backups from a couple of EC2 instances have not failed, so it clearly has something to do with the network path - but all other network activity from the VM and laptop is quite reliable, with only very occasional dropouts that should be covered by the 5 retries for S3.

non-zero exitcode

When I back up via TKLBAM, the backup runs fully with 0 errors, and the last line is: warning: can't update Hub of backup completed: error(65, 'GnuTLS recv error (-9): A TLS packet with unexpected length was received.')

If I try to restore then I get the message: duplicity.Error: non-zero exitcode (23) from backup command: duplicity --archive-dir=/var/cache/duplicity --s3-unencrypted-connection s3://s3-eu-west-1.amazonaws.com/tklbam-vlmhuag7zalklrsm /tmp/tklbam-VxiFwg

This is a persistent error which keeps coming every few weeks...

If I run the backup-restore again, the server comes back with:

##########################
## FIXCLOCK HOOK FAILED ##
##########################

Amazon S3 and Duplicity need a UTC synchronized clock so we invoked the
following command::

    ntpdate -u pool.ntp.org

Unfortunately, something went wrong...

 1 Oct 11:57:44 ntpdate[30876]: no server suitable for synchronization found
Traceback (most recent call last):
  File "/usr/bin/tklbam-restore", line 553, in <module>
    main()
  File "/usr/bin/tklbam-restore", line 488, in main
    hooks.restore.pre()
  File "/usr/lib/tklbam/hooks.py", line 76, in pre
    self._run("pre")
  File "/usr/lib/tklbam/hooks.py", line 71, in _run
    _run_hooks(self.LOCAL_HOOKS, (self.name, state))
  File "/usr/lib/tklbam/hooks.py", line 40, in _run_hooks
    (fpath, " ".join(args), e.exitcode))
hooks.HookError: `/etc/tklbam/hooks.d/fixclock restore pre` non-zero exitcode (1)


As noted above, this appears to be some sort of network issue

Out of interest have you tried manually running the ntpdate command? (As noted in your error message).
