TurnKey Linux Virtual Appliance Library

TKLBAM gives an error | Bigger than max_allowed_packet

Clint Thomson:

When restoring a server using TKLBAM, the following error is displayed when TKLBAM starts to restore a database:

 

mysql got a packet bigger than max_allowed_packet

Any guidance would be appreciated.

Clint Thomson:

Resolved

I used Webmin to increase the size of the max_allowed_packet value to 50M and it fixed this problem.
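The same fix can be made by hand from the shell. This is a minimal sketch assuming the Debian/TurnKey config path /etc/mysql/my.cnf; the demo writes to a scratch file so nothing on a live system is touched.

```shell
# Write the [mysqld] entry to a scratch file to show the exact syntax;
# on a real system this line would be merged into /etc/mysql/my.cnf.
cat > /tmp/max_packet_demo.cnf <<'EOF'
[mysqld]
max_allowed_packet = 50M
EOF
grep 'max_allowed_packet' /tmp/max_packet_demo.cnf
# After editing the real my.cnf, apply with: service mysql restart
```

The value takes effect for the server on restart; clients that need a matching limit can also pass --max_allowed_packet on the command line.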


Liraz Siri:

Thanks for sharing the solution!

Glad you got it worked out and thanks for sharing the solution. This way anyone who runs into this problem will search the site (or Google it) and bump into the solution.
Liraz Siri:

Yes, I'm planning on fixing the mysql max packet issue

It seems it's quite common for more serious users to run into this problem, so I plan on looking into fixing this in a way that is transparent to the user. I had this on my todo list before but I'm bumping up the priority.
L. Arnold:

I wonder if you can "pre-screen" the data for this.

I started having this problem in Magento, and now in Joomla as well. In Joomla just now it did not "break" the restore; it actually reported success, even though when I looked at it I did have a max_allowed_packet error on one file.

In Magento that would completely break the restore process. The Joomla version thought all was good.

The lesson is likely to simply increase the packet size permanently, but I think there is some danger there as well.
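A rough pre-screen along these lines is possible when the dump is available as plain SQL: measure the longest statement and compare it to the server's limit. This is only a sketch; the dump path and the one-statement-per-line layout (mysqldump's extended-insert default) are assumptions.

```shell
# Generate demo data standing in for a real dump (one INSERT per line):
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES ("%s");\n' \
       "$(printf 'x%.0s' $(seq 1 100))" > /tmp/dump_demo.sql
# Longest line in bytes -- if this exceeds max_allowed_packet, the
# restore will fail with "got a packet bigger than max_allowed_packet":
awk '{ if (length($0) > max) max = length($0) } END { print max+0 }' /tmp/dump_demo.sql
# Compare the number against the running server:
#   mysql -N -e 'SELECT @@global.max_allowed_packet;'
```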

I get another error :-(

Hi guys

I have a TKL LAMP 11.1 running Zabbix 1.8.1 and successfully used tklbam-backup.

Unfortunately, when I apply the proposed fix to get past the max_allowed_packet error when using tklbam-restore, I first need to increase the proposed value to 500, but then I get the error detailed below instead.

The VM has 1024MB of RAM allocated to it, the Zabbix services are stopped during the restore, and only 52MB of RAM is used before the restore.

How can I fix this problem that's preventing me from putting the VM back in production?

many thanks in advance :-)

[... snip ...]

table: zabbix/items

ERROR 2013 (HY000) at line 3027198: Lost connection to MySQL server during query
Traceback (most recent call last):
  File "/usr/bin/tklbam-restore", line 313, in <module>
    main()
  File "/usr/bin/tklbam-restore", line 301, in main
    restore.database()
  File "/usr/lib/tklbam/restore.py", line 106, in database
    limits=self.limits.db, callback=mysql.cb_print())
  File "/usr/lib/tklbam/mysql.py", line 402, in restore
    fs2mysql(mysql(), myfs, **kws)
  File "/usr/lib/tklbam/mysql.py", line 365, in fs2mysql
    MyFS_Reader(myfs, limits, skip_extended_insert, add_drop_database).tofile(fh, callback)
  File "/usr/lib/tklbam/mysql.py", line 359, in tofile
    database.tofile(fh, callback)
  File "/usr/lib/tklbam/mysql.py", line 252, in tofile
    table.tofile(fh)
  File "/usr/lib/tklbam/mysql.py", line 308, in tofile
    fh.write("(%s)" % row)
IOError: [Errno 32] Broken pipe
Liraz Siri:

Need more information

MySQL seems to be dying on you in the middle of a restore. I haven't seen that happen before but maybe if you look in the logs you can figure out what is going on. Maybe you are running out of memory?
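If memory is the suspect, the kernel's OOM killer usually leaves a trail in the logs. A quick check, assuming typical Debian log paths (adjust as needed):

```shell
# On a real system you would grep the live logs:
#   grep -i 'out of memory' /var/log/syslog /var/log/mysql/error.log
# Demo of the pattern against a sample kernel OOM-killer line:
echo 'kernel: Out of Memory: Killed process 1234 (mysqld).' > /tmp/oom_demo.log
grep -ci 'out of memory' /tmp/oom_demo.log
```

Watching `free -m` in a second terminal while the restore runs will also show whether memory and swap are being exhausted.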

Thanks for your help

Thanks for your help

This is a very basic setup (TKL LAMP + updates + Zabbix compiled from source).

I even raised the ram to 1.5G for the restore to no avail.

If I skip the restore of mysql databases, the tklbam-restore works fine.

Now, before I take time to investigate further, can you show me how to extract the MySQL backup from the archive so I can try to restore it manually myself?

Many thanks in advance :-)

Champignol

Liraz Siri:

How to extract a mysqldump from your tklbam backup

I can see you're in a pickle so I'll try to help and hopefully we can get to the bottom of this and prevent it from happening to others in the future.

First we'll need to hack TKLBAM so it keeps around the "mysqlfs" file structure your MySQL backup is stored in. Normally this is deleted. Apply this patch to tklbam's source code:


diff --git a/restore.py b/restore.py
index 8094d11..576486d 100644
--- a/restore.py
+++ b/restore.py
@@ -87,6 +87,7 @@ class Restore:

         extras_path = TempDir(prefix="tklbam-extras-")
         os.rename(backup_archive + backup.Backup.EXTRAS_PATH, extras_path)
+        shutil.copytree(extras_path, "/tmp/extras")

         self.extras = backup.ExtrasPaths(extras_path)
         self.rollback = Rollback.create() if rollback else None
Now restore the backup with --skip-database. After the restore your data should be in /tmp/extras/myfs, but it is encoded, so we'll need to convert it back to a mysqldump using this script:

#!/usr/bin/python
"""
Map a filesystem created by mysql2fs back to MySQL

Options:
    --tofile=PATH           Write mysqldump output to file (- for STDOUT)
    -v --verbose            Turn on verbosity

    --skip-extended-insert  Skip extended insert (useful in debugging)
    --add-drop-database     Drop databases and then recreate them

Supports the following subset of mysql(1) options:

    -u --user=USER 
    -p --password=PASS

       --defaults-file=PATH
       --host=HOST

"""
import sys
import getopt

import mysql

def usage(e=None):
    if e:
        print >> sys.stderr, "error: " + str(e)

    print >> sys.stderr, "Syntax: %s [-options] path/to/myfs [ -?database/table ... ] " % sys.argv[0]
    print >> sys.stderr, __doc__.strip()
    sys.exit(1)

def main():
    try:
        opts, args = getopt.gnu_getopt(sys.argv[1:], 'u:p:v', 
                                       ['verbose', 'tofile=',
                                        'skip-extended-insert',
                                        'add-drop-database',
                                        'user=', 'password=', 'defaults-file=', 'host='])
    except getopt.GetoptError, e:
        usage(e)

    opt_verbose = False
    opt_tofile = None
    opt_skip_extended_insert = False
    opt_add_drop_database = False
    myconf = {}
    for opt, val in opts:
        if opt in ('-v', '--verbose'):
            opt_verbose = True
        elif opt == '--tofile':
            opt_tofile = val
        elif opt == '--skip-extended-insert':
            opt_skip_extended_insert = True
        elif opt == '--add-drop-database':
            opt_add_drop_database = True
        elif opt in ('-u', '--user'):
            myconf['user'] = val
        elif opt in ('-p', '--password'):
            myconf['password'] = val
        elif opt == "--defaults-file":
            myconf['defaults_file'] = val
        elif opt == "--host":
            myconf['host'] = val
        else:
            usage()

    if not args:
        usage()

    myfs = args[0]
    limits = args[1:]

    if opt_tofile:
        if opt_tofile == '-':
            fh = sys.stdout
        else:
            fh = file(opt_tofile, "w")
    else:
        fh = mysql.mysql(**myconf)

    callback = None
    if opt_verbose:
        print "destination: " + fh.name
        callback = mysql.cb_print()

    mysql.fs2mysql(fh, myfs, limits, callback, 
                   opt_skip_extended_insert,
                   opt_add_drop_database)

if __name__ == "__main__":
    main()
Save this script to the same directory where tklbam's source code lives (e.g., as fs2mysql.py). Now you can run the following command:
./fs2mysql.py /tmp/extras/myfs --tofile=/tmp/mymysqldump
When you get to the bottom of this, please try to isolate the issue and report back after you figure what is going on.

Yes sir!

Thanks a zillion.

Will take a look at this and come back to you.

What I have found out up to now is that your hunch was right; top definitely shows me that at some point during the database restore the system needs too much RAM and swaps to death until MySQL dies, then restarts, goes into crash recovery, and complains in the logs that the database was not closed properly.

Hopefully in the end we'll find a way of preventing this problem because, as you say, many could have the same issue arise during an urgently needed restore of the database.

What is odd is the fact that before the restore, while everything is running, less than 25% of RAM is used, but during the tklbam-restore it just skyrockets to kingdom come!

I'm lucky I'm not running on bare metal, as provisioning more RAM/swap space could be more problematic than on a VM hehehe

I'll come back to post my findings later tonight.

Thanks again for your support :-)

Champignol

update

After snapshotting the VM after restoring everything but the database, I raised the RAM to 2G (the max I can give it on this host) and raised the swap space accordingly.

Now, when raising max_allowed_packet to 500MB (otherwise it fails with the initial errors; we'll definitely have to address this one too) and running tklbam-restore 6 --skip-files --skip-packages --no-rollback, at one point near the end of the restore (70 out of 88 tables) RAM rises to 1961MB and swap to 1270MB, but eventually the restore script returns this new error then quits:

 

table: zabbix/items
ERROR 5 (HY000) at line 3027198: Out of memory (Needed 3110484 bytes)
Traceback (most recent call last):
  File "/usr/bin/tklbam-restore", line 313, in <module>
    main()
  File "/usr/bin/tklbam-restore", line 301, in main
    restore.database()
  File "/usr/lib/tklbam/restore.py", line 106, in database
    limits=self.limits.db, callback=mysql.cb_print())
  File "/usr/lib/tklbam/mysql.py", line 402, in restore
    fs2mysql(mysql(), myfs, **kws)
  File "/usr/lib/tklbam/mysql.py", line 365, in fs2mysql
    MyFS_Reader(myfs, limits, skip_extended_insert, add_drop_database).tofile(fh, callback)
  File "/usr/lib/tklbam/mysql.py", line 359, in tofile
    database.tofile(fh, callback)
  File "/usr/lib/tklbam/mysql.py", line 252, in tofile
    table.tofile(fh)
  File "/usr/lib/tklbam/mysql.py", line 308, in tofile
    fh.write("(%s)" % row)
IOError: [Errno 32] Broken pipe
 
To make sure this is memory related, I will take the time to copy the VM to a dev host that has ample RAM, to validate the same restore procedure with 4G of RAM, and will come back to report.
 
Wish me good luck
Liraz Siri:

how big is your database / mysqldump?

TKLBAM should just be pumping data into MySQL. It shouldn't be consuming too much memory. MySQL, on the other hand, probably is. I'm very curious what kind of scale causes MySQL to consume so much memory.

Same problem here

I'm getting the same error when I try to restore. It says out of memory, but no amount of RAM I throw at it helps. Did anyone ever try manually restoring the mysqldump? Did it work?

I might be onto something

If I use the original TKL LAMP 11.1 my.cnf and restore only one table at a time, I eventually get this Python error:

 

# tklbam-restore 6 --skip-files --skip-packages --no-rollback --limits="-mysql:mysql -mysql:zabbix mysql:zabbix/zabbix/auditlog"
Restoring duplicity archive from s3://s3.amazonaws.com/tklbam-###########
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Apr 19 02:48:42 2011
 
Restoring databases
===================
 
Traceback (most recent call last):
  File "/usr/bin/tklbam-restore", line 313, in <module>
    main()
  File "/usr/bin/tklbam-restore", line 301, in main
    restore.database()
  File "/usr/lib/tklbam/restore.py", line 106, in database
    limits=self.limits.db, callback=mysql.cb_print())
  File "/usr/lib/tklbam/mysql.py", line 402, in restore
    fs2mysql(mysql(), myfs, **kws)
  File "/usr/lib/tklbam/mysql.py", line 365, in fs2mysql
    MyFS_Reader(myfs, limits, skip_extended_insert, add_drop_database).tofile(fh, callback)
  File "/usr/lib/tklbam/mysql.py", line 345, in __init__
    self.limits = self.Limits(limits)
  File "/usr/lib/tklbam/mysql.py", line 89, in __init__
    database, table = limit.split('/')
ValueError: too many values to unpack
Liraz Siri:

typo in command line

zabbix/zabbix/auditlog is not a legal limitation. You probably meant zabbix/auditlog. Also, I don't recommend trying to exclude the mysql database; you'll need it for permissions to work.

for some reason I hadn't seen

For some reason I hadn't seen the duplication in zabbix/zabbix/auditlog. I voluntarily skipped the mysql database because I had restored it earlier and wanted to validate restoring individual tables.

Meanwhile, I successfully patched restore.py thanks to your recommendations and extracted the mymysqldump file.

Now, even with 6GB of RAM, and I don't know how many different permutations of my.cnf, there is no way I can do a complete restore without mysqld dying at around 3.2GB of RAM, even with a direct restore using the mysql command line described below and a mymysqldump file of 335MB.

I wonder if anyone else has problems restoring an InnoDB database of this relative size in TKL LAMP 11.1. It's common for people to assume that if the backup works, the restore will lol :-)

In the meantime, while I get some more of your wit and tips, I'll go to sleep and try to restore it tomorrow in another LAMP stack VM to compare the result.

Thanks a zillion, and hopefully we'll get a solution soon to share with others.

Champignol

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
restore.sh
#!/bin/bash
(
echo "SET AUTOCOMMIT=0;"
echo "SET FOREIGN_KEY_CHECKS=0;"
echo "SET unique_checks=0;"
cat /tmp/mymysqldump
echo "SET unique_checks=1;"
echo "SET FOREIGN_KEY_CHECKS=1;"
echo "COMMIT;"
echo "SET AUTOCOMMIT=1;"
) | mysql --user=root --password=############
 

Follow up

Hi Liraz,

I have sent you an email; please go read it.

Cheers

Champignol

Change my.cnf

The best way to change the config is:

pico /etc/mysql/my.cnf
change the max_allowed_packet size
save my.cnf
then service mysql restart

That should solve the problem.

Urgent: restore keeps restoring my.cnf

I have an urgent issue and cannot restore; the above suggestions don't appear to work. The max_allowed_packet value keeps getting reset to 16M, then the restore dies. I tried adding --skip-files to see if it would leave /etc/my.cnf alone, but I still see max_allowed_packet errors on the same table in the same database.

Does anyone have any suggestions? Can I do restores in pieces? Can I directly access the backup data and do parts of it manually? Are there good examples of the tklbam-restore commands to do so, like restoring a particular backup from a particular time, skipping certain files like /etc/my.cnf?

Jeremy:

Have you edited it from the commandline

I would use nano to edit your my.cnf and make sure you save. Perhaps stop MySQL first, then edit it, save it, and restart MySQL. Something like this:

service mysql stop
nano /etc/mysql/my.cnf
<edit the max_allowed_packet><Ctrl><X>
service mysql start

You can also exclude the MySQL DB (check the TKLBAM docs for details) if you wish, which will at least allow you to get your other data back, but obviously won't solve your issue.

Thanks for the suggestion,

Thanks for the suggestion, Jeremy. I ended up just restoring everything but the databases with TKLBAM, and was lucky enough to have a pretty recent SQL dump on another system. Using the mysql command-line tools worked fine, even when TKLBAM continued to complain about packet size (I'd tried editing both via Webmin and vim'ing my.cnf directly, then restarting).

So far, every time I've really been in a jam, TKLBAM hasn't worked for me. Twice I've had systems run out of disk space and there wasn't enough room to restore, and this time it was MySQL and the packet business. I guess I'll switch back to snapshots/rsync/manual MySQL dumps.

Relatedly, root can no longer log into phpMyAdmin, even though it works fine on the command line (mysql -u root -p). Dunno what happened there, or what TKLBAM may not have restored. Maybe the MySQL management tables themselves are still fubar, but TKLBAM won't restore them, so I don't know what to do about that. I'm perfectly comfortable using the command-line mysql client, but I'm concerned about data corruption.

I would love it if tklbam had some kind of extract command, where you could give it a backup id and date, then just pull specific files out, so I could nab just the mysql.* tables and restore them.

Jeremy:

Sounds like you haven't had much joy with it...

But I do like your idea of being able to download and extract your TKLBAM backup, so I registered it as a Blueprint.

Also, you should be able to reset the MySQL admin password with the firstboot script (IIRC /usr/lib/inithooks/bin/mysql.py), although as you say, it should be the same for the command line and for phpMyAdmin (and I thought that it always is...)

L. Arnold:

Try a Debian (TKL 12) Build then Restore

With version 11 I had ongoing problems with TKLBAM and max_allowed_packet etc. What did work very well was to get my version 11 Magento upgraded to Magento 1.6.2, then do a TKLBAM restore to a new Debian machine. No hiccups whatsoever. I even ran the restore from within Webmin.

I do find that certain items like my.cnf can get lost in such restores, and it is good to have your own internal documentation/checklist to see that you get things back to normal.

If you're wrestling with max_allowed_packet, use Jeremy's recommendation (but likely go bigger than this, like 300MB):

# the following will set max_allowed_packet to 100MB
mysql -p -e 'SET GLOBAL max_allowed_packet=100*1024*1024;'

# the following will verify the mysql server accepted the new setting
mysql -p -e 'SELECT @@global.max_allowed_packet;'

This setting will only last until the mysql server is restarted.

I've only done one jump to Debian, but it worked like a charm without setting any packet size. Try that first.

Maybe an old posting; however, I managed to solve this today

I added 'max_allowed_packet = 64M' to /etc/mysql/debian.cnf and /etc/mysql/my.cnf (in both the [client] section and the [mysqld] section, where it normally resides in the vanilla distributed my.cnf).

This debian.cnf file is read by mysql.py in the /usr/lib/tklbam directory for both restore and backup, and should take the max_allowed_packet instruction into account. I managed to restore a SQL dump without specifying --max_allowed_packet=64M as an option to mysql, and that works fine.

hmmm

Hmm, this seems not to be a very good idea, as mysqladmin is started when the mysql service starts, and mysqladmin complains that max_allowed_packet is not a recognised option. This is a bummer.
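A possible workaround (a sketch, not verified on this stack): scope the option to sections mysqladmin does not read, rather than [client]. The [mysqld] entry covers the server and [mysqldump] covers the dump client.

```shell
# Demo config written to a scratch file; on a real system these
# sections would go in /etc/mysql/my.cnf instead of a [client] entry:
cat > /tmp/packet_scope_demo.cnf <<'EOF'
[mysqld]
max_allowed_packet = 64M

[mysqldump]
max_allowed_packet = 64M
EOF
grep -c 'max_allowed_packet' /tmp/packet_scope_demo.cnf
```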
