
Error Serializing MySQL database to /TKLBAM/myfs

Ronan0:

Serializing MySQL database to /TKLBAM/myfs
------------------------------------------

Traceback (most recent call last):
  File "/usr/bin/tklbam-backup", line 510, in <module>
    main()
  File "/usr/bin/tklbam-backup", line 443, in main
    opt_resume, True, dump_path if dump_path else "/")
  File "/usr/lib/tklbam/backup.py", line 237, in __init__
    self._create_extras(extras_paths, profile_paths, backup_conf)
  File "/usr/lib/tklbam/backup.py", line 183, in _create_extras
    limits=conf.overrides.mydb, callback=mysql.cb_print()) if self.verbose else None
  File "/usr/lib/tklbam/mysql.py", line 553, in backup
    mna = MysqlNoAuth()
  File "/usr/lib/tklbam/mysql.py", line 698, in __init__
    self.orig_varrun_mode = stat.S_IMODE(os.stat(self.PATH_VARRUN).st_mode)
    
OSError: [Errno 2] No such file or directory: '/var/run/mysqld'

Exception AttributeError: "MysqlNoAuth instance has no attribute 'orig_varrun_mode'" in <bound method MysqlNoAuth.__del__ of <mysql.MysqlNoAuth instance at 0x7fa2f396e560>> ignored

 

I have mysqld in the following location - /usr/sbin/mysqld. 
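The traceback shows TKLBAM's MysqlNoAuth helper calling os.stat() on /var/run/mysqld and failing because that directory is gone (the mysqld binary location isn't the issue). A quick sanity check, as a sketch (the commented-out fix assumes the stock Debian mysql user and ownership, so adjust if yours differs):

```shell
# Does the runtime directory MySQL and TKLBAM expect actually exist?
if [ -d /var/run/mysqld ]; then
    status="present"
    ls -ld /var/run/mysqld
else
    status="missing"
fi
echo "/var/run/mysqld is $status"
# If missing, it can be recreated with the ownership a stock install uses:
# mkdir -p /var/run/mysqld && chown mysql:mysql /var/run/mysqld
```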

Ronan0:

Additionally, after trying to run this backup, it has done something to my setup so that my systems cannot connect:

 

java.sql.SQLException: unable to get a connection from pool of a PoolingDataSource containing an XAPool of resource DEFAULT_transactional_DS with 0 connection(s) (0 still available)

 

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

 

java.net.ConnectException: Connection refused
Ronan0:

After a server restart, everything is fine. Does TKLBAM try to stop mysqld or something?

(This server runs applications off Tomcat btw. )

Ronan0:

Any ideas on this? Stopping Tomcat before the backup does not help either. The only other way this server differs from my others is that it is from OVH. Thanks.

Ronan0:

Bump: OSError: [Errno 2] No such file or directory: '/var/run/mysqld'

rest of error as per OP.

Can't work out why I can't get this server to back up with TKLBAM.

Jeremy Davis:

As per the subject line, my guess is that when TKLBAM dumps the DB your system is running out of RAM which is causing MySQL to crash. Once it crashes, TKLBAM can't get any more DB data (so it too crashes). When you try to access the server, the website isn't working (because MySQL isn't running).

When you reboot, everything seems fine again (MySQL is running again) until next time you run out of RAM...

Obviously I'm only guessing and I can't be 100% sure, but going from experience, I think it's likely I'm right...
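If the out-of-RAM theory is right, the kernel's OOM killer would have logged killing mysqld. A rough way to check, as a sketch (the exact message wording varies between kernel versions, and dmesg may need root on some systems):

```shell
# Current memory situation (free may be absent on very minimal installs)
free -m 2>/dev/null || echo "free not available"
# Count OOM-killer traces since boot
oom_hits=$(dmesg 2>/dev/null | grep -ciE 'out of memory|oom-killer' || true)
echo "OOM-related kernel messages since boot: $oom_hits"
```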

Ronan0:

This is a machine with 6GB of memory. Memory usage has never peaked above 55%. There's a fairly small database on it.

Looking at the monitor from yesterday when I was trying this backup again, memory peaked at 28.75%.

So, you appear to understand from the error message that MySQL is crashing. But there has to be something else causing it, aside from a memory issue.

Jeremy Davis:

Perhaps try checking the logs?

As a general rule, most logs (especially system ones and those of most Debian packages) can be found in /var/log. Probably the one that will be most helpful here is the MySQL log. IIRC it's something like /var/log/mysql.log (or similar).
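For example, to peek at the tail of the usual MySQL log locations on a Debian-ish install, something like this works (a sketch; paths vary between versions, so the loop just skips whatever isn't there):

```shell
# Survey the common MySQL log locations and show the last lines of each
for LOG in /var/log/mysql.log /var/log/mysql.err /var/log/mysql/error.log; do
    if [ -s "$LOG" ]; then
        echo "== last lines of $LOG =="
        tail -n 20 "$LOG"
    else
        echo "$LOG is empty or absent"
    fi
done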

Hopefully that may give some clues.

Ronan0:

Thanks for that.

It seems to be denying access. Do I need to somehow put the root password into TKLBAM? 

2017-05-30T08:24:15.435239Z 23723 [Note] Access denied for user 'root'@'localhost' (using password: NO)
2017-05-30T08:24:15.441292Z 23724 [Note] Access denied for user 'root'@'localhost' (using password: NO)
2017-05-30T08:24:15.452058Z 23725 [Note] Access denied for user 'root'@'localhost' (using password: NO)
2017-05-30T08:24:15.457254Z 23726 [Note] Access denied for user 'root'@'localhost' (using password: NO)
2017-05-30T08:24:15.458883Z 0 [Note] Giving 0 client threads a chance to die gracefully
2017-05-30T08:24:15.458917Z 0 [Note] Shutting down slave threads
2017-05-30T08:24:15.458931Z 0 [Note] Forcefully disconnecting 0 remaining clients
2017-05-30T08:24:15.458955Z 0 [Note] Event Scheduler: Purging the queue. 0 events
2017-05-30T08:24:15.459033Z 0 [Note] Binlog end
2017-05-30T08:24:15.501800Z 0 [Note] Shutting down plugin...............................................

Jeremy Davis:

By default TKLBAM should be using the hidden "system maintenance" account (special root account only available locally which can access MySQL using a stored password).

But it certainly looks like something is trying to log in to MySQL using the root account (but no password)!
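On Debian-based builds those maintenance credentials usually live in /etc/mysql/debian.cnf; if that file is missing or its stored password is stale, tools can fall back to 'root' with no password, which would match those "Access denied" lines. A sketch to verify the stored credentials without printing the password (assumes the mysql client is installed):

```shell
# Test the Debian maintenance credentials, if present
CNF=/etc/mysql/debian.cnf
if [ ! -r "$CNF" ]; then
    result="absent"
elif mysql --defaults-file="$CNF" -e 'SELECT 1;' >/dev/null 2>&1; then
    result="working"
else
    result="rejected"
fi
echo "maintenance credentials: $result"
```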

Also, perhaps there is a separate MySQL error log? Actually, after the service has crashed, but before you restart it, running 'service mysql status' should give you the last few lines of log for the MySQL daemon. Even if that doesn't give you a ton of clues, it should also hint at what command to run to get further daemon log entries. You could also check the daemon log manually, but using systemd is probably easier as it will only give you the MySQL related stuff.
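A sketch of that check; on systemd installs the journal keeps the daemon's output even after a crash (the unit name may be mysql or mariadb depending on the build, so adjust UNIT if needed):

```shell
# Last status summary plus recent log lines for the MySQL daemon
UNIT="${UNIT:-mysql}"
service "$UNIT" status 2>/dev/null || echo "service status unavailable for $UNIT"
# Fuller daemon history from the journal: newest 50 lines, no pager
journalctl -u "$UNIT" -n 50 --no-pager 2>/dev/null || echo "journalctl not available"
```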

Ronan0:

Thanks. I will look further into the logs. 

The above was copied from the MySQL error log.

Could TKLBAM trigger some other process to try and log in to MySQL?

The only other user account in this installation of MySQL is mysql.sys@localhost - is that the system maintenance account you mean that TKLBAM uses?

Jeremy Davis:

TBH I thought it was something more like debian-sys-maint but perhaps that's changed?

Sorry I thought that looked more like a general log rather than an error log. It certainly seems like it was shutting down though. So perhaps look higher up/further back in that log (if there is higher up/further back). It'd be interesting to see what else was happening at "08:24:15", or just before...

If that is all there is, then perhaps have a look for a log file with the same name but with a number appended on the end. There may also be files with a .gz on the end. The files ending in .gz have been compressed, so you'll need to use zcat to view them (rather than cat or tail).

Also, IIRC MySQL has 2 logs, an error log and a more general log. So I just checked on a server that I have running here and I actually see 3 log files (all in /var/log): mysql.log, mysql.err and mysql/error.log. On my system both mysql.log and mysql.err are empty but mysql/error.log has a few entries, so I'm guessing that's the one you're looking at. It may be worth double checking those other 2 log files just in case.

Also there are other logs which may give additional relevant info: daemon.log (log of all services), syslog (general system log) and dmesg (kernel messages - maybe there is something funky going on with hardware?). I'd be looking for timestamps that roughly match what is in your MySQL error log (i.e. around 08:24 on 2017-05-30). Note that some of the logs may have a slightly different timestamp format. dmesg actually uses a completely different time scale (seconds since boot).
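To correlate those logs, grepping each one for the minute of the crash saves a lot of scrolling. A sketch (STAMP here is taken from the error log timestamps above; rotated .gz files need zcat instead):

```shell
# Pull entries from the system logs around the time MySQL shut down
STAMP="${STAMP:-08:24}"
for log in /var/log/daemon.log /var/log/syslog; do
    [ -r "$log" ] || continue
    echo "== $log around $STAMP =="
    grep "$STAMP" "$log" | tail -n 20
done
# compressed rotations: zcat /var/log/syslog.*.gz | grep "$STAMP"
echo "search complete"
```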

Once you think you may have found something of interest, to check, reboot your server (so everything is running nicely). Then log into your server using multiple separate SSH sessions (one for each log you wish to watch, plus one extra to run the backup). In all but one of the sessions, tee up the log files to display in real time. Do that like this:

 tail -f /var/log/<name-of-log-file>

That should display the last few lines of the log file, then sit there waiting. When new log entries are written, they'll display instantly. To exit, press <Ctrl>-<C>.

Once you have all that set up, manually trigger a TKLBAM backup in the remaining session ('tklbam-backup'). Hopefully that will give you some clearer idea of what is actually happening and when...
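If juggling several SSH sessions is a pain, a single tail can follow all the logs at once (GNU tail labels each chunk of output with the file it came from). A sketch; the sleep just stands in for the time the backup takes:

```shell
# Follow several logs in one session; -F keeps retrying files that appear later
tail -n 0 -F /var/log/syslog /var/log/daemon.log 2>/dev/null &
TAIL_PID=$!
sleep 1   # in practice: run 'tklbam-backup' here and watch the output above
kill "$TAIL_PID" 2>/dev/null || true
```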

If you find anything that seems relevant, please post (probably better to err on the side of sharing too much, than not enough).

Actually, one other thing that occurs to me: perhaps you are running out of HDD space? That would cause similar behaviour (although TBH it seems unlikely, as there'd probably also be other warnings). To double check, try this:

df -h

Look for the volume mounted on '/' (should be the first entry under the headings). Ideally that shouldn't be too much more than about 70% (to leave headroom for TKLBAM backups and restores), although it probably shouldn't cause immediate issues so long as there is enough free room to create your backup.
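For a quick scripted version of that check (df -P keeps each filesystem on one line, which makes the Use% column easy to grab; the 70% threshold is the rough headroom figure mentioned above):

```shell
# Extract the Use% figure for the root filesystem and compare against 70%
usage=$(df -P / | awk 'NR==2 {sub("%","",$5); print $5}')
echo "root filesystem at ${usage}% capacity"
if [ "$usage" -lt 70 ]; then
    echo "plenty of headroom for TKLBAM to stage a backup"
else
    echo "consider freeing space before running tklbam-backup"
fi
```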
