MysqlDump: Lock wait timeout exceeded; try restarting transaction

You are here because you got the following error while running MysqlDump: Lock wait timeout exceeded; try restarting transaction. What does this error mean, and how can you get MysqlDump working again? You have come to the right place for the answer.

Transactions

The error indicates that mysqldump timed out while waiting for a transaction to complete. It shows up when dumping InnoDB tables, which support transactions. A transaction groups a set of data changes together so the database can apply them completely, and in order, before moving on to the next change.

Think of a database transaction like checking out at a grocery store. When you approach the cash register, there is typically a line of other customers waiting to check out. The cashier takes each customer in order and helps them purchase their items. When a customer has finished their purchase, their transaction is complete, and the next customer in line can start theirs.

So what is the Lock Wait Timeout?

So this is why you get the “Lock wait timeout exceeded; try restarting transaction” error from MysqlDump. MysqlDump is attempting to dump the information in the database or table, and to ensure the data is consistent it attempts to lock the table during the process. This lock stops new transactions from starting and ensures that all previous transactions have completed.

If the lock attempt times out, that typically indicates that either the queue of pending changes was too long or a transaction took too long to complete. In either case the MysqlDump process was unable to get a complete copy of all the data, so it errors out with “Lock wait timeout exceeded; try restarting transaction” and fails.

This error is especially common when using MysqlDump in a cluster environment, or on a very busy server with lots of transactions queued up.
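If you are curious how long your server will wait before throwing this error, you can check the InnoDB lock wait timeout (50 seconds by default). A quick diagnostic, assuming the standard mysql client:

mysql -u [username] -p -e "SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';"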

How do I avoid Lock wait timeout exceeded errors?

The answer is a simple MysqlDump option that avoids the lock wait process altogether. I have previously briefly described the process in this post about how to Dump All MySQL Databases into Individual SQL Files. Use this MysqlDump option to avoid those errors.

--single-transaction

This simple command option tells MysqlDump not to wait for any table locks. Instead it wraps the whole dump in a single transaction, exporting a consistent snapshot of the data as it existed when the dump began. So you may not have the very latest changes, since other transactions are still processing in the database, but you will have a consistent copy that is only missing those in-flight changes. (Note that this only works with transactional engines like InnoDB.)
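For example, a dump of a single database using this option might look like the following, with “[username]” and “db_name” as placeholders for your own values:

mysqldump -u [username] -p --single-transaction db_name > db_name.sql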

This is super helpful in my environment, where I take periodic dumps of my databases at regular intervals. It is much more important that the data is backed up regularly than that I have every change up to the second. If something was changing at the time of the backup, it will be captured in the next one. And if the MysqlDump process fails because of too many changes, then I get no backup at all.

Good luck with your next MysqlDump process. Hopefully you get a good backup and avoid all the lock timeout errors 😉

Getting Fancy with Mysqldump Tricks

Mysqldump tricks help you get the most out of this amazing and essential tool. It is typically used for backing up and migrating MySQL databases and data. But sometimes you may want to get a little fancier than just backing up or restoring a full database. Here are a few Mysqldump tricks to maximize your effectiveness.

Mysqldump Tricks – Dump a single MySQL table

If you want to dump a single table from a MySQL database you can. The following command will help you accomplish this amazing feat:

mysqldump [options] db_name table_name > filename.sql

And just like that you have a file with just that table exported into it. But what if you want to dump more than one table, but not the whole DB? The following command will help you get there:

mysqldump [options] db_name table1_name [table2_name table3_name ...] > filename.sql

Simply adding the table names with spaces between each name will add them to the export file.
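So a concrete example, using some hypothetical WordPress database and table names, might look like this:

mysqldump -u [username] -p wordpress_db wp_posts wp_comments wp_users > wp_tables.sql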

Skip specific tables in a DB with Mysqldump

Now that we know how to export only specific tables from a DB, how can we use Mysqldump to export a full DB but exclude one or more tables? Use the following command to skip a single table:

mysqldump [options] db_name --ignore-table=db_name.table1_name > filename.sql

Or if you want to exclude multiple tables:

mysqldump [options] db_name --ignore-table=db_name.table1_name --ignore-table=db_name.table2_name --ignore-table=db_name.table3_name > filename.sql

Your Mysqldump powers are increasing. Now let’s move on to something a bit different.

Restore a single table from a Mysqldump File

So you already have a Mysqldump file with multiple tables in it, or a full DB dump, but you only want to restore a single table from the file. Ok, so this isn’t exactly a Mysqldump trick, it’s a Sed trick, it just operates on a Mysqldump file.

Let’s say you have a Mysqldump file called “filename.sql” and you want to restore only the table named “myFavoriteTable”. Using the command below Sed will copy the correct table contents into the file “myFavoriteTable.sql” so you can restore that file/table individually.

sed -n -e '/CREATE TABLE.*`myFavoriteTable`/,/Table structure for table/p' filename.sql > myFavoriteTable.sql
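Once you have the extracted file, restoring it is just a normal import. One caveat: the sed range above starts at the CREATE TABLE statement, so the extracted file has no DROP TABLE line; if the table already exists in the target DB you will want to drop it first. A minimal restore would look like:

mysql -u [username] -p db_name < myFavoriteTable.sql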

Now you have several new Mysqldump tricks up your sleeve. Hopefully they save you time and effort in the future.

If you would like to learn how to use compression with mysqldump check out this post.

Make a Full Disk Backup with DD

Recently I had a drive that was showing the early warning signs of failure, so I decided I had better make a backup copy of the drive, and then push that image onto another drive before it failed. As it turned out, the drive was fine; it was the SATA cable that was failing. But the process reminded me what a useful tool dd is, refreshed my knowledge of how to use it, and reminded me how to make a full disk backup with dd.

What is DD?

DD stands for “Data Definition”, and it has been around since about 1974. It can be used to read, write, and convert data between files, filesystems, and block level devices. As a result, dd works well for copying the contents of a partition, obtaining a fixed amount of random data from /dev/random, or performing a byte order transformation on data.

So Let's Make a Full Disk Backup with DD

I will start with the command I used to make a full disk backup with dd. And then give you a breakdown of the different command elements to help you understand what it is doing.

dd if=/dev/sdc conv=sync,noerror status=progress bs=64K | gzip -c > backup_image.img.gz

The command options break down like this:

if=/dev/sdc this defines the “input file” which in this case is the full drive “/dev/sdc”. You could do the same with a single partition like “/dev/sdc1”, but I want all the partitions on the drive stored in the same image.

conv=sync,noerror the “sync” part tells dd to pad any short read out to the full block size with nulls, so that if a block cannot be fully read, the data that follows it still lands at the correct offset in the image. The “noerror” portion prevents dd from stopping when a read error is encountered. The “sync” and “noerror” options are almost always used together.

status=progress tells the command to regularly give an update on how much data has been copied. Without this option the command will still run but it won’t give any output until the command is complete. So making a backup of a very large drive could sit for hours before letting you know it is done. With this option a line like this is constantly updated to let you know how far along the process has gone.

1993998336 bytes (2.0 GB, 1.9 GiB) copied, 59.5038 s, 33.5 MB/s

bs=64K specifies that the “Block Size” of each chunk of data processed will be 64 Kilobytes. The block size can greatly affect the speed of the copy process. A larger block size will typically accelerate the copy process unless the block size is so large that it overwhelms the amount of RAM on your computer.

Making a compressed backup image file

At this point you could use the “of=/dev/sdb” option to output the contents directly to another drive /dev/sdb. But I opted to make an image file of the drive, and piping the dd output through gzip allowed me to compress the resulting image into a much smaller image file.

| gzip -c pipes the output of dd into the gzip command and writes the compressed data to stdout. Other options could be added here to change the compression ratio, but the default compression was sufficient for my needs.

> backup_image.img.gz redirects the output of the gzip command into the backup_image.img.gz file.

With that command complete I had copied my 115GB drive into a 585MB compressed image. Most of the drive had been empty space, but without the compression the image would have been 115GB. So this approach can make a lot of sense if you are planning on keeping the image around. If you are just copying from one drive to another then no compression is needed.
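If you want a quick sanity check on the compressed image without restoring it, gzip can verify the archive’s integrity (it prints nothing if the file is intact):

gzip -t backup_image.img.gz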

So there you have it, the process of making a full disk backup with dd. But I guess that is only half the story, so now I will share the command I used to restore that image file to another drive with dd.

Restoring a Full Drive Backup with DD

Fortunately the dd restore process is a bit more straightforward than the backup process. So without further ado, here is the command.

gunzip -c backup_image.img.gz | dd of=/dev/sdc status=progress

gunzip -c backup_image.img.gz right off the bat “gunzip” starts decompressing the file “backup_image.img.gz” and the “-c” sends the decompressed output to stdout.

| dd of=/dev/sdc pipes the output from gunzip into the dd command which is only specifying the “output file” of “/dev/sdc”.

status=progress again this option displays some useful stats about how the dd process is proceeding.

Once dd has completed the transfer you should be good to go. But there are a couple of caveats to remember. First, the drive you restore to should be the same size or larger than the backup drive. Second, if the restore drive is larger, you will end up with empty space after the restore is complete. For example, a 115GB image restored to a 200GB drive will leave the first 115GB of the drive usable and 85GB of unallocated space at the end. So you may want to expand the restored partition(s) to fill the extra space with parted, or a similar tool. Lastly, if you use a smaller drive for the restore, dd will not warn you that the image won’t fit; it will just start copying and fail when it runs out of space.
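As a minimal sketch of expanding a restored partition, assuming the partition to grow is partition 1 and holds an ext4 filesystem (adjust the device and partition number for your setup):

parted /dev/sdc resizepart 1 100%
resize2fs /dev/sdc1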

Conclusion

DD is an amazing tool that has been around for a while. And it continues to be relevant and useful each day. It can get you out of a bind and save your data, so give it a whirl and see what it can help you with today.

Here are a couple of resources that I referenced to help me build my dd command: a guide on making a full metal backup with dd, and a general DD usage guide.

Dump All MySQL Databases into Individual SQL Files

Anyone who is responsible for managing a MySQL database will eventually run into this problem. You either need to dump all your MySQL databases for a backup, or to prepare for an upgrade. Whatever your circumstances, there are several different methods you can employ to dump your MySQL DBs. Hopefully I can give you the basic tools to get you on your way.

Dump All DBs into a Single SQL File

This first method is what I would call the quick and dirty method. It is straightforward and just dumps all of the databases on a server into a single SQL file. For many people this is sufficient, but it may not be a good option for larger databases or backup/restore processes, since all of the DBs are bundled into a single file. Each of the values in brackets “[]” is a placeholder for your own values.

mysqldump -u [username] -p --all-databases > allDB.sql

If all you need is a quick backup of everything, this may be your ticket. The “--all-databases” flag does the magic here, dumping all of the MySQL databases into a single SQL file. More details on this method can be found here. But if you want to be able to easily restore an individual database, you may want to use an approach like this.

Dump All MySQL Databases into Individual Files

for I in $(mysql -u [username] -p[mypassword] -h [Hostname/IP] -e 'show databases' -s --skip-column-names); do mysqldump -u [username] -p[mypassword] -h [Hostname/IP] $I > "/home/user/$I.sql"; done

This command first calls “mysql” and gets a list of databases on the server. The command feeds that list of MySQL databases into “mysqldump” to get an individual SQL file for each database on the server. Finally, those SQL files are saved in the location indicated, “/home/user/dbname.sql” in this example.

A few things to note about this command. First, you will notice that I have included the password in the command itself. The command will complain that using the password on the command line is not safe, but it is required for the loop to run unattended. The “-p” is immediately followed by the password, without a space; that is how the option functions, and if you add a space it will not work.

The “$I” is a variable and it will have all the database names in it, as it iterates through your listing of DBs. So as you modify the command to fit your specific setup, just make sure to keep that for consistency.
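One refinement worth mentioning: the 'show databases' list also includes MySQL’s built-in system schemas. If you only want your own application databases, a variant of the same loop could filter them out with grep, like so:

for I in $(mysql -u [username] -p[mypassword] -h [Hostname/IP] -e 'show databases' -s --skip-column-names | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do mysqldump -u [username] -p[mypassword] -h [Hostname/IP] $I > "/home/user/$I.sql"; done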

Additional Mysqldump Options

There are a couple of additional mysqldump options that you may want to add depending on your requirements.

--single-transaction

This option keeps mysqldump from trying to get a complete lock on the database. It tells mysqldump to wrap the dump in a single transaction, exporting a consistent snapshot of the DB contents as of the moment the dump began. This can be especially helpful when you are dumping a DB that is especially large, or on a very busy server. Without this option mysqldump may time out while waiting to get a lock on the DB. For more details on how this option works check out this post.

--default-character-set=utf8mb4

If you are working with data that may contain emoji or other 4-byte UTF-8 characters, you will want this flag. It ensures that your dumped SQL file has the correct character set information to recreate those characters when the file gets reimported. Without this option, your emoji will show up as strange/random text characters when you restore your backup.
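Putting these together, a dump that avoids lock waits and preserves 4-byte characters might look like this, placeholders as before:

mysqldump -u [username] -p --single-transaction --default-character-set=utf8mb4 db_name > db_name.sql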

Adding Compression

I don’t typically use compression on my SQL dumps since I like to use rdiff-backup as my backup mechanism. But for those who would like to compress their mysqldump in one step here is the basic gist of it.

mysqldump -u [username] -p --all-databases | gzip > allDB.sql.gz

You just pipe the output from the mysqldump command into gzip (or bzip2) to compress the contents. The same can easily be added to the command above to dump all MySQL databases into individual compressed files, like so.

for I in $(mysql -u [username] -p[mypassword] -h [Hostname/IP] -e 'show databases' -s --skip-column-names); do mysqldump -u [username] -p[mypassword] -h [Hostname/IP] $I | gzip > "/home/user/$I.sql.gz"; done
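To restore one of those compressed dumps later, pipe it back the other way, through gunzip into the mysql client (assuming the target database already exists):

gunzip -c /home/user/db_name.sql.gz | mysql -u [username] -p db_name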

Hopefully these examples help you get the backups you need. Have some fun along the way.

How to Use Rdiff-backup – A Simply Powerful Backup Tool

In this post I hope to help you understand how to use Rdiff-backup. I stumbled across Rdiff-backup several years ago, and it has helped me streamline and simplify most of my file-based backup processes on Linux servers. As the name implies, a diff is performed on the files being backed up, so only the differences get backed up. As you can probably guess, storing only the diffs of changes can lead to a much smaller backup.

Some advantages of Rdiff-backup: it utilizes rsync, so it can quickly and easily mirror a directory. Backups can happen in seconds if there have been few changes. Your data all travels over a secure ssh connection, so your files are safe during transport. And being a simple command line tool, you can easily script out a backup scenario, or just use it straight from your terminal.

How I Use Rdiff-backup

I typically use Rdiff-backup with my web hosting clients; it allows for quick backup and restore of their web files. And because most of the files don’t change from day to day, the backup is lightning fast.

rdiff-backup --exclude '**cache/' --exclude '**debug.log' /var/www user@ipAddress::/home/user/backup
rdiff-backup --remove-older-than 52W user@ipAddress::/home/user/backup

Let’s break down the command(s). The first Rdiff-backup command has two “--exclude” options, which, as the name indicates, exclude the referenced files or directories from the backup. An exclude pattern can be a full relative path, or, as in this case, use “**” to match any path. A single asterisk “*” could also be used, but it matches any part of a path that does not contain a “/”.

The “/var/www” part of the command is the directory to back up. The destination uses standard ssh login credentials, “user@ipAddress”, e.g. “bob@10.1.1.1”, followed by a double colon, which is different from normal rsync/scp formatting. The “::” precedes the backup destination directory.

So to explain in basic terms, the first command will backup everything under the “/var/www” directory, except any directory ending in “cache/” or file ending in “debug.log”. The backup will be made in the “/home/user/backup” directory.
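Since it is a simple command line tool, scheduling it is easy too. As a sketch, a crontab entry like the following would run the backup nightly at 2:00 AM; it assumes ssh key authentication is already set up so no password prompt is needed:

0 2 * * * rdiff-backup --exclude '**cache/' --exclude '**debug.log' /var/www user@ipAddress::/home/user/backup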

Backup Lifecycle

Now that you have a backup created, how do you manage how long it will be kept? Without a command like the second one above, the Rdiff-backup increments will be kept indefinitely. Adding the second command helps us manage how long to keep the backups.

rdiff-backup --remove-older-than 52W user@ipAddress::/home/user/backup

This command is a little simpler than the first; it doesn’t specify a source directory, but rather only how long to keep files. The “--remove-older-than” option accepts a number of different formats. I like to keep my backups for a year, so “52W” gives me 52 weeks of backups. Any existing diffs older than the specified time are removed. Other options are s, m, h, D, W, M, or Y (indicating seconds, minutes, hours, days, weeks, months, or years respectively). Additionally, a “B” can be used to indicate a number of backups, e.g. “3B” would keep the last three backups.

After specifying the number or timeframe of backups to keep, the only other thing to specify is the backup location. The trimming of backup content is then performed on the given location.

Performing a Restore

So now you have your content backed up, and you need to restore something. A backup is only as good as the data you can restore out of it, right? Fortunately the restore process is fairly simple as well.

rdiff-backup -r 5D user@ipAddress::/home/user/backup/example.com/index.php /home/localUser/www/restore/

The “-r” option tells Rdiff-backup to restore files, and uses the same time format as the delete option above. In this case we are restoring a version of the file “example.com/index.php” from 5 days ago. The file is being restored/copied to “/home/localUser/www/restore/”. The same can be done for an entire directory structure.

rdiff-backup -r 3B user@ipAddress::/home/user/backup/example.com/ /home/localUser/www/restore/

This command restores/copies all the contents of the example.com directory to “/home/localUser/www/restore/” as of 3 backups ago. Or if you are hunting for a specific day, you can always do something like this.

rdiff-backup -r 03-05-2020 user@ipAddress::/home/user/backup/example.com/ /home/localUser/www/restore/

That will perform the same restore, but specifically as of the 5th of March 2020. The date used can be “03/05/2020” or “2020-03-05”, and all indicate midnight as of that day.
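And if you are not sure which dates or increments are available to restore from, rdiff-backup can list them for you:

rdiff-backup --list-increments user@ipAddress::/home/user/backup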

For a full rundown on all the options and other details for Rdiff-backup check out the project documentation.

There you have a basic rundown on how to use Rdiff-backup. I find it a very useful and powerful tool, and hope it will help you keep your backups running.