Cloned Ubuntu DHCP Issues

I just ran into an interesting issue with DHCP. I maintain a cluster of Ubuntu servers for a MooseFS cluster. MooseFS is pretty amazing: it is a clustered filesystem that allows simultaneous access to files from multiple clients, and one of its great features is that it performs well on commodity hardware, so it is relatively inexpensive to set up and manage. After using it for a while in our initial configuration, we decided it was time to upgrade, specifically to add some additional space in the form of three new chunk server nodes.

The beauty of MooseFS is that space, redundancy and performance can be added by simply adding more nodes. So to make things simple (not that it is difficult), after we provisioned some new hardware I simply cloned one of the existing chunk servers onto it. I didn't realize that I would run into cloned Ubuntu DHCP issues.

Discovering the Cloned Ubuntu DHCP Issues

After I finished the cloning, I plugged the new server in on my workbench to finish up the config changes. Our server setup uses a static address on the interfaces we bonded to handle the MooseFS traffic, but I set the management interface to use DHCP.

The configuration process for the first server went quickly, and I left it running while I started on server #2. As soon as server #2 booted up I realized that it had the same IP address as the first server. That sent me checking both servers for any static IP settings or other possible conflicts. Seeing no issues in the config, I also checked our DHCP server and noticed several leases issued to both servers with the same IP.

This got me digging around online for an answer. I have been using Linux for decades, so some of the newer systems and services keep me learning. In this case it was Netplan, which came installed with Ubuntu on this machine. I found that Netplan does not use the interface MAC address as the default DHCP client identifier; unless you specify otherwise, it uses a generated DUID instead.

That was the issue: when I cloned the chunk server I also cloned the DUID that Netplan was using, so each clone looked the same to the DHCP server.

Fixing the DHCP Issues

Fortunately the fix for this problem is very simple. The DHCP entry needs to have:

dhcp-identifier: mac

With that line added to your Netplan setup, the interface's MAC address will be used as the DHCP identifier and your issue should be resolved.

Here is the full config with the new line in place for context. I hope it helps someone avoid the same issue in their own setup.

network:
  bonds:
    bond0:
      addresses:
      - 172.16.10.25/23
      interfaces:
      - ens1
      - ens1d1
      nameservers:
        addresses:
        - 172.16.10.2
        - 172.16.10.3
        search:
        - bdoga.com
      parameters:
        lacp-rate: fast
        mode: 802.3ad
        transmit-hash-policy: layer3+4
  ethernets:
    eno1: 
      dhcp4: yes
      dhcp-identifier: mac
    ens1: {}
    ens1d1: {}
  version: 2 
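
Once the file is updated, the change can be applied without a reboot. A minimal sketch, assuming a recent Ubuntu release where both subcommands are available:

sudo netplan try    # applies the new config and rolls back automatically if you don't confirm
sudo netplan apply  # makes the change permanent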

If that helped I also recommend this article I wrote about finding a replacement for Netstat.

MysqlDump: Lock wait timeout exceeded; try restarting transaction

You are here because you got the following error while performing a MysqlDump: Lock wait timeout exceeded; try restarting transaction. What does this error mean and how can you get MysqlDump working again? You have come to the right place to get your answer.

Transactions

The error indicates that mysqldump timed out while waiting for a transaction to complete. This error shows up when dumping InnoDB tables, which support transactions. A transaction is the mechanism by which the database ensures that a set of data changes is fully applied before it moves on to the next change.

Think of a database transaction like checking out at a grocery store. When you approach the cash register there is typically a line of other customers waiting to check out. The cashier takes each customer in order and helps them purchase their items. When they have finished their purchase, their transaction is complete and the next customer in line can start theirs.

So what is the Lock Wait Timeout?

So when you get the error from MysqlDump: Lock wait timeout exceeded; try restarting transaction, this is why. MysqlDump is attempting to dump the information in the database or table, and to ensure the data is accurate it tries to lock the table during the process. This lock stops new transactions from starting and ensures that all previous transactions have completed.

If the lock attempt times out, that typically indicates that either the queue of pending changes was too long or a transaction took too long to complete. In either case the MysqlDump process was unable to get a complete copy of all the data, so it errors out with “Lock wait timeout exceeded; try restarting transaction” and fails.

This error is especially common when using MysqlDump in a cluster environment, or on a very busy server with lots of transactions queued up.

How do I avoid Lock wait timeout exceeded errors?

The answer is a simple MysqlDump option that avoids the lock wait process altogether. I have previously briefly described the process in this post about how to Dump All MySQL Databases into Individual SQL Files. Use this MysqlDump option to avoid those errors.

--single-transaction

This option tells MysqlDump not to wait for table locks at all. Instead it starts its own transaction and exports a consistent snapshot of the data as it existed when the dump began. You may not have the absolute latest data, since other transactions keep processing while the dump runs, but you will have a consistent copy that is only missing those most recent changes.
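
For example, a dump command using this option might look like the following; the user and database names here are just placeholders:

mysqldump --single-transaction -u backup_user -p my_database > my_database.sql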

This is super helpful in my environment, where I take dumps of my databases at regular intervals. It is much more important that the data is backed up regularly than that I capture every change up to that second. If something was changing at the time of the backup, it will be included in the next one. And if the MysqlDump process fails because of too many changes, I get no backup at all.

Good luck with your next MysqlDump process. Hopefully you get a good backup and avoid all the lock timeout errors 😉

Getting Fancy with Mysqldump Tricks

Mysqldump tricks help you get the most out of this amazing and essential tool. It is typically used for backing up and migrating MySQL databases and data. But sometimes you may want to get a little fancier than just backing up or restoring a full database. Here are a few Mysqldump tricks to maximize your effectiveness.

Mysqldump Tricks – Dump a single MySQL table

If you want to dump a single table from a MySQL database you can. The following command will help you accomplish this amazing feat:

mysqldump [options] db_name table_name > filename.sql

And just like that you have a file with just that table exported into it. But what if you want to dump more than one table, but not the whole DB? The following command will help you get there:

mysqldump [options] db_name table1_name [table2_name table3_name ...] > filename.sql

Simply adding the table names with spaces between each name will add them to the export file.
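
As a concrete (hypothetical) illustration, dumping a couple of tables from a database named shop_db might look like this:

mysqldump -u root -p shop_db customers orders > shop_tables.sql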

Skip specific tables in a DB with Mysqldump

Now that we know how to export only specific tables from a DB, how can we use Mysqldump to export a full DB but exclude one or more tables? Use the following command to skip a single table:

mysqldump [options] db_name --ignore-table=db_name.table1_name > filename.sql

Or if you want to exclude multiple tables:

mysqldump [options] db_name --ignore-table=db_name.table1_name --ignore-table=db_name.table2_name --ignore-table=db_name.table3_name > filename.sql

Your Mysqldump powers are increasing. Now let’s move on to something a bit different.

Restore a single table from a Mysqldump File

So you already have a Mysqldump file with multiple tables in it, or a full DB dump, but you only want to restore a single table from the file. Ok, so this isn’t exactly a Mysqldump trick; it’s a Sed trick that just happens to use a Mysqldump file.

Let’s say you have a Mysqldump file called “filename.sql” and you want to restore only the table named “myFavoriteTable”. Using the command below Sed will copy the correct table contents into the file “myFavoriteTable.sql” so you can restore that file/table individually.

sed -n -e '/CREATE TABLE.*`myFavoriteTable`/,/Table structure for table/p' filename.sql > myFavoriteTable.sql
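
Once the table has been extracted, you can restore it with the standard mysql client; the user and database names here are placeholders:

mysql -u user -p db_name < myFavoriteTable.sql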

Now you have several new Mysqldump tricks up your sleeve. Hopefully they save you time and effort in the future.

If you would like to learn how to use compression with mysqldump check out this post.

How To Run Program Before Login Prompt Ubuntu

I recently installed a new server in my home office. I typically just leave my servers to run headless. But with an old monitor lying around and plenty of idle CPU time I decided to play a bit. I mounted the monitor to my office rack and then started to work.

Rather than just display the normal text login prompt, I wanted it to show something cool at boot. I started to dig around on the web and found this article. It quickly described how to run a program before the login prompt on Ubuntu 16.04+.

Run Your Program before login

So I wrote a simple script /root/loginMatrix.sh which would simply run cmatrix on the main (tty1) console. Once I exited cmatrix it would display the normal login prompt. The sample script is as follows:

#!/bin/sh
/usr/bin/cmatrix -abs
exec /bin/login

I then edited the config file for getty@tty1 here (for Ubuntu 16.04+ only, not sure on other distributions):

/etc/systemd/system/getty@tty1.service.d/override.conf

I changed the contents to be:

[Service]
ExecStart=
ExecStart=-/root/loginMatrix.sh
StandardInput=tty
StandardOutput=tty

and then I ran the following command to activate it:

systemctl daemon-reload; systemctl restart getty@tty1.service
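
As a side note, if that override directory and file don't exist yet, systemd can create and open them for you (and it reloads the unit when you save), so you don't have to create the path by hand:

sudo systemctl edit getty@tty1.service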

After the change the system started to show the cmatrix terminal animation immediately. But once I quit the application it was back to the login prompt.

[Image: cmatrix running on the console prior to the login prompt]

Getting tricky

After running cmatrix for a few days straight, I decided I wanted to change it up a bit, so I made a few adjustments to the /root/loginMatrix.sh script to make it more dynamic. With the following changes it would display something different each time the script ran.

#!/bin/bash
# Pick one of these programs at random each time the script runs
declare -a arr=("/usr/bin/cmatrix -abs" "/snap/bin/asciiquarium" "/usr/sbin/iftop" "/usr/bin/htop")
size=${#arr[@]}
index=$(($RANDOM % $size))
eval "${arr[$index]}"

# Once the chosen program exits, hand the console over to the normal login prompt
exec /bin/login

These changes told the script to randomly choose cmatrix, asciiquarium, iftop, or htop and execute it. Then, as before, once I quit the randomly chosen application it would display the login prompt again. My kids got way too excited when asciiquarium was chosen and had to watch the fish swim by. This solution worked for a while, but eventually I got tired of having to change the displayed program manually, so I started playing with options to automate the program change.

Automating the switch

These changes got a bit trickier. The script had to track the application's PID so it could kill it when the timeout was reached. After trying several different methods I finally ran across this basic method for timing out a process. It isn't perfect: it does rotate through the different options on a ten minute interval, but the exit to the login prompt doesn't work, so it's only most of the way there. Here is my current /root/loginMatrix.sh script:

#!/bin/bash
# Programs to rotate through on the console
declare -a arr=("/usr/bin/cmatrix -abs" "/snap/bin/asciiquarium" "/usr/sbin/iftop" "/usr/bin/cacafire")
size=${#arr[@]}
continue=1
timeout=600   # seconds each program runs before being rotated out
interval=1    # how often (in seconds) to tick down the timer

while [ $continue -eq 1 ]
do
    # Start a random program in the background and remember its PID
    index=$(($RANDOM % $size))
    eval "${arr[$index]} &"
    cmdpid=$!

    # Count down until the timeout expires; if the program dies or is
    # quit manually, kill -0 fails and the whole script exits instead
    ((t = timeout))
    while ((t > 0)); do
        sleep 1
        kill -0 $cmdpid || exit 0
        ((t -= interval))
    done

    # Note: this captures the exit status of the countdown loop above
    exit_status=$?
    echo $exit_status > ext.txt
    if [[ $exit_status -ne 1 ]]; then
        continue=0
    fi

    # Timeout reached: stop the current program so the next one can start
    kill -s SIGTERM $cmdpid && kill -0 $cmdpid || exit 0
    sleep 1
    #kill -s SIGKILL $cmdpid

done

exec /bin/login

So this script handles the switching of applications on the primary console, and I was able to add cacafire to the mix for a nice colored ASCII fire animation. But if I need to use the console to log in, I will have to hit Ctrl-Alt-F2 and switch over to tty2. That won't be the end of the world, lol. And in the meantime I have some fun console effects to keep my office interesting.

[Image: cacafire running on the console prior to the login prompt]

Did you like this article on how to run a program before the login prompt? If so you may like this article on how to change your hostname on CentOS.

How To Use Rsync Between Computers

If you are new to Rsync, please visit our How To Use Rsync – The Basics post. In it we break down what Rsync is and its basic usage. It will provide you with a good background to understand the details of using Rsync between computers.

Rsync Between Remote Computers

Although Rsync does a great job of synchronizing files between local folders, it really shines when working between remote computers. And if you are familiar with using ssh from the command line, you will find it relatively easy to use Rsync remotely.

The basic command is pretty simple, and so long as you have ssh available and rsync installed on the remote machine(s) this format will work.

From a remote source:

rsync [options] [user]@[source computer]:[source folder] [destination folder]

Or to a remote source:

rsync [options] [source folder] [user]@[destination computer]:[destination folder]

Or between two remote computers:

rsync [options] [user]@[source computer]:[source folder] [user]@[destination computer]:[destination folder]

Rsync Between Remote Computers with SSH Examples

rsync user@192.168.1.1:~/source/file /home/user/destination/
rsync /home/bdoga/source/file user@192.168.1.2:~/destination/
rsync user@192.168.1.1:~/source/file user@192.168.1.2:~/destination/

In these examples the “file” will be placed in the destination directory on either the local or remote computer. Also, for the remote machines you will notice that a single “:” colon was used. This tells rsync to use a remote shell, typically SSH, to make the connection, and it will fire up rsync on the remote side of the connection to handle the details. Alternatively, you can force the connection to use an rsync daemon by specifying a “::” double colon instead.

Using the native rsync protocol alone is a little faster because it doesn’t have the SSH encryption overhead, but the connection is not encrypted, so there are trade-offs to either option. I typically just use SSH since I already have it available and configured on my servers.
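
For reference, a daemon-style transfer uses the double colon and a module name defined in the remote rsyncd.conf; “backups” here is a hypothetical module name:

rsync -avzh user@192.168.1.1::backups/source/ /home/user/destination/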

Some more useful options

I already discussed the “-a” archive option in my Rsync Basics post, but it is my go-to option for ensuring an exact copy, permissions and all, is made. Now that we are connecting to a remote machine, the “-z” (compress) option gets a chance to shine. When you are transferring data over the internet you may not always have a fast connection, and compression can greatly reduce the bandwidth required to transfer your data.

Another option that is sometimes useful with remote connections is “-P” (equivalent to --progress --partial). This displays the current progress of the file being copied, and it keeps partial copies of files if a transfer gets interrupted during the sync. In my opinion the progress display is great if you are transferring larger files, but if you are moving lots of little files the output is not very useful, and the overhead of producing it can cause a noticeable slowdown in a transfer.

One additional pair of options is --include and --exclude. They are pretty self-explanatory, in that they allow you to include or exclude specific files from your sync. These options can be used to fine-tune what you are copying from a directory and ensure you only get what you want.

More Remote Computer Rsync Examples

rsync -avzhP user@192.168.1.1:/home/user/source/ /home/user/destination/
rsync -avzhP --exclude 'dbname*.sql' --include '*.sql' --exclude '*' user@192.168.1.1:/home/user/source/ /home/user/destination/

In the second example above only .sql files would be copied from the source, except for .sql files whose names start with “dbname”. Note that rsync applies the first filter rule that matches, so the exclude for “dbname*.sql” has to come before the broader include, and the final --exclude '*' keeps everything that didn’t match from being copied. You can add as many include and exclude entries as you need to get all the files you want in one go.

rsync -avzhP --include '*.html' --include '*.php' --exclude '*' user@192.168.1.1:/home/user/source/ /home/user/destination/

In this next example, all .html and .php files will be copied, but no other files.

Conclusion

Rsync continues to be a super useful utility in your systems administration toolkit. Now that you have a good understanding of its usage you are ready to tackle some of Rsync’s more advanced features, or learn how other programs like Rdiff-backup build upon it to create awesome tools. And a big thanks to some other sites we have referenced over the years. Check them out here and here.

How To Use Rsync – The Basics

Rsync is one of the most useful tools for a systems administrator. Regardless of what your specific role or responsibility is, at some point you are going to need to copy data from one place to another, and Rsync is the tool that will help you make a quick and accurate copy of your data. So in this post I hope to convey how to use Rsync, focusing on the basic uses that I find most helpful each day.

What is Rsync

Rsync was initially built as a basic clone of “rcp” (remote copy) with a handful of additional features. That handful of features has expanded over the years and made Rsync an indispensable tool. It can be used to copy files between directories on a local computer, or to copy files to and from remote systems. My favorite part of Rsync is its ability to quickly compare the source and target locations so that only new or changed files are transferred, saving time and bandwidth when copying large numbers of files.

So How do I Use Rsync?

The basic command is pretty simple: rsync [options] [source] [destination]. In this simple form you can easily copy data between local directories, e.g.:

rsync /home/bdoga/source/file /home/bdoga/destination/

This command will take “file” and place it inside the “/home/bdoga/destination/” directory. If you instead would like to copy all of the contents of one directory into another you simply need to add the “-r” (recursive) option, e.g.:

rsync -r /home/bdoga/source/ /home/bdoga/destination/

Thus all of the contents of “/home/bdoga/source/” will be copied into “/home/bdoga/destination”. It is important to note that if a file with an identical name exists in the destination, it will be overwritten. In addition, the “-r” option does not preserve ownership, permissions, or access/modification timestamps. That is where the next option, “-a” (archive), comes in.

It is also important to note that if you want to copy just the contents of the source directory, you must end the source path with a trailing “/”. If you omit the trailing “/”, Rsync will copy the specified directory itself, as well as its contents, into the destination rather than just the contents.
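
Here is a quick side-by-side sketch of the trailing slash behavior using the same example paths:

# copies only the contents of source/ into destination/
rsync -r /home/bdoga/source/ /home/bdoga/destination/

# copies the source directory itself, resulting in /home/bdoga/destination/source/
rsync -r /home/bdoga/source /home/bdoga/destination/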

The most useful options

The “-a” archive option not only copies files recursively, but also preserves permissions, ownership, timestamps, and symbolic links. I find this the most useful option because when I copy a source directory I typically want to be able to restore it with the permissions intact.

Another option that is sometimes useful, depending on the scenario, is “-z” (compress). It instructs Rsync to compress the data being transferred so it uses less bandwidth. It is not always useful when copying files over a Gigabit or faster LAN, but it can be helpful over a slower internet connection.

The next most useful option I frequently use is “-v” (verbose) which tells Rsync to give you more information about the files being transferred. This can be useful to see exactly what is being transferred. It also lets you know exactly what was and was not copied if there is an issue.

And then there is the “-h” (Human Readable) option which makes sure that all numbers/sizes are printed in an easily readable format. For instance rather than reporting that 856342348 bytes were transferred, it would report 816.67 MB were transferred.

All of these options can be used together as needed, as in this example, which recursively transfers the files while preserving their permissions and timestamps, giving verbose output, and compressing the data during transfer.

rsync -avzh /home/bdoga/source/ /home/bdoga/destination/

Sample Command Output

 
 root@bdoga:~/test# ls -lah test1
 total 4.0K
 drwxr-xr-x 3 root  root    76 Dec 21 18:33 .
 drwxr-xr-x 4 root  root    32 Dec 21 17:45 ..
 -rw-r--r-- 1 bdoga bdoga    7 Dec 21 17:47 bob
 -rw-r--r-- 1 bdoga bdoga    0 Dec 21 17:46 doug
 drwxr-xr-x 2 root  root    18 Dec 21 17:46 subdir
 -rw-r--r-- 1 bdoga bdoga  10M Dec 21 18:33 test.img
 -rw-r--r-- 1 bdoga bdoga 100M Dec 21 18:33 test2.img
 
 root@bdoga:~/test# rsync -avh ./test1/ ./test2
 sending incremental file list
 ./
 bob
 doug
 test.img
 test2.img
 subdir/
 subdir/file
 
 sent 115.37M bytes  received 122 bytes  46.15M bytes/sec
 total size is 115.34M  speedup is 1.00

 root@bdoga:~/test# rm -rf test2/*
 root@bdoga:~/test# rsync -avzh ./test1/ ./test2
 sending incremental file list
 ./
 bob
 doug
 test.img
 test2.img
 subdir/
 subdir/file
 
 sent 112.61K bytes  received 122 bytes  25.05K bytes/sec
 total size is 115.34M  speedup is 1,023.21
 
 root@bdoga:~/test# ls -lah test2
 total 111M
 drwxr-xr-x 3 root  root    76 Dec 21 18:33 .
 drwxr-xr-x 4 root  root    32 Dec 21 17:45 ..
 -rw-r--r-- 1 bdoga bdoga    7 Dec 21 17:47 bob
 -rw-r--r-- 1 bdoga bdoga    0 Dec 21 17:46 doug
 drwxr-xr-x 2 root  root    18 Dec 21 17:46 subdir
 -rw-r--r-- 1 bdoga bdoga  10M Dec 21 18:33 test.img
 -rw-r--r-- 1 bdoga bdoga 100M Dec 21 18:33 test2.img 

The above command output shows the contents of the source and destination directories, and also shows the difference between running rsync with and without the “-z” option.

Conclusion

Rsync will become a super useful part of your systems administration toolkit. Now that you have a basic understanding of how to use Rsync you are ready to see how to connect to a remote computer, or learn how other programs like Rdiff-backup build upon it to create awesome tools. And a big thanks to some other sites we have referenced over the years. Check them out here and here.

Change Your Hostname in CentOS 8

Changing your computer or server's hostname is an infrequent activity for most. But if you are like me, you will periodically provision a VM in haste and only realize after the provisioning is complete that you should have used a more descriptive hostname, or one that fits the theme of the other servers (Middle Earth, Stormlight Archive, planets, etc.). Sometimes that process can be tedious and leave you questioning whether you got it right. Fortunately it is easy to change your hostname in CentOS 8.

The ever useful “hostnamectl” command makes this a simple process. If you execute the command with no options it will give you the current hostname as well as many details about the system.

[bdoga@host ~]$ hostnamectl
   Static hostname: host.bdoga.local
         Icon name: computer-vm
           Chassis: vm
        Machine ID: b1ce9c049f6d4a9589ad540ae9aa1c43
           Boot ID: 1906ec0120c246aa84bd407e46a237b6
    Virtualization: kvm
  Operating System: CentOS Linux 8 (Core)
       CPE OS Name: cpe:/o:centos:centos:8
            Kernel: Linux 4.18.0-147.8.1.el8.lve.1.x86_64
      Architecture: x86-64

Change Your Hostname in CentOS 8

As shown in the example above, this server's hostname is “host.bdoga.local”. But I am ready for a change and want to start naming my servers with Stormlight Archive names. One of my favorite characters is Kaladin, and I want this server on my full domain “bdoga.com”. So to change the hostname to “kaladin.bdoga.com” I would issue the following command.

[bdoga@host ~]$ sudo hostnamectl set-hostname kaladin.bdoga.com

After issuing the command you will not see any sort of confirmation. You should just be greeted with an empty command prompt, but with your new hostname.

[bdoga@host ~]$ sudo hostnamectl set-hostname kaladin.bdoga.com
[bdoga@kaladin ~]$

And there you have it, you have changed your hostname in CentOS 8. This method should also work for Ubuntu 16.04+, Debian 8.0+, CentOS 7+, and other Systemd based systems.
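
If you want to double-check the change, running hostnamectl again (or asking it for just the static hostname) should report the new name; the output below is what you would expect to see:

[bdoga@kaladin ~]$ hostnamectl --static
kaladin.bdoga.com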

To learn some more details about this and other tools for changing your hostname on Centos 8 please visit linuxize’s post.

And feel free to check out some more of our content regarding CentOS based systems. Or visit some of our posts that will help you increase your Command Line prowess.

Fix Apt NO_PUBKEY Error

If you have used Debian, Ubuntu, Mint or any other Linux distribution that uses the APT package management system, you are sure to have run into the NO_PUBKEY error. It can be mildly frustrating, but fortunately it is easy to fix the APT NO_PUBKEY error and get your system back up and ready to roll.

What is the NO_PUBKEY error?

The APT NO_PUBKEY error shows up when one of your APT repositories is signed with a key your system does not recognize, for example because the repository's signing key has changed or was never added. If your local system or server does not have the correct public key it cannot verify the repository, and therefore you get the error. This check is in place to ensure you don’t accidentally download packages from an unknown APT source.

Fix the NO_PUBKEY error

There is a simple command that you can run to download the missing public key from one of the APT key servers. You will just need to replace the portion of the command that says “THE_MISSING_KEY_HERE” with the key that is reported in the error.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys THE_MISSING_KEY_HERE

So if you receive the following error

W: Failed to fetch http://ppa.launchpad.net/myrepository/apps/ubuntu/dists/bionic/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY EA8CACC073C3DB2A

you would run the following command to get the working public key for the apt repository.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EA8CACC073C3DB2A

After the key has been updated you can then run your “apt update” and it should complete successfully.

Fix Multiple Keys with One Command

The following command can be used to fix multiple NO_PUBKEY errors at once, or to fix a single NO_PUBKEY error without having to edit the command. It might be overkill, but it will still get the job done.

sudo apt update 2>&1 1>/dev/null | sed -ne 's/.*NO_PUBKEY //p' | while read key; do if ! [[ ${keys[*]} =~ "$key" ]]; then sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key"; keys+=("$key"); fi; done
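
If you prefer something more readable than the one-liner, here is the same loop spelled out; it is functionally equivalent, just easier to follow:

# Collect each missing key reported by apt and fetch it once
declare -a keys=()
sudo apt update 2>&1 1>/dev/null | sed -ne 's/.*NO_PUBKEY //p' | while read -r key; do
    # Skip keys already fetched during this run
    if ! [[ ${keys[*]} =~ "$key" ]]; then
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key"
        keys+=("$key")
    fi
done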

So now you know how to fix the APT NO_PUBKEY error. This will keep you up and running and ensure that you don’t fall behind on your package updates.

For additional details check out Linux Uprising's article about fixing NO_PUBKEY errors.

If you like this post, you might also like my post about how to Recursively Count the number of folders in a directory.

Change the SNMP Log Level in Ubuntu

The default SNMP settings on an Ubuntu server can end up filling your syslog file with tons of unnecessary entries, making it virtually impossible to sift through for anything actually useful. So it can be very advantageous to change the SNMP log level in Ubuntu.

I have a Cacti setup which I use to log and report on the details of many Linux and Windows servers. This tool is amazing and gives me great information to diagnose issues, or to catch them as they are progressing but before they become urgent. Sometimes it is just easier to see something when your data is represented visually.

Cacti relies on SNMP to grab data from the machines and devices it is monitoring. SNMP is an industry standard, supported by all major operating systems and network-enabled devices. But by default, at least in Ubuntu, the log level is set so high that every SNMP request that comes to the server is reported in your syslog file. Cacti polls lots of different SNMP records to build its graphs, and under those default settings it can leave dozens of entries in the syslog every 5 minutes. As you can imagine, this quickly fills up your log file and makes it virtually unusable. Fortunately we just need to make a quick adjustment to change the SNMP log level in Ubuntu. Here is a quick example of some of the syslog entries you may be receiving.

Jul 8 06:28:48 server snmpd[7885]: error on subcontainer 'ia_addr' insert (-1)
Jul 8 06:29:18 server snmpd[7885]: error on subcontainer 'ia_addr' insert (-1)
Jul 8 06:29:48 server snmpd[7885]: error on subcontainer 'ia_addr' insert (-1)
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:41028->[Current Host IP]:161
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:48694->[Current Host IP]:161
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:39372->[Current Host IP]:161
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:54823->[Current Host IP]:161

Change the SNMP Log Level in Ubuntu

The change is just a quick flag in the /etc/default/snmpd file which changes how the system logs SNMP requests. The different log levels that are available are:

0 or ! for LOG_EMERG
1 or a for LOG_ALERT
2 or c for LOG_CRIT
3 or e for LOG_ERR
4 or w for LOG_WARNING
5 or n for LOG_NOTICE
6 or i for LOG_INFO
7 or d for LOG_DEBUG

By default a log level is not set, so snmpd ends up logging at the info or debug level. I prefer to switch it to level 3 (LOG_ERR), which ensures that I still see any errors that come through but doesn’t tell me every time a connection is made. This change can be made very easily: just open up the /etc/default/snmpd file in your favorite editor and change the following line (Ubuntu 14.04 and 16.04).

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'

To look like this:

SNMPDOPTS='-LS3d -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'

The only part that changed was the “-Lsd” flag, which became “-LS3d”. The default entry is a little different between 14.04/16.04, 18.04 and 20.04, but I have included single commands you can copy and paste into your terminal to make the change.

Copy/Paste Command Line Changes

For Ubuntu 14.04 and 16.04:

sed -i -- "s@SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@SNMPDOPTS='-LS3d -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@g" /etc/default/snmpd
service snmpd restart

In Ubuntu 18.04:

sed -i -- "s@SNMPDOPTS='-Lsd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@SNMPDOPTS='-LS3d -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@g" /etc/default/snmpd
service snmpd restart

Finally Ubuntu 20.04:

sed -i -- "s@#SNMPDOPTS='-LSwd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@SNMPDOPTS='-LS3d -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@g" /etc/default/snmpd
service snmpd restart
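
To confirm the change took effect, you can watch your syslog for a few polling cycles; after the restart the connection entries should stop appearing:

tail -f /var/log/syslog | grep snmpd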

So there you go, now you can stop those annoying error log messages from filling up your syslog file. A big thanks to this ServerFault post on the subject for helping me figure it out.

Make a Full Disk Backup with DD

Recently I had a drive that was showing the early warning signs of failure, so I decided I had better make a backup copy of the drive and then push that image onto another drive before it failed. As it turned out, the drive was fine; it was the SATA cable that was failing. But the process reminded me of what a useful tool dd is, refreshed my knowledge of how to use it, and reminded me how to make a full disk backup with dd.

What is DD?

DD stands for “Data Definition” and has been around since about 1974. It can be used to read, write and convert data between files, partitions and other block-level devices. As a result dd can be used effectively for copying the contents of a partition, obtaining a fixed amount of random data from /dev/random, or performing a byte-order transformation on data.

So Lets Make a Full Disk Backup with DD

I will start with the command I used to make a full disk backup with dd, and then give you a breakdown of the different command elements to help you understand what it is doing.

dd if=/dev/sdc conv=sync,noerror status=progress bs=64K | gzip -c > backup_image.img.gz

The command options break down like this:

if=/dev/sdc this defines the “input file” which in this case is the full drive “/dev/sdc”. You could do the same with a single partition like “/dev/sdc1”, but I want all the partitions on the drive stored in the same image.

conv=sync,noerror the “sync” part tells dd to pad any short or unreadable block with nulls, so that the data that follows stays at the correct offset in the image. The “noerror” portion prevents dd from stopping when a read error is encountered. The “sync” and “noerror” options are almost always used together.

status=progress tells the command to regularly give an update on how much data has been copied. Without this option the command will still run but it won’t give any output until the command is complete. So making a backup of a very large drive could sit for hours before letting you know it is done. With this option a line like this is constantly updated to let you know how far along the process has gone.

1993998336 bytes (2.0 GB, 1.9 GiB) copied, 59.5038 s, 33.5 MB/s

bs=64K specifies that the “Block Size” of each chunk of data processed will be 64 Kilobytes. The block size can greatly affect the speed of the copy process. A larger block size will typically accelerate the copy process unless the block size is so large that it overwhelms the amount of RAM on your computer.

Making a compressed backup image file

At this point you could use the “of=/dev/sdb” option to output the contents directly to another drive /dev/sdb. But I opted to make an image file of the drive, and piping the dd output through gzip allowed me to compress the resulting image into a much smaller image file.

| gzip -c pipes the output of dd into the gzip command and writes the compressed data to stdout. Other options could be added here to change the compression ratio, but the default compression was sufficient for my needs.

> backup_image.img.gz redirects the output of the gzip command into the backup_image.img.gz file.

With that command complete I had copied my 115GB drive into a 585MB compressed image. Most of the drive had been empty space, but without the compression the image would have been 115GB. So this approach can make a lot of sense if you are planning on keeping the image around. If you are just copying from one drive to another then no compression is needed.
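
For completeness, the direct drive-to-drive variant mentioned above would look something like this; it assumes /dev/sdb is the target disk and that everything on it can be overwritten:

dd if=/dev/sdc of=/dev/sdb conv=sync,noerror status=progress bs=64K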

So there you have it, the process of making a full disk backup with dd. But I guess that is only half the story, so now I will share the command I used to restore that image file to another drive with dd.

Restoring a Full Drive Backup with DD

Fortunately the dd restore process is a bit more straightforward than the backup process. So without further ado, here is the command.

gunzip -c backup_image.img.gz | dd of=/dev/sdc status=progress

gunzip -c backup_image.img.gz right off the bat “gunzip” starts decompressing the file “backup_image.img.gz” and the “-c” sends the decompressed output to stdout.

| dd of=/dev/sdc pipes the output from gunzip into the dd command which is only specifying the “output file” of “/dev/sdc”.

status=progress again this option displays some useful stats about how the dd process is proceeding.

Once dd has completed the transfer you should be good to go. But there are a couple of caveats to remember. First, the drive you restore to should be the same size or larger than the backed-up drive. Second, if the restore drive is larger you will end up with unused space after the restore is complete; for example, a 115GB image restored to a 200GB drive will result in the first 115GB of the drive being usable and 85GB of free space at the end of the drive. So you may want to expand the restored partition(s) to fill the extra space on the new drive with parted or a similar tool, as sketched below. Lastly, if you use a smaller drive for the restore, dd will not warn you that the image won't fit; it will just start copying and fail when it runs out of space.
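
Here is a rough sketch of that last step using parted; it assumes the restored data is on partition 1 of /dev/sdc and that the filesystem is ext4, so adjust for your own layout:

# Grow partition 1 to use the rest of the disk, then grow the filesystem to match
parted /dev/sdc resizepart 1 100%
resize2fs /dev/sdc1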

Conclusion

DD is an amazing tool that has been around for a while. And it continues to be relevant and useful each day. It can get you out of a bind and save your data, so give it a whirl and see what it can help you with today.

Here are a couple resources that I referenced to help me build my dd command. A guide on making a full metal backup with dd. And a general DD usage guide.