Change Your Hostname in CentOS 8

Changing your computer or server's hostname is an infrequent activity for most. But if you are like me, you will periodically provision a VM in haste and only realize after the provisioning is complete that you should have used a more descriptive hostname, or one that fits the theme of your other servers (Middle Earth, Stormlight Archive, planets, etc.). Sometimes that process can be tedious and leave you questioning whether you got it right. Fortunately it is easy to change your hostname in CentOS 8.

The ever useful "hostnamectl" command makes this a simple process. If you execute the command with no options, it will show you the current hostname along with a number of details about the system.

[bdoga@host ~]$ hostnamectl
   Static hostname: host.bdoga.local
         Icon name: computer-vm
           Chassis: vm
        Machine ID: b1ce9c049f6d4a9589ad540ae9aa1c43
           Boot ID: 1906ec0120c246aa84bd407e46a237b6
    Virtualization: kvm
  Operating System: CentOS Linux 8 (Core)
       CPE OS Name: cpe:/o:centos:centos:8
            Kernel: Linux 4.18.0-147.8.1.el8.lve.1.x86_64
      Architecture: x86-64

Change Your Hostname in CentOS 8

As shown in the example above, this server's hostname is "host.bdoga.local". But I am ready for a change and want to start naming my servers after Stormlight Archive characters. One of my favorites is Kaladin, and I want this server on my full domain, "bdoga.com". So to change the hostname to "kaladin.bdoga.com" I would issue the following command.

[bdoga@host ~]$ sudo hostnamectl set-hostname kaladin.bdoga.com

After issuing the command you will not see any sort of confirmation. You should simply be greeted with a fresh command prompt showing your new hostname.

[bdoga@host ~]$ sudo hostnamectl set-hostname kaladin.bdoga.com
[bdoga@kaladin ~]$
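
If you want to double-check that the change stuck, running "hostnamectl" again with no options should now report the new name on the "Static hostname" line, and the static hostname stored in /etc/hostname is updated as well. Here is a quick sketch of that check, with the output trimmed to the relevant line:

[bdoga@kaladin ~]$ hostnamectl
   Static hostname: kaladin.bdoga.com
[bdoga@kaladin ~]$ cat /etc/hostname
kaladin.bdoga.com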

And there you have it, you have changed your hostname in CentOS 8. This method should also work on Ubuntu 16.04+, Debian 8.0+, CentOS 7+, and other systemd-based systems.

To learn more about this and other tools for changing your hostname on CentOS 8, please visit linuxize's post.

And feel free to check out some more of our content regarding CentOS-based systems, or visit some of our posts that will help you increase your command line prowess.

Fix Apt NO_PUBKEY Error

If you have used Debian, Ubuntu, Mint, or any other Linux distribution that uses the APT package management system, you are sure to have run into the NO_PUBKEY error. It can be marginally frustrating, but fortunately it is easy to fix the APT NO_PUBKEY error and get your system back up and ready to roll.

What is the NO_PUBKEY error?

The APT NO_PUBKEY error shows up when the key pair used to sign one of your APT repositories has changed. When this happens, if your local system or server does not have the correct public key, it cannot verify the repository, and you get the error. This check is in place to ensure you don't accidentally download packages from an unknown APT source.

Fix the NO_PUBKEY error

There is a simple command that you can run to download the missing public key from one of the APT key servers. You will just need to replace the portion of the command that says “THE_MISSING_KEY_HERE” with the key that is reported in the error.

sudo apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys THE_MISSING_KEY_HERE

So if you receive the following error

W: Failed to fetch http://ppa.launchpad.net/myrepository/apps/ubuntu/dists/bionic/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY EA8CACC073C3DB2A

you would run the following command to get the working public key for the apt repository.

sudo apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys EA8CACC073C3DB2A

After the key has been updated you can then run your “apt update” and it should complete successfully.

Fix Multiple Keys with One Command

The following command can fix multiple NO_PUBKEY errors in one go, or fix a single NO_PUBKEY error without your having to edit the command. It might be overkill, but it will still get the job done.

sudo apt update 2>&1 1>/dev/null | sed -ne 's/.*NO_PUBKEY //p' | while read key; do if ! [[ ${keys[*]} =~ "$key" ]]; then sudo apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys "$key"; keys+=("$key"); fi; done
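
If that one-liner is hard to follow, here is the same logic spread across multiple lines with comments. This is purely a readability sketch of the command above, not a different technique; the key server and apt-key usage are unchanged.

sudo apt update 2>&1 1>/dev/null | sed -ne 's/.*NO_PUBKEY //p' | while read key; do
    # only fetch each missing key once
    if ! [[ ${keys[*]} =~ "$key" ]]; then
        # grab the missing public key from the key server
        sudo apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys "$key"
        keys+=("$key")
    fi
done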

So now you know how to fix the APT NO_PUBKEY error. This will keep you up and running and ensure that you don't fall behind on your package updates.

For additional details check out Linux Uprising's article about fixing NO_PUBKEY errors.

If you like this post, you might also like my post about how to Recursively Count the number of folders in a directory.

Change the SNMP Log Level in Ubuntu

The default SNMP settings for an Ubuntu server can end up filling your syslog file with tons of unnecessary entries. This makes it virtually impossible to sift through for anything that is actually useful. So it can be very advantageous to change the SNMP log level in Ubuntu.

I have a Cacti setup which I use to log and report on the details of many Linux and Windows servers. This tool is amazing and gives me great information to diagnose issues, or to catch problems as they are developing but before they become urgent. Sometimes it is just easier to see something when your data is represented visually.

Cacti relies upon SNMP to grab data from the machines or devices that it is monitoring. SNMP is an industry standard, supported by all major operating systems and network-enabled devices. But by default, at least in Ubuntu, the log level is set so high that every SNMP request that comes to the server is reported in your syslog file. Cacti polls lots of different SNMP records to build its graphs, and under those default settings it can leave dozens of entries in the syslog every 5 minutes. As you can imagine, this quickly fills up your log file and makes it virtually unusable. Fortunately we just need to make a quick adjustment in order to change the SNMP log level in Ubuntu. Here is a quick example of some of the syslog entries that you may be receiving.

Jul 8 06:28:48 server snmpd[7885]: error on subcontainer 'ia_addr' insert (-1)
Jul 8 06:29:18 server snmpd[7885]: error on subcontainer 'ia_addr' insert (-1)
Jul 8 06:29:48 server snmpd[7885]: error on subcontainer 'ia_addr' insert (-1)
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:41028->[Current Host IP]:161
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:48694->[Current Host IP]:161
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:39372->[Current Host IP]:161
Jul 8 06:30:02 server snmpd[7885]: Connection from UDP: [Originating IP]:54823->[Current Host IP]:161

Change the SNMP Log Level in Ubuntu

The change is just a quick flag in the /etc/default/snmpd file which changes how the system logs SNMP requests. The different log levels that are available are:

0 or ! for LOG_EMERG
1 or a for LOG_ALERT
2 or c for LOG_CRIT
3 or e for LOG_ERR
4 or w for LOG_WARNING
5 or n for LOG_NOTICE
6 or i for LOG_INFO
7 or d for LOG_DEBUG

By default no log level is set, so it is dumping entries at the info or debug level. I prefer to switch it to level 3 (LOG_ERR), which ensures that I still see any errors that come through but doesn't tell me every time a connection is made. This change is easy to make: just open the /etc/default/snmpd file in your favorite editor and change the following line (Ubuntu 14.04 and 16.04):

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'

To look like this:

SNMPDOPTS='-LS3d -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'

The only part that changed was the "-Lsd" flag, which became "-LS3d". The default entry is a little different between 14.04/16.04, 18.04, and 20.04, so I have included a few one-line commands you can copy/paste into your terminal to make the change.

Copy/Paste Command Line Changes

For Ubuntu 14.04 and 16.04:

sed -i -- "s@SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@SNMPDOPTS='-LS3d -Lf /dev/null -u snmp -g snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@g" /etc/default/snmpd
service snmpd restart

In Ubuntu 18.04:

sed -i -- "s@SNMPDOPTS='-Lsd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@SNMPDOPTS='-LS3d -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@g" /etc/default/snmpd
service snmpd restart

Finally Ubuntu 20.04:

sed -i -- "s@#SNMPDOPTS='-LSwd -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@SNMPDOPTS='-LS3d -Lf /dev/null -u Debian-snmp -g Debian-snmp -I -smux,mteTrigger,mteTriggerConf -p /run/snmpd.pid'@g" /etc/default/snmpd
service snmpd restart
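
Whichever version you are on, it is worth a quick sanity check that the change took effect and the chatter has stopped. A minimal sketch, assuming your syslog lives at the default /var/log/syslog location:

# confirm the new -LS3d flag is in place
grep SNMPDOPTS /etc/default/snmpd

# watch the log through a few polling cycles; the connection/subcontainer noise should no longer appear
tail -f /var/log/syslog | grep snmpd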

So there you go, now you can stop those annoying error log messages from filling up your syslog file. A big thanks to this ServerFault post on the subject for helping me figure it out.

Make a Full Disk Backup with DD

Recently I had a drive that was showing the early warning signs of failure, so I decided I had better make a backup copy of the drive and then push that image onto another drive before it failed completely. As it turned out, the drive was fine; it was the SATA cable that was failing. But the process reminded me of what a useful tool dd is, refreshed my knowledge of how to use this remarkable tool, and reminded me how to make a full disk backup with dd.

What is DD?

DD stands for "Data Definition", and it has been around since about 1974. It can be used to read, write, and convert data between filesystems, folders, and other block-level devices. As a result dd can be used effectively for copying the contents of a partition, obtaining a fixed amount of random data from /dev/random, or performing a byte order transformation on data.

So Let's Make a Full Disk Backup with DD

I will start with the command I used to make a full disk backup with dd, and then give you a breakdown of the different command elements to help you understand what it is doing.

dd if=/dev/sdc conv=sync,noerror status=progress bs=64K | gzip -c > backup_image.img.gz

The command options break down like this:

if=/dev/sdc this defines the “input file” which in this case is the full drive “/dev/sdc”. You could do the same with a single partition like “/dev/sdc1”, but I want all the partitions on the drive stored in the same image.

conv=sync,noerror the "sync" part tells dd to pad each block with nulls, so that if there is a read error and a full block cannot be read, the block is padded out and the data offsets in the image stay correct. The "noerror" portion prevents dd from stopping when an error is encountered. The "sync" and "noerror" options are almost always used together.

status=progress tells the command to regularly give an update on how much data has been copied. Without this option the command will still run but it won’t give any output until the command is complete. So making a backup of a very large drive could sit for hours before letting you know it is done. With this option a line like this is constantly updated to let you know how far along the process has gone.

1993998336 bytes (2.0 GB, 1.9 GiB) copied, 59.5038 s, 33.5 MB/s

bs=64K specifies that the “Block Size” of each chunk of data processed will be 64 Kilobytes. The block size can greatly affect the speed of the copy process. A larger block size will typically accelerate the copy process unless the block size is so large that it overwhelms the amount of RAM on your computer.

Making a compressed backup image file

At this point you could use the “of=/dev/sdb” option to output the contents directly to another drive /dev/sdb. But I opted to make an image file of the drive, and piping the dd output through gzip allowed me to compress the resulting image into a much smaller image file.

| gzip -c pipes the output of dd into the gzip command and writes the compressed data to stdout. Other options could be added here to change the compression ratio, but the default compression was sufficient for my needs.

> backup_image.img.gz redirects the output of the gzip command into the backup_image.img.gz file.

With that command complete I had copied my 115GB drive into a 585MB compressed image. Most of the drive had been empty space, but without the compression the image would have been 115GB. So this approach can make a lot of sense if you are planning on keeping the image around. If you are just copying from one drive to another then no compression is needed.
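
For reference, that drive-to-drive case would look something like the sketch below. It is simply the same command with the gzip pipeline swapped for an "of=" output device; /dev/sdb is a placeholder, so triple-check the target device, because dd will overwrite it without asking.

# clone /dev/sdc directly onto /dev/sdb (this destroys everything currently on /dev/sdb)
dd if=/dev/sdc of=/dev/sdb conv=sync,noerror status=progress bs=64K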

So there you have it, the process of making a full disk backup with dd. But I guess that is only half the story, so now I will share the command I used to restore that image file to another drive with dd.

Restoring a Full Drive Backup with DD

Fortunately the dd restore process is a bit more straightforward than the backup process. So without further ado, here is the command.

gunzip -c backup_image.img.gz | dd of=/dev/sdc status=progress

gunzip -c backup_image.img.gz right off the bat “gunzip” starts decompressing the file “backup_image.img.gz” and the “-c” sends the decompressed output to stdout.

| dd of=/dev/sdc pipes the output from gunzip into the dd command which is only specifying the “output file” of “/dev/sdc”.

status=progress again this option displays some useful stats about how the dd process is proceeding.

Once dd has completed the transfer you should be good to go. But there are a couple of caveats to remember. First, the drive you restore to should be the same size or larger than the backup drive. Second, if the restore drive is larger, you will end up with empty space after the restore is complete; ie: a 115GB image restored to a 200GB drive will result in the first 115GB of the drive being usable, with 85GB of free space at the end of the drive. So you may want to expand the restored partition(s) to fill up the extra space on the new drive with parted or a similar tool. Lastly, if you use a smaller drive for the restore, dd will not warn you that the image won't fit; it will just start copying and will fail when it runs out of space.
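
If you do land in that larger-drive scenario, growing the restored partition is usually a two-step job: grow the partition table entry, then grow the filesystem inside it. Here is a rough, hedged sketch, assuming the restored disk is /dev/sdc, the partition is number 1, and it holds an ext4 filesystem (an XFS filesystem would use xfs_growfs on the mounted path instead):

# grow partition 1 to use all remaining space on the disk
parted /dev/sdc resizepart 1 100%
# then grow the ext4 filesystem to fill the enlarged partition
resize2fs /dev/sdc1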

Conclusion

DD is an amazing tool that has been around for a long time, and it continues to be relevant and useful today. It can get you out of a bind and save your data, so give it a whirl and see what it can help you with.

Here are a couple of resources that I referenced to help me build my dd command: a guide on making a full metal backup with dd, and a general DD usage guide.

How To Compress Mysqldump Output

If you read my previous writeup on dumping all mysql databases you will recognize some of this information. I wanted to pay some specific attention to the different methods for how to compress mysqldump output.

Obviously, compressing your MySQL database exports can have some major benefits. The biggest benefit is the reduced file size. MySQL databases, and really all databases, have a tendency to grow to large sizes. Even small websites can quickly find hundreds of megabytes worth of data in their database. Storing large database export files in your backups can eat up disk space pretty rapidly. Compressing your mysqldump output can reduce the size of your export file by a factor of seven or more.

If you need to keep individual database backups then compression really makes sense. But if you are using something like rdiff-backup then it makes more sense to skip the compression. Rdiff-backup is unable to do a diff on the compressed data, so it won’t save the space you expect.

Basic Mysqldump Compression Commands

Here are a couple of different variations of mysqldump piped compression commands, which we will break down.

1: mysqldump -u dbUser -p DBName > OutputFile.sql
2: mysqldump -u dbUser -p DBName | gzip > OutputFile.sql.gz
3: mysqldump -u dbUser -p DBName | gzip -9 > OutputFile.sql.gz
4: mysqldump -u dbUser -p DBName | zip > OutputFile.sql.zip
5: mysqldump -u dbUser -p DBName | bzip2 > OutputFile.sql.bz2

In these examples the same database is being exported by each command, but with a few differences: command #1 uses no compression, #2 uses gzip with its default settings, #3 uses gzip with maximum compression, #4 uses zip, and finally #5 uses bzip2 to perform its compression.

Compression Commands Comparison

Testing the commands above on the same database and on the same hardware yielded the following results.

Command    Filesize    Output Time
#1         391MB       13.827s
#2         57MB        16.122s
#3         55MB        32.357s
#4         57MB        16.169s
#5         44MB        1m 18.701s

Output Mysql Database command results

The table above shows the effectiveness of each compression method on the same dataset. The first command sets the baseline for data export with no compression. Gzip applies basic compression and gives a significant size reduction with a very small speed hit. It comes in just a hair faster than zip with about the same compression results.

Adding the -9 to the Gzip command in #3 doubles the output time, and only provides 2MB of space savings. But then Bzip2 weighs in on command #5 taking an extra minute over Gzip or Zip. That extra minute was required to pack the file small enough to rescue another 13MB of space.
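
One more thing worth keeping handy before the conclusions: getting the data back out of a compressed dump. A minimal sketch, assuming a gzip-compressed file named like the examples above and an existing, empty target database:

# decompress the dump and feed it straight back into mysql
gunzip -c OutputFile.sql.gz | mysql -u dbUser -p DBName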

Compress Mysqldump Output Conclusions

If you can compress your database output, then you will see significant space savings in your backup storage. Even if backup speed is essential, gzip or zip offer a major reduction in size for minimal extra time. And if time is not a major issue then going with bzip2 will give you much larger space savings in exchange.

Understanding and utilizing compression as part of your backup methodology is an essential element for storage success. Proper implementation ensures that you save the needed space and reduce backup transfer time, especially in the event that you need to transfer your backup over a slow network connection. Compression will come to your aid and save the day, so don't hesitate to compress mysqldump output; it might be just what the doctor ordered.

Further Reading

For additional details and info check out this post which talks more about Compressing Mysqldump Output

Dump All MySQL Databases into Individual SQL Files

Anyone who is responsible for managing a MySQL database will eventually run into this problem: you either need to dump all your MySQL databases for a backup, or to prepare for an upgrade. Whatever your circumstances, there are several different methods that can be employed to dump your MySQL DBs. Hopefully I can give you the basic tools to get you on your way.

Dump All DBs into a Single SQL File

This first method is what I would call the quick and dirty method. It is straightforward and just dumps all of the databases on a server into a single SQL file. For many individuals this is sufficient, but it may not be a good option for larger databases or backup/restore processes, since all of the DBs are bundled into a single file. Each of the values in brackets "[]" is a placeholder for your own values.

mysqldump -u [username] -p --all-databases > allDB.sql

If all you need is to get a quick backup of everything, this may be your ticket. The "--all-databases" flag does the magic here, dumping all of the MySQL databases into a single SQL file. More details on this method can be found here. But if you want to be able to easily restore an individual database, you may want to use an approach like this.

Dump All MySQL Databases into Individual Files

for I in $(mysql -u [username] -p[mypassword] -h [Hostname/IP] -e 'show databases' -s --skip-column-names); do mysqldump -u [username] -p[mypassword] -h [Hostname/IP] $I > "/home/user/$I.sql"; done

This command first calls “mysql” and gets a list of databases on the server. The command feeds that list of MySQL databases into “mysqldump” to get an individual SQL file for each database on the server. Finally those SQL files are then saved in the location indicated, “/home/user/dbname.sql” in this example.

A few things to note about this command. First, you will notice that I have included the password in the command; mysql will complain that using the password on the command line is not safe, but it is required for the command to run without prompting. The "-p" is immediately followed by the password, without a space. That is how the option functions; if you add a space it will not work.

The "$I" is a variable that holds each database name as the loop iterates through your listing of DBs. So as you modify the command to fit your specific setup, just make sure to keep it consistent.

Additional Mysqldump Options

There are a couple of additional mysqldump options that you may want to add depending on your requirements.

--single-transaction

This option keeps mysqldump from trying to get a complete lock on the database. It tells mysqldump to just grab a single transaction and dump the DB contents at the time of that transaction. This can be especially helpful when you are dumping a DB that is especially large, or on a very busy server. Without this option mysqldump may timeout while waiting to get a lock on the DB.

--default-character-set=utf8mb4 

If you are working with data that may contain emoticons you will want this flag. This ensures that your dumped sql file has the correct information to recreate the emoticons when the file gets reimported. Without this option, your emoticons will show up as strange/random text characters when you restore your backup.
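
To tie those options together, here is a hedged sketch of the per-database loop from earlier with both flags added. Everything in brackets is still a placeholder for your own values; the two extra options are the only change from the command above.

for I in $(mysql -u [username] -p[mypassword] -h [Hostname/IP] -e 'show databases' -s --skip-column-names); do mysqldump --single-transaction --default-character-set=utf8mb4 -u [username] -p[mypassword] -h [Hostname/IP] $I > "/home/user/$I.sql"; done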

Adding Compression

I don’t typically use compression on my SQL dumps since I like to use rdiff-backup as my backup mechanism. But for those who would like to compress their mysqldump in one step here is the basic gist of it.

mysqldump -u [username] -p --all-databases | gzip > allDB.sql.gz

You just pipe the output from the mysqldump command into gzip, or bzip2 to compress the contents. That can be easily added to the command above to dump all MySQL databases into individual files like so.

for I in $(mysql -u [username] -p[mypassword] -h [Hostname/IP] -e 'show databases' -s --skip-column-names); do mysqldump -u [username] -p[mypassword] -h [Hostname/IP] $I | gzip > "/home/user/$I.sql.gz"; done

Hopefully these examples help you get the backups you need. Have some fun along the way.

How to Use Rdiff-backup – A Simply Powerful Backup Tool

In this post I hope to help you understand how to use Rdiff-backup. I stumbled across Rdiff-backup several years ago, and it has helped me streamline and simplify most of my file-based backup processes on Linux servers. As the name implies, a diff is performed on the files being backed up, so only the differences get backed up. As you can probably guess, only storing the diffs of changes can lead to a much smaller backup.

Rdiff-backup has several advantages: it utilizes the rsync algorithm, so it can quickly and easily mirror a directory, and backups can happen in seconds if there have been few changes. Your data all travels over a secure SSH connection, so your files are safe during transport. And being a simple command line tool, it can easily be scripted into a backup scenario or just used straight from your terminal.

How I Use Rdiff-backup

I typically use Rdiff-backup with my web hosting clients; it allows for quick backup and restore of their web files. And because most of the files don't change from day to day, the backup is lightning fast.

rdiff-backup --exclude '**cache/' --exclude '**debug.log' /var/www user@ipAddress::/home/user/backup
rdiff-backup --remove-older-than 52W user@ipAddress::/home/user/backup

Let's break down the command(s). The first Rdiff-backup command has two "--exclude" options, which, as indicated, exclude the referenced files or directories from the backup. The exclude pattern can either be a full relative path or, as in this case, use "**" to match any path. A single asterisk "*" could also be used, but it matches any part of a path that does not contain a "/".

The "/var/www" part of the command is the directory to back up. The destination uses standard SSH authentication, with "user@ipAddress" (ie: "bob@10.1.1.1") for the login credentials, followed by a double colon, which is different from normal rsync/scp formatting. The "::" precedes the backup destination directory.

So to explain in basic terms, the first command will back up everything under the "/var/www" directory, except any directory ending in "cache/" or file ending in "debug.log". The backup will be made in the "/home/user/backup" directory.

Backup Lifecycle

Now that you have a backup created, how do you manage how long it will be kept? Without a command like the second one above, the Rdiff-backup data will be kept indefinitely. Adding the second command helps us manage how long to keep the backups.

rdiff-backup --remove-older-than 52W user@ipAddress::/home/user/backup

This command is a little simpler than the first: it doesn't specify a source directory, but rather only how long to keep files. The "--remove-older-than" option can take a number of different arguments. I like to keep my backups for a year, so "52W" gives me 52 weeks of backups. Any existing increments older than the specified time are removed. Other options are s, m, h, D, W, M, or Y (indicating seconds, minutes, hours, days, weeks, months, or years respectively). Additionally a "B" can be used to indicate a number of backups, ie: "3B" would keep the last three backups.

After specifying the number or timeframe of backups to keep, the only other thing to specify is the backup location. The trimming of backup content is then performed on the given location.
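
Since both commands run non-interactively (assuming SSH key authentication is set up for the backup user), they are easy to drop into cron. Here is a minimal sketch of a nightly run at 2am in a hypothetical /etc/cron.d/rdiff-backup file; the paths, user, and IP are just the placeholders used throughout this post.

# hypothetical /etc/cron.d/rdiff-backup entry: back up nightly, then trim anything older than a year
0 2 * * * root rdiff-backup --exclude '**cache/' --exclude '**debug.log' /var/www user@ipAddress::/home/user/backup && rdiff-backup --remove-older-than 52W user@ipAddress::/home/user/backup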

Performing a Restore

So now you have your content backed up, and you need to restore something. A backup is only as good as the data you can restore out of it, right? Fortunately the restore process is fairly simple as well.

rdiff-backup -r 5D user@ipAddress::/home/user/backup/example.com/index.php /home/localUser/www/restore/

The “-r” option tells Rdiff-backup to restore files, and uses the same time format as the delete option above. In this case we are restoring a version of the file “example.com/index.php” from 5 days ago. The file is being restored/copied to “/home/localUser/www/restore/”. The same can be done for an entire directory structure.

rdiff-backup -r 3B user@ipAddress::/home/user/backup/example.com/ /home/localUser/www/restore/

This command restores/copies all the contents of the example.com directory to “/home/localUser/www/restore/” from 3 Backups ago. Or if you are hunting for a specific day you can always do something like this.

rdiff-backup -r 03-05-2020 user@ipAddress::/home/user/backup/example.com/ /home/localUser/www/restore/

That will perform the same restore, but specifically as of the 5th of March 2020. The date used can be “03/05/2020” or “2020-03-05”, and all indicate midnight as of that day.
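
If you are not sure which dates are actually available to restore from, Rdiff-backup can list the increments it is storing for a given backup location. A quick sketch using the same example destination:

rdiff-backup --list-increments user@ipAddress::/home/user/backup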

For a full rundown on all the options and other details for Rdiff-backup check out the project documentation.

There you have a basic rundown on how to use Rdiff-backup. I find it a very useful and powerful tool, and hope it will help you keep your backups running.

Find Files Not Owned By Specific User or Group

I recently ran into an issue where I needed to search recursively through a file structure and find files that were not owned by a specific user or group. The issue came up because a client would run a recursive chown on a directory before running git, and as a result of the chown they would periodically see the process hang. The chown was there to ensure that all the files were writable and wouldn't gum up the git process, but the hung chown command would cause extra load on the server and lead to system instability.

These files were shared between a few web servers on an NFS share. After some lengthy research I am still stumped as to exactly what the source of the issue is. But it led me to craft a command that would allow the client to easily check permissions on their files without forcing a change to the ownership, which seemed to possibly be the cause of the hanging process.

Find files not owned by specific user

The command was created using some help from this site and this site.

find . \( ! -group web -or ! -user web \) -printf "%p - user:%u group:%g\n"

The find command searches from the "." current directory and recursively checks all files and folders. The "! -group web" section tells the find command to check for files that are NOT "!" owned by the "web" group, and "! -user web" specifies files that are NOT "!" owned by the "web" user. The "-or" tells find to match files where either test is true, and the escaped parentheses "\(" and "\)" group the two tests together so the -printf action runs for every match. Changing the "-or" to "-and" would only show files where neither the user nor the group is "web".

The final section '-printf "%p - user:%u group:%g\n"' tells find how to output the results. "%p" outputs the filename and relative directory structure. "%u" and "%g" output the user and group of the file that is found. Some sample output would look like this.

./logs/access.log.2016_09_30.03.gz - user:root group:web
./logs/access.log.2016_10_04.09.gz - user:root group:web
./logs/access.log.2016_10_06.32.gz - user:root group:web
./logs/access.log.2016_10_04.38.gz - user:root group:web

This command easily helps you determine if your ownership is set correctly and identify which files will need to have their ownership changed. It is more lightweight than a recursive chown and doesn't force any changes to the filesystem when they are unneeded.
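
And if you do eventually decide the ownership needs correcting, the same find expression can drive a targeted chown that only touches the mismatched files, rather than the blanket recursive chown that was causing trouble in the first place. A hedged sketch, assuming "web:web" is the ownership you actually want:

# change ownership only on files and directories that are not already owned by web:web
find . \( ! -group web -or ! -user web \) -exec chown web:web {} +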

How to Fix: “warning: device is not properly aligned” with Parted

Because you are on this page, I assume you have received the same warning I did: "warning: device is not properly aligned". I allocated a new 1 TB iSCSI drive and attached it to the initiating server. After the iSCSI device was connected, I partitioned it to use 100% of the disk and then attempted to format the partition with the XFS file system using the following command.

mkfs.xfs /dev/sdb1 

I immediately got the following response:

warning: device is not properly aligned /dev/sdb1
Use -f to force usage of a misaligned device 

Finding the fix for “device is not properly aligned”

So I did some digging and came across this article about how to properly align the partition. I followed the post to grab the following values from the device.

[root@server ~]# cat /sys/block/sdb/queue/optimal_io_size
1048576
[root@server ~]# cat /sys/block/sdb/queue/minimum_io_size
131072
[root@server ~]# cat /sys/block/sdb/alignment_offset
0
[root@server ~]# cat /sys/block/sdb/queue/physical_block_size
131072 

I followed the formula that was outlined, (optimal_io_size + alignment_offset) / physical_block_size, and got the following result: (1048576 + 0) / 131072 = 8. I plugged that information into parted, with mixed results.

(parted) mklabel gpt                                                      
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes 
                                                              
(parted) mkpart primary 8s 100%
Warning: You requested a partition from 4096B to 1074GB.                  
The closest location we can manage is 17.4kB to 1074GB.
Is this still acceptable to you?
Yes/No? Yes     
                                                          
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore 
                                                    
(parted) align-check optimal 1                                            
1 not aligned

Making Sense of the Parted Alignment Response

So my drive was still not properly aligned, but I had some better info which would help me fix the problem. Let's break down the Parted commands.

(parted) mklabel gpt 

Destroyed the existing partition table and cleared things out to start fresh.

(parted) mkpart primary 8s 100%

This command told Parted to create a new primary partition starting at sector 8 and use the rest of the drive. The response that came from this command gave me the clue that I needed to fix the alignment.

Warning: You requested a partition from 4096B to 1074GB.
The closest location we can manage is 17.4kB to 1074GB.

The warning output indicated that 8 sectors of 512B would only come to 4096B or 4kB. But Parted could only adjust the alignment to 17.4kB, so this was still out of alignment as the rest of the output indicated. That gave me the clue I needed to fix it though.

Alignment Fixed

Knowing now that Parted could only get within 17.4kB of the start of my drive, I adjusted the starting sector to match. Since 8 sectors was only 4096B, I moved the start to an even number of sectors beyond the 17.4kB point, choosing 32768B (32kB), or 64 sectors. This change yielded the following results.

(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes                                                               

(parted) mkpart primary 64s 100%

(parted) align-check optimal 1
1 aligned

(parted) quit                                                             
Information: You may need to update /etc/fstab.

The partitioning was successful; using the 64-sector starting point pushed the partition beyond the 17.4kB mark, and it was properly aligned. I formatted the partition with XFS and all went swimmingly.

[root@server ~]# mkfs.xfs /dev/sdb1
 specified blocksize 4096 is less than device physical sector size 131072
 switching to logical sector size 512
 meta-data=/dev/sdb1              isize=256    agcount=32, agsize=8191872 blks
          =                       sectsz=512   attr=2, projid32bit=0
 data     =                       bsize=4096   blocks=262139648, imaxpct=25
          =                       sunit=32     swidth=256 blks
 naming   =version 2              bsize=4096   ascii-ci=0
 log      =internal log           bsize=4096   blocks=128000, version=2
          =                       sectsz=512   sunit=32 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0

Now the device is formatted, mounted, and working like a champ. Who would have known that a simple offset of 32kB could cause such a problem.
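
As a closing note, newer versions of parted can also be asked to pick an aligned starting point for you. I have not re-tested this on the exact iSCSI device above, so treat it as a hedged alternative rather than the method I used, but specifying percentage offsets together with optimal alignment generally lets parted do the math itself (assuming the GPT label is already in place, as above):

# ask parted for optimal alignment and let it choose the exact start sector
parted -a optimal /dev/sdb mkpart primary 0% 100%
# then confirm the result
parted /dev/sdb align-check optimal 1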

If you are working with iSCSI then you may find you need to force disconnect and reconnect an iSCSI device from time to time. Check out my post on how I handle that.

Rename a file that starts with a hyphen/dash

While working on your *nix based system you may run into this issue from time to time. The file may have a hyphen (-) at the start of the filename because of user error, or it may have been added by a malicious bit of software. However the hyphen got there, it can be a pain to deal with. In this post we will answer how to rename a file that starts with a hyphen.

The first time I encountered a filename like this was while searching the files of a compromised website. The offending malware had created several new files on the site, and one had been created with a starting hyphen in the filename. This causes issues when you run 'rm' to remove the file.

rm -filename.php

The hyphen typically tells a command line utility to expect an option. So in this case the 'rm' command attempts to interpret -filename.php as a set of options, which just causes 'rm' to return the following on a Linux-based machine.

rm: invalid option -- 'l'
Try `rm --help' for more information.

Both -f and -i are valid options for the command, so it is not until 'rm' reaches the 'l' that it reports an invalid option. This becomes an issue whenever you try to remove or otherwise edit/manipulate a file with a hyphen at the start of its name.

Dealing with the hyphenated filename

Fortunately there is a relatively easy way to deal with a file that starts with a hyphen/dash: adding a double dash (--) before the filename will fix it.

 rm -- -filename.php

This behavior should be universal for GNU/Linux commands. The double dash/double hyphen tells the command that no more command line options will be given. As a result your favorite command will ignore any further dashes and allow you to mv/cp/rm the file.
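
Since the title of this post promises a rename, here is the same trick applied to mv, along with the other common workaround of prefixing the filename with "./" so that it no longer starts with a dash:

# rename the file using the double dash
mv -- -filename.php filename.php

# or sidestep the leading dash entirely with a relative path
mv ./-filename.php filename.php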

So now you know how to rename a file that starts with a hyphen on the command line. Hopefully that makes your life a little better.

I originally learned of this technique from this helpful post on superuser.com.

Want to learn how to recursively delete specific files? Check out this post to find out how.