LVM

6GB free = 100% disk usage?!

What to do when you have plenty of available disk space but the system is telling you the disk is full?!  I was working on a server migration, moving 94GB of user files from the old server to the new server.  Since we aren’t planning on seeing a lot of growth on the new server, I provisioned a 100GB partition for the user files.  A perfect plan, right?…  So I thought.  After rsync’ing the user files, the new server was showing 100% disk usage:

Filesystem*            Size  Used Avail Use% Mounted on*
/dev/mapper/my_lv_name
                       99G   94G  105M 100% /user_dir

Given competing tasks, at first glance I only saw the 100%.  Naturally I assumed something went wrong with my rsync, or that I had forgotten to clear the target partition.  So I deleted everything from the target partition and rsync’d again.  When the result was the same, it gave my brain pause to say…what?!

My first thought was that the block size was different on the two servers: the old server’s block size was 4kB, so perhaps the new server had a larger block size.  As we joked, too much air in the files!  It turns out, using the following command, that the block size was the same on both systems:

usage:
blockdev --getbsz partition
# blockdev --getbsz /dev/mapper/my_lv_name 
4096

So the block size of the file system on both servers is 4kB.
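As a cross-check, assuming an ext2/3/4 file system, the same figure shows up in the superblock dump (device name illustrative, as above):

# the superblock also records the file system block size
tune2fs -l /dev/mapper/my_lv_name | grep -i "block size"
# Block size:               4096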

I started digging through the man pages of tune2fs and dumpe2fs (and Google) to see if I could figure out what was consuming the disk space.  Perhaps a defunct process was holding on to blocks (like from a deletion); there wasn’t one.  In my research I found the root cause: new ext2/3/4 partitions set aside a 5% reserve for file system performance and to ensure available space for “important” root processes.  Not a bad idea for the root and var partitions, but this approach doesn’t make sense in most other use cases, in this case user data.
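As an aside, a quick way to rule out the deleted-but-still-open-file theory is lsof; this is just an illustrative check, assuming lsof is installed, using the /user_dir mount point from the df output above:

# list open files on the /user_dir file system with a link count below 1,
# i.e. files that were deleted but are still held open by a process
lsof +L1 /user_dir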

Using the tune2fs command, we can see the “Reserved block count” like this:

tune2fs -l /dev/mapper/vg_name-LogVol00

The specific lines we are interested in are:

Block count:              52165632
Reserved block count:     2608282
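If you only want those two lines, you can filter the output (device name illustrative, as before):

# prints just the Block count and Reserved block count lines shown above
tune2fs -l /dev/mapper/vg_name-LogVol00 | grep -i "block count"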

These lines show that there is a 5% reserve on the disk/Logical Volume (2608282 reserved blocks out of 52165632 total is right at 5%).  We can reduce it with this command:

tune2fs -m 1 /dev/mapper/vg_name-LogVol00

This reduces the reserve to 1%.  The resulting Reserved block count reflects this 1%:

Block count:              52165632
Reserved block count:     521107
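For a sense of scale, with the 4kB block size that difference in reserved blocks works out to roughly 8GB handed back to the user partition; a quick back-of-the-envelope check:

# reserved blocks freed by dropping the reserve from 5% to 1%,
# converted to MiB using the 4096-byte block size from above
echo $(( (2608282 - 521107) * 4096 / 1024 / 1024 ))
# 8153

Note that tune2fs also accepts -r to set the reserved block count to an absolute number rather than a percentage, if 1% is still more than you want to give up.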

While this situation is fairly uncommon, hopefully this will at least answer your questions and help you better understand the systems you manage.

*The names in the above have been changed to protect the innocent.

Changing the Volume Group Name

One of the problems with cloning a system is that the clone has the same volume group names as the server it was cloned from.  Not a huge problem, but it can limit your ability to leverage the volume group.  The fix appears easy, but there is a gotcha.

RedHat provides a nice utility: vgrename

If you use that command and think you are done, you will be sorely mistaken if your root file system is on a volume group!  I’m speaking from experience there, so listen up!

If you issue the vgrename command:

# vgrename OldVG_Name NewVG_Name

it works like a charm.  If you happen to reboot your system at this point, you are in big trouble… the system will kernel panic.

If you updated/changed the name of the VG that contains the root file system, you need to modify the following two files to reflect the NewVG_Name (example entries after the list).

  1. In /etc/fstab.   This one is obvious and I usually remember.
  2. In /etc/grub.conf.  Otherwise the kernel tries to mount the root file-system using the old volume group name.
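For illustration, the lines you are hunting for look something like the following; the LogVol00 name, file system type, and kernel path here are hypothetical, so match them to what is actually in your files:

# /etc/fstab: the device column must reference the new VG name
/dev/NewVG_Name/LogVol00   /    ext3    defaults    1 1

# /etc/grub.conf: the kernel line's root= parameter must match as well
kernel /vmlinuz-<kernel-version> ro root=/dev/NewVG_Name/LogVol00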

The change is easy using ‘vi’.  Open the file in vi, then run a sed-style substitution from within vi; for example:

vi /etc/fstab
:%s/OldVG_Name/NewVG_Name/g
:wq

Don’t forget to save the changes, and remember to make the same substitution in /etc/grub.conf.
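If you would rather make the change non-interactively, the same substitution can be done with GNU sed directly; a one-liner sketch that keeps .bak copies of the originals:

# substitute the VG name in both files, keeping .bak backups
sed -i.bak 's/OldVG_Name/NewVG_Name/g' /etc/fstab /etc/grub.conf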