What do you do when you know you have plenty of disk space, but the system insists the disk is full?! I was working on a server migration, moving 94GB of user files from the old server to the new one. Since we weren't planning on much growth on the new server, I provisioned a 100GB partition for the user files. A perfect plan, right?… So I thought. After rsync'ing the user files, the new server was showing 100% disk usage:
Filesystem*    Size  Used  Avail  Use%  Mounted on*
               99G   94G   105M   100%  /user_dir
Given competing tasks, at first glance I only saw the 100%. Naturally I assumed something had gone wrong with my rsync, or that I had forgotten to clear the target partition. So I deleted everything from the target partition and rsync'd again. When the result was the same, it gave my brain pause to say… what?!
My first thought was that the block size was different on the two servers: the old server's block size was 4kB, so perhaps the new server had a larger one. As we joked, too much air in the files! Using the following commands, it turned out the block size was the same on both systems:
blockdev --getbsz partition
# blockdev --getbsz /dev/mapper/my_lv_name
So the block size of the file system on both servers is 4kB.
I started digging through the man pages of tune2fs and dumpe2fs (and Google) to see if I could figure out what was consuming the disk space. Perhaps there was a defunct process holding on to blocks (like from a deletion); there wasn't. In my research I found the root cause: new ext2/3/4 partitions set aside a 5% reserve for file system performance and to ensure available space for "important" root processes. Not a bad idea for the root and var partitions, but this approach doesn't make sense in most other use cases, such as this one: user data.
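To get a feel for how much space such a reserve actually consumes, multiply the reserved block count by the block size. A quick shell-arithmetic sketch, using illustrative figures (substitute the values `tune2fs -l` reports for your own volume):

```shell
# Illustrative figures; substitute your own volume's values.
BLOCK_SIZE=4096          # bytes, as reported by blockdev --getbsz
RESERVED_BLOCKS=2608282  # "Reserved block count" from tune2fs -l

# Reserved space in MiB:
echo $(( RESERVED_BLOCKS * BLOCK_SIZE / 1024 / 1024 ))
# prints 10188 -- roughly 10GB held back from ordinary users
```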
Using the tune2fs command, we can view the "Reserved block count" like this:
tune2fs -l /dev/mapper/vg_name-LogVol00
The specific lines we are interested in are:
Block count: 52165632
Reserved block count: 2608282
These lines show that there is a 5% reserve on the disk/logical volume (2608282 is 5% of 52165632). We fix it with this command:
tune2fs -m 1 /dev/mapper/vg_name-LogVol00
This reduces the reserve to 1%. The resulting "Reserved block count" reflects the change:
Block count: 52165632
Reserved block count: 521107
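As a sanity check, the new reserved count works out to about 1% of the block count; a one-line awk calculation over the figures above confirms it:

```shell
# Ratio of reserved to total blocks after tune2fs -m 1
awk 'BEGIN { printf "%.2f%%\n", 521107 * 100 / 52165632 }'
# prints 1.00%
```

The difference between the old and new reserves, (2608282 − 521107) × 4kB, is roughly 8GB handed back to the mounted filesystem. If a volume will only ever hold user data, `tune2fs -m 0` removes the reserve entirely.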
While this situation is fairly uncommon, hopefully this will at the least answer your questions and help you better understand the systems you manage.
*The names in the above have been changed to protect the innocent.