Reclaim some free space on your server 'with this weird trick' (KVM/Xen/dedi only)


rm_rm_ IPv6 Advocate, Veteran
edited September 2014 in Tutorials

Before:

# df -h /
Filesystem   Size  Used Avail Use% Mounted on
/dev/sda2    146G  871M  138G   1% /

Applying the setting (replace '/dev/sda2' with your actual partition name):

# tune2fs -m 0 /dev/sda2 
tune2fs 1.42.5 (29-Jul-2012)
Setting reserved blocks percentage to 0% (0 blocks)

After:

# df -h /
Filesystem   Size  Used Avail Use% Mounted on
/dev/sda2    146G  871M  146G   1% /

Instantly 8GB more!

What it's all about:

By default the Ext2/3/4 filesystems are created with a setting to set aside 5% of free space for the root user, not shown in the free space count and not usable by other users and their programs. This is done "just in case", with practically just one scenario in mind: if something goes wrong and your server consumes all its free disk space, the root user could still log in and check logs/crashdumps/etc and generally fix the situation.

Goes without saying that this is not something that happens all that often, so it seems unjustified to keep 5% of free space reserved at all times just for that. Especially considering that while on a 10GB filesystem 5% is just 512MB, on a 160GB disk it's 8GB, and on a 500GB one - around 25GB.
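
The arithmetic can be sketched in a few lines of Python (a standalone illustration; sizes treated as binary GiB, which is what makes 5% of a 10GB filesystem come out to 512MB):

```python
# Space eaten by the ext2/3/4 root reserve at a given percentage
def reserved_bytes(fs_bytes, percent=5):
    """Return the bytes set aside for root at the given reserve percentage."""
    return fs_bytes * percent // 100

GiB = 1024 ** 3
for size in (10, 160, 500):
    print(f"{size} GiB filesystem -> {reserved_bytes(size * GiB) / GiB:.2f} GiB reserved")
# 10 GiB filesystem -> 0.50 GiB reserved
# 160 GiB filesystem -> 8.00 GiB reserved
# 500 GiB filesystem -> 25.00 GiB reserved
```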

In case the described free space exhaustion scenario actually happens, you could likely use some sort of live rescue system offered by your provider instead to recover by deleting some files. Or if you want to still keep some of the "safety" space, you could set it to 1% instead of 5% by using "tune2fs -m 1".

This all is only for Ext2/3/4, other filesystems such as Btrfs do not have this quirk/feature.
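
If you want to check the current setting before (or instead of) changing it, tune2fs can print it. A quick sketch, with '/dev/sda2' again standing in for your actual partition:

```shell
# Show the current reserved block count and percentage for the filesystem
tune2fs -l /dev/sda2 | grep -i 'reserved block'

# The safer variant: keep a 1% margin instead of dropping the reserve entirely
tune2fs -m 1 /dev/sda2
```

(Requires root, and the change takes effect immediately, no reboot or remount needed.)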

Comments

  • When it's time to squeeze out some space. Thanks for the tip.

  • TheLinuxBugTheLinuxBug Member
    edited September 2014

    The one thing you are overlooking with this, which is important to note: when you use 100% of the available volume space, a lot of tools will thereafter fail to work on the volume. You cannot repair a volume in that state. If it is your / volume and you fill it, you will likely not be able to boot the server. Also, when the server fills the / volume to 100%, it will likely cause bad things to occur on your server, such as data loss and a possible kernel panic. Your only option if you allow the drive to fill to 100% is to mount the volume using a rescue CD and delete data until there is enough space free to repair the volume and get the server to boot.

    While this sounds like a cool trick, BE VERY CAREFUL when using it on the / volume of a server, or you could end up just as unhappy then as you were happy when you found this little trick. That 5% reserve is there for a reason: it keeps novices from running into all of the above-mentioned issues that can occur when the drive fills to 100%.

    my 2 cents.

    Cheers!

  • @TheLinuxBug said:
    BE VERY CAREFUL

    +1, proceed with caution doing this.

  • 5% made sense when we had less than a TB on the drive, but now with 4TB drives coming out 5% is a little expensive (5% of 4TB = 200GB?).

  • NeoonNeoon Community Contributor, Veteran

    I just thought JESUS, and then... I have one 1GB Iceland VPS, so I thought I could double that space, but... dang it. I think it's a bad idea to do that on a 1GB HDD VPS; next time the HDD is full I would be fucked, so not worth it.

  • To note, you can set the reserve lower, sure, I just would not set it to 0%. Setting it to 0% for reserve is asking for trouble if the drive fills all the way, as I mentioned.

    I am in no way against recovering some extra space, especially on larger volumes; just be super careful if you're going to set it to 0%, especially on your / (root partition).

    Cheers!

  • rm_rm_ IPv6 Advocate, Veteran
    edited September 2014

    TheLinuxBug said: when you use 100% of the available volume space, a lot of tools will thereafter fail to work on the volume. You cannot repair a volume in that state.

    I don't think I am overlooking this, as I explained in the original message what was the purpose of this reserve (and hence what you may lose if you remove it).

    TheLinuxBug said: mount the volume using a rescue cd and delete data

    Mentioned as well.

    TheLinuxBug said: That 5% reserve is there for a reason.

    I really do not think it is. Not a one-size-fits-all 5% setting, at any rate, as with 1-2TB disks this grows to downright ridiculous sizes of 50-100GB. But sure, if you keep "really important" stuff on a server, maybe you'd be better off tuning this to 1% instead of removing it altogether. Personally I see no reason to leave any, and evidently I'm not alone, as no other modern filesystem implements this antiquated reserve.

  • Mark_RMark_R Member
    edited September 2014

    Good to know! Thanks for the tip @rm_

    But I won't apply it anytime soon; this space reserve thingy actually saved me once. I was torrenting and forgot to calculate the remaining disk space! I'm so glad I could still access my server through SSH and simply delete the oversized files.

  • @rm_ said:
    Before:

    Just got 20GB back on my 500GB KVM, thanks xD

  • KuJoeKuJoe Member, Host Rep
    edited September 2014

    Just freed up 400GB on a single server by setting the reserved free space to 1%. Thanks! :)

    EDIT: Freed up 991GB across 10 servers by setting the reserve to 1%. The command can be run on other partitions besides /; I don't see why the /vz partition would need 5% free for root.

  • perennateperennate Member, Host Rep
    edited September 2014

    The reserved space is primarily for performance reasons, not the just-in-case recovery reason that you mention. However it sounds like it's fine on ext4.

    http://unix.stackexchange.com/questions/7950/reserved-space-for-root-on-a-filesystem-why

    Edit: this is also why it's a percentage and not a fixed amount

  • ATHKATHK Member
    edited September 2014

    This is done on any drive, virtualized or not, and also on slave drives in a dedicated box; they hold an extra reserved portion which you can safely remove (slave only)..

  • I always set this to a smaller value at OS installation on large-HDD systems, e.g. 1% on a 250GB HDD.

  • @msg7086 said:
    I always set this to a smaller value at OS installation on large-HDD systems, e.g. 1% on a 250GB HDD.

    I usually leave it at the default unless I'm using a HUGE hard drive (1TB+) for an OS drive. Data drives get the reserved space set to 0.

  • If you have a really big drive (1TB or more), it's even better to adjust the inode count and disable resize_inode, like this:

    mkfs.ext4 -N 1000000 -O ^resize_inode /dev/sd…
    
  • @Infinity580 said:
    I just thought JESUS, and then... I have one 1GB Iceland VPS, so I thought I could double that space, but... dang it. I think it's a bad idea to do that on a 1GB HDD VPS; next time the HDD is full I would be fucked, so not worth it.

    It wouldn't double it; it would free up ~50MB.

  • @KuJoe said:
    Just freed up 400GB on a single server by setting the reserved free space to 1%. Thanks! :)

    EDIT: Freed up 991GB across 10 servers by setting the reserve to 1%. The command can be run on other partitions besides /; I don't see why the /vz partition would need 5% free for root.

    The 5% thing is a very, very old traditional default from when drives were smaller; you can get away with a sub-1% amount, and you can also set the reserved blocks by hand.

    I actually use this a lot, since I typically deal with storage servers. My strategy is to create small partitions for things like /, /var/log, and other normal system operational directories, and then use the leftover space in one big partition for storing all the files that will be uploaded/downloaded to the server, with reserved blocks set to 0, noexec set (so it's read/write only), and occasionally noatime too.
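
    Setting the reserve by block count rather than by percentage can be sketched like this (the count and device name are examples only; 262144 blocks is 1 GiB at a 4 KiB block size):

```shell
# Reserve a fixed number of blocks instead of a percentage of the filesystem
tune2fs -r 262144 /dev/sdb1
```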

  • elijahpaulelijahpaul Member
    edited September 2014

    Thank you for this. Used it on a data-only partition, reclaimed 93GB!

    Before:

    Filesystem   Size  Used Avail Use% Mounted on
    /dev/sda3    1.8T  1.7T   30G  99% /home

    Applied:

    [root@localhost ~]# tune2fs -m 0 /dev/sda3
    tune2fs 1.42.9 (28-Dec-2013)
    Setting reserved blocks percentage to 0% (0 blocks)

    After:

    [root@localhost ~]# df -h
    Filesystem   Size  Used Avail Use% Mounted on
    /dev/sda3    1.8T  1.7T  123G  94% /home

  • What mount point should I be using here? This is a KVM instance.

    root@107:~# df -h
    Filesystem                                              Size  Used Avail Use% Mounted on
    rootfs                                                  158G  653M  149G   1% /
    udev                                                     10M     0   10M   0% /dev
    tmpfs                                                    50M  172K   50M   1% /run
    /dev/disk/by-uuid/c885deb5-cfa4-4052-baae-6ad30d50697b  158G  653M  149G   1% /
    tmpfs                                                   5.0M     0  5.0M   0% /run/lock
    tmpfs                                                   100M     0  100M   0% /run/shm
    root@107:~#
  • I got 11GB more on my 240GB

    TheLinuxBug said: Also, when the server fills the / volume to 100%, it will likely cause bad things to occur on your server, such as data loss and a possible kernel panic.

    Uh.. no.

    It won't be able to store more data, but it won't lose any: the kernel won't tell the software that writes were successful when they weren't, so it's not data loss, just lack of availability.

    And kernel panics? If you are running an init that crashes when it can't write to the storage medium (for what, anyway? Things like logging shouldn't be done in init), your system is unstable to begin with.

    The only difference is that your tab completion will break when you're logged in as root and the disk is 100% full. But everything will still function; you can still run find and rm just fine. Even if you keep the 5% limit, non-root logs won't be written. And if you don't monitor your disk space, your last 5% will be eaten up as well, so the 5% reserve really just shifts the problem.
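
    If monitoring is the real fix, a minimal check can be sketched with Python's standard library (the path and the 90% threshold are arbitrary examples):

```python
import shutil

def disk_usage_percent(path="/"):
    """Used space as a percentage of total, roughly df's Use% column."""
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

if disk_usage_percent("/") > 90:
    print("warning: root filesystem over 90% full")
```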

  • kcaj said: What mount point should I be using here? This is a KVM instance.

    on /

  • Tried that, wouldn't work. Thanks anyway.

  • rm_rm_ IPv6 Advocate, Veteran
    edited December 2014

    Take a look at

    ls -la /dev/disk/by-uuid/c885deb5-cfa4-4052-baae-6ad30d50697b

    Basically you can already use this link as the device name for tune2fs; but doing an ls on the symlink will tell you what is the actual device for the root FS.
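
    Equivalently, readlink can resolve the symlink in one step (same UUID as in the df output above):

```shell
# Print the real device node behind the by-uuid symlink
readlink -f /dev/disk/by-uuid/c885deb5-cfa4-4052-baae-6ad30d50697b
```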

  • "Hard drive manufacturers HATE him!"

  • @rm_ Thanks, that did it.

  • CableChief_JRCableChief_JR Member
    edited December 2014

    @mojojuju said:
    "Hard drive manufacturers HATE him!"

    LOL
