reducing imaxpct on linux

Hi,

I have a 34TB XFS partition. The first time we ran out of space, I
thought the reason was that the filesystem was not mounted with the
inode64 option, so I unmounted it and mounted it again with inode64.
(Before doing that I deleted some files, because I needed a quick fix,
even a temporary one.) I also moved the oldest files away and back, as
the XFS FAQ suggests. A few days ago the "No space left on device"
message appeared again, but df still shows about 9TB free, and df -i
shows that only 1% of the inodes are used.
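For reference, the 1% figure can be double-checked from the raw counts
that df -i reports, rather than its rounded percentage column. A minimal
sketch (the inode counts below are made up for illustration, not taken
from the system described above):

```shell
# Hypothetical counts in the style of df -i's IUsed / Inodes columns.
iused=3500000
itotal=350000000

# Compute the usage percentage with one decimal place.
pct=$(awk -v u="$iused" -v t="$itotal" 'BEGIN { printf "%.1f", 100 * u / t }')
echo "inode usage: ${pct}%"
```

On a real system the two numbers would come from `df -i /mountpoint`.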

The filesystem was much smaller when it was created, and it was created
with the default maxpct value of 25%. Now that it has grown to 34TB, 25%
seems far too large, and we are running out of space. Actual inode usage
is only about 1%, so I decided to reduce maxpct to 5%.

I tested this on a 5GB filesystem, and 'xfs_growfs -m 5 /dev/sdb1'
succeeded, but I'm still worried about the result in the production
environment. Also, the production system uses LVM, while the test used a
plain disk.
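One way to confirm that the change took effect on the test filesystem is
to read the imaxpct field back out of xfs_info. A minimal sketch, using
a sample xfs_info output fragment with typical values for a 5GB
filesystem (the geometry numbers are hypothetical, not from the poster's
system):

```shell
# Sample of the kind of output 'xfs_info /dev/sdb1' prints after
# 'xfs_growfs -m 5' (values are illustrative).
sample='meta-data=/dev/sdb1              isize=256    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1310720, imaxpct=5'

# Extract the imaxpct value to confirm the new cap is in place.
imaxpct=$(printf '%s\n' "$sample" | sed -n 's/.*imaxpct=\([0-9]*\).*/\1/p')
echo "imaxpct is now ${imaxpct}%"
```

On the real filesystem the same sed expression can be applied directly
to the output of `xfs_info` on the mount point.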

What could happen if I reduce imaxpct? Is it safe, or painful?
And how likely is it, really, that the 25% value is causing the error?

thanks very much,

Istvan




_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
