gfs 5.1 question

Ok, a question for the old-timers:

gfs 5.1
GULM
RH 7.3, custom kernel
~350GB FC array.
I.e., ancient stuff - I can't change it. Migrating to the new gfs cluster
I just built soon.

df shows 84% of the volume used, but writes fail with out-of-space errors.

What tests can I run to determine why this is happening, and better yet,
how can I eliminate this unwritable 16%? Do I need to 'reclaim' inodes or
something?
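
I'm guessing something along these lines, assuming the 5.1-era gfs_tool
even has these subcommands (and with /mnt/gfs standing in for our real
mountpoint) - corrections welcome:

    # Break down block usage by type - plain df(1) only sees the
    # filesystem total, so space eaten by metadata stays hidden.
    gfs_tool df /mnt/gfs

    # Give freed-but-still-allocated metadata blocks back to the
    # free pool, which might recover some of that missing 16%.
    gfs_tool reclaim /mnt/gfs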


Thanks,
-C

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
