Hi,
I have a problem using GFS 6.1 in a 4-node cluster.
My scenario is as follows:
4 nodes share a 100GB SAN device.
Node 1 generates data, while nodes 2-4 only read that data (although the
GFS is mounted rw). The amount of shared data is ~3GB.
Node 1 creates new data and moves the old data aside; after the mv, the
old directory is removed (sketched below).
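Roughly, the rotation on node 1 looks like this (the paths are made up
for illustration, the real layout differs):

    mkdir /mnt/lucene/data.new
    # ... generate the new data set into /mnt/lucene/data.new ...
    mv /mnt/lucene/data /mnt/lucene/data.old
    mv /mnt/lucene/data.new /mnt/lucene/data
    rm -rf /mnt/lucene/data.old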
Nodes 2-4 notice that the data has changed and restart.
The problem is that the deleted inodes are never de-allocated.
Without manual intervention (e.g. "service gfs restart") on node 1,
the filesystem usage grows by about 3GB/day although the actual data is
still only ~3GB.
So, as a workaround, I restart the GFS on node 1 from time to time,
and the unlinked inodes are then de-allocated:
Jul 18 14:04:07 node1 kernel: GFS: fsid=cluster:lucene.0: Scanning for
log elements...
Jul 18 14:04:07 node1 kernel: GFS: fsid=cluster:lucene.0: Found 92
unlinked inodes
Jul 18 14:04:07 node1 kernel: GFS: fsid=cluster:lucene.0: Found quota
changes for 0 IDs
Jul 18 14:04:07 node1 kernel: GFS: fsid=cluster:lucene.0: Done
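If it helps, the workaround boils down to something like this cron
entry (the schedule is just an example, not what I would want to keep):

    # /etc/cron.d/gfs-workaround on node 1: remount GFS every 6 hours
    0 */6 * * * root /sbin/service gfs restart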
I tried various parameters using gfs_tool; however, the deleted inodes
never get removed unless I restart the whole GFS on node 1. Is there
any way to circumvent this issue?
inoded_secs is at 15, and neither gfs_tool reclaim nor gfs_tool shrink
shows any inodes being reclaimed (the exact commands are below).
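For completeness, these are the commands I have been running
(/mnt/lucene is just a stand-in for the real mount point):

    # have the inode daemon scan for unlinked inodes every 15 seconds
    gfs_tool settune /mnt/lucene inoded_secs 15
    # check that the tunable really took effect
    gfs_tool gettune /mnt/lucene | grep inoded_secs
    # try to reclaim unused metadata blocks
    gfs_tool reclaim /mnt/lucene
    # throw unused inodes out of memory
    gfs_tool shrink /mnt/lucene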