eric johnson wrote:
Suppose I'm on a GFS partition with 50 GB free, and then I rudely
drain it of inodes with a Perl script that looks like this.
Executive summary - make a bunch of randomly named files...
--makefiles.pl
#!/usr/bin/perl
use strict;
use warnings;

# usage: makefiles.pl <count> [suffix]
my $max = shift(@ARGV);
my $d   = shift(@ARGV);
$d = "" unless defined $d;

# create <count> small files with pseudo-random names
for (my $i = 0; $i < $max; $i++) {
    my $filename = sprintf("%s-%d%s", rand() * 100000, $i, $d);
    open my $fh, '>', $filename or die "can't create $filename: $!";
    print $fh "This is fun!!\n";
    close $fh;
}
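
Run it with a file count and an optional filename suffix; for
example (the count and suffix here are arbitrary):

perl makefiles.pl 100000 .junk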
In fact, to be extra cruel, I make a bunch of subdirectories on the
partition and then run an instance of this script in each
subdirectory, until the box is saturated with work and every instance
is busily draining away the inodes. This takes a good six hours.
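
Something along these lines will spawn one instance per subdirectory -
a minimal sketch, where the worker count, directory names, and
per-worker file count are all made up:

--rundirs.pl
#!/usr/bin/perl
use strict;
use warnings;

my $workers = 10;                  # number of parallel instances (arbitrary)
for my $n (1 .. $workers) {
    my $dir = "stress-$n";         # hypothetical subdirectory name
    mkdir $dir or die "mkdir $dir: $!";
    defined(my $pid = fork()) or die "fork: $!";
    if ($pid == 0) {               # child: run makefiles.pl in its own dir
        chdir $dir or die "chdir $dir: $!";
        exec 'perl', '../makefiles.pl', '1000000'
            or die "exec makefiles.pl: $!";
    }
}
wait() for 1 .. $workers;          # block until every child exits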
Then, when the inodes are drained, I kill the runaway scripts and
delete all the files that were created.
Although everything has been deleted, I still don't get back all of
my space. In fact, the vast majority of it still appears to be chewed
up by GFS, which won't give the space back. So when I copy large
files to it, I get errors like

cp: writing `./src.tar.gz': No space left on device

even though the file is clearly under a gig and should fit.
But if I run a gfs_reclaim on that partition, it all magically comes
back and I can put largish files on the disk again.
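
(By gfs_reclaim I mean reclaiming the unused metadata on the
mountpoint; if memory serves, the actual invocation goes through
gfs_tool, e.g.

gfs_tool reclaim /mnt/gfs

with /mnt/gfs standing in for the partition's mountpoint.)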
Is this a well-known characteristic of GFS that I somehow missed
reading about?
Hmmm ... looks like a bug - the inode de-allocation code is supposed
to put the meta-data blocks back on the free list. Thank you for
letting us know. I will open a bugzilla to fix this up.
-- Wendy