Re: GFS2 try_rgrp_unlink consuming lots of CPU

Hi,

On Mon, 2009-10-26 at 07:34 -0700, Miller, Gordon K wrote:
> Occasionally, we encounter a condition where the CPU system time
> increases dramatically (30% to 100% of total CPU time) for a period of
> several seconds to tens of minutes. Using oprofile we observed that the
> majority of CPU time was being spent in gfs2_bitfit, with rgblk_search
> and try_rgrp_unlink in the backtrace. Further instrumentation using
> SystemTap has shown try_rgrp_unlink being called repeatedly during the
> period of high system usage, with durations averaging 400 milliseconds
> per call. Often, try_rgrp_unlink will return the same inode as in
> previous calls. Attached is output from oprofile and from a SystemTap
> probe on the return from try_rgrp_unlink, showing the number of times
> rgblk_search (rgblk_search_count) and gfs2_bitfit (bitfit_count) were
> called during this invocation of try_rgrp_unlink, the duration in
> seconds of the try_rgrp_unlink function, selected elements of the rgd
> structure, and the returned inode (return->i_ino). In this case, the
> behavior persisted for 15 minutes beyond the capture listed here. The
> SystemTap scripts used in this capture follow the output. Our kernel
> version is 2.6.18-128.7.1 plus the patch to gfs2_bitfit contained in
> linux-2.6-gfs2-unaligned-access-in-gfs2_bitfit.patch.
> 
> Has anyone experienced this behavior?
> 
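(As an aside, since the SystemTap scripts mentioned above are not
reproduced in this archive, a minimal probe along the lines described
would look something like the untested sketch below. It assumes the
gfs2 module debuginfo is installed, and the gfs2_bitfit probe point may
need adjusting if that function ends up inlined in your build.)

#!/usr/bin/stap
# Sketch only: time each try_rgrp_unlink() call and count how many
# times gfs2_bitfit() runs underneath it. Function names are taken
# from the report above; requires gfs2 module debuginfo.

global start_us, bitfit_count

probe module("gfs2").function("try_rgrp_unlink")
{
        start_us[tid()] = gettimeofday_us()
        bitfit_count[tid()] = 0
}

probe module("gfs2").function("gfs2_bitfit")
{
        if (tid() in start_us)
                bitfit_count[tid()]++
}

probe module("gfs2").function("try_rgrp_unlink").return
{
        if (tid() in start_us) {
                printf("try_rgrp_unlink: %d us, %d calls to gfs2_bitfit\n",
                       gettimeofday_us() - start_us[tid()],
                       bitfit_count[tid()])
                delete start_us[tid()]
                delete bitfit_count[tid()]
        }
}
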
There are a couple of things which occur to me at this point. Firstly, I
wonder what size the rgrp bitmaps are on your filesystem. You can change
their size at mkfs time, which allows a trade-off between the size of
each rgrp and the number of rgrps. Sometimes altering this makes a
difference to the time spent searching the bitmaps.
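
For reference, the relevant knob is the -r option to mkfs.gfs2, which
sets the resource group size in megabytes. Something along the lines
below would do it; the device, cluster name, journal count and size are
placeholders only, and re-running mkfs destroys the existing
filesystem, so this only applies to a rebuild:

# Illustrative only: -r sets the resource group size (in MB) at mkfs
# time; device, cluster:fsname and journal count are placeholders.
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 3 -r 512 /dev/vg0/lv_gfs2

Larger -r values give fewer, larger rgrps; smaller values give more,
smaller ones, which is the trade-off mentioned above.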

Secondly, are you doing anything along the lines of holding inodes open
on one node while unlinking them on another node? There was also a bug,
fixed in RHEL 5.4 (and improved further in 5.5), where the dcache
sometimes held onto inodes longer than it should have. That can make
the situation worse as well.

You should certainly get better results than you appear to be getting
here,

Steve.
 

