On Wed, Mar 14, 2012 at 03:13:29PM -0400, Joseph Hardeman wrote:
> Thank you for responding. So you aren't using the vol files in
> /etc/glusterfs to control anything, such as afra or unity?

Nope - indeed I have no idea what afra or unity are (and googling for
"gluster afra unity" doesn't match anything useful). I have used the
CLI utils as per the documentation, and everything "just works".

There are three files in /etc/glusterfs/, but they have not changed:

  $ ls -l /etc/glusterfs/
  total 12
  -rw-r--r-- 1 root root  229 2011-11-18 07:00 glusterd.vol
  -rw-r--r-- 1 root root 1908 2011-11-18 07:00 glusterfsd.vol.sample
  -rw-r--r-- 1 root root 2005 2011-11-18 07:00 glusterfs.vol.sample

All the config changes are instead reflected under /etc/glusterd/:

  $ ls /etc/glusterd/
  geo-replication  glusterd.info  nfs  peers  vols

I see there's lots of *old* documentation for gluster <=2.x which talks
about doing things manually, but the new documentation has been
seriously dumbed down and doesn't even mention the module stacking
configuration files.

> I am just asking because after building my own rpms and installing
> them, I was able to build like I did before and I didn't see the high
> CPU usage. Now the weird thing I saw was during a test failover and
> stopping/starting glusterd on the first of the pair: I did see high
> CPU usage and the VMs hung.

Attaching strace to the gluster processes might give you an idea of
what's happening.

Regards,

Brian.
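
P.S. A minimal sketch of the strace suggestion above, assuming pgrep
and strace are installed; the log path and strace options here are
illustrative, not from the original mail:

```shell
# Find any running glusterfsd processes and attach strace to each,
# logging timestamped syscalls to a per-PID file under /tmp.
# (Attaching requires root; paths and options are assumptions.)
pids=$(pgrep glusterfsd || true)
if [ -z "$pids" ]; then
    echo "no glusterfsd processes found"
else
    for pid in $pids; do
        # -f follows forks, -tt adds microsecond timestamps
        strace -f -tt -o "/tmp/glusterfsd.$pid.strace" -p "$pid" &
    done
fi
```

Inspecting the resulting log for busy loops (e.g. repeated futex or
poll calls) is usually the quickest way to see where the CPU time is
going.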