> > Are your files split-brained?
> > gluster v heal img info split-brain
> >
> > I see a lot of problems with your self-heal daemon connecting:

As far as I can see, the nodes are not split-brained:

# gluster v heal img info split-brain
Gathering list of split brain entries on volume img has been successful

Brick gluster1:/var/gl/images
Number of entries: 0

Brick gluster2:/var/gl/images
Number of entries: 0

Brick gluster3:/var/gl/images
Number of entries: 0

Brick gluster4:/var/gl/images
Number of entries: 0

Brick gluster5:/var/gl/images
Number of entries: 0

Brick gluster6:/var/gl/images
Number of entries: 0

> $ service glusterd stop
> $ killall glusterfs
> $ killall glusterfsd
> $ ps aux | grep glu   <- Make sure everything is actually cleaned up

Yes, that is actually the first thing I did on the problematic nodes.
Unfortunately it didn't help: the CPU load came back within 3-4 minutes.

> Have you recently run a rebalance?

A rebalance was running when the problem occurred, and I stopped it to see
whether it was causing the trouble. I will try running it again (the commands
I plan to use are sketched in the P.S. below).

> Are you having trouble accessing those directories? It looks like the fix
> layout failed for those two.

I can access those directories via the GlusterFS client:

# grep gluster /etc/fstab
gluster1:/img /media glusterfs defaults,_netdev 0 1

# ls -la /media/www/ | wc -l
47

/www/thumbs contains an excessive number of files, so I just stat something
inside it:

# ls -l /media/www/thumbs/1000025.jpg
-rw-r--r-- 1 apache apache 4365 Oct 8 2009 /media/www/thumbs/1000025.jpg

Everything looks fine.

Thank you,
Alex
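
P.S. For reference, the rebalance commands I plan to use; this is only a
sketch and assumes a GlusterFS release with the 3.3-style rebalance CLI:

# gluster volume rebalance img status
(first confirm the previously stopped rebalance has completely finished)

# gluster volume rebalance img fix-layout start
(redo just the directory layout fix, since the layout reportedly failed for
two directories)

or, if a full pass turns out to be needed:

# gluster volume rebalance img start
(fixes the layout and also migrates data)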