Hello,

I've got a 5-node GFS cluster (RHEL3u4, GFS 6.0.2-24, kernel 2.4.21-24.0.1) with 3 volumes, one of which is approximately 500GB and contains several thousand small files.

When I run a find on that volume, or slocate's cron job runs against it, or I rsync it, the node accessing the volume gets into a state where it can no longer fork, and nothing can be done with the machine until it is restarted (usually requiring a "fence_node" from another node).

The cluster is configured with 3 of the nodes acting as lock managers. The nodes are DL360s with 2GB of RAM each and QLogic 2342 dual-port cards connected to an MSA1000. The journals are not on their own volumes, and the default mount options are used.

Is this a known problem? I've searched for other posts describing it but have not had any luck. Any ideas as to what might be causing this?

Thanks,
Corey
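
P.S. In case it helps, here is roughly what triggers it -- the mount point and hostname below are placeholders, not the real ones:

  # any one of these walks the whole volume and eventually wedges the node
  find /mnt/gfs_data -type f > /dev/null
  rsync -a /mnt/gfs_data/ backuphost:/backup/gfs_data/
  # the nightly slocate/updatedb cron job hits the same mount

Would it be worth watching lowmem and the GFS slab caches while one of these runs? Something like (slab cache names assumed):

  # show low-memory counters and any GFS slab caches every 5 seconds
  watch -n 5 'grep -i low /proc/meminfo; grep gfs /proc/slabinfo'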