We already have #2 set up that way. Also, we are not flooding the network; I am copying data from within the storage server itself. Here is what my gfs_tool df displays. I am wondering if the inode and metadata counts being so low will cause an issue; I would think they shouldn't be what they are.

SB lock proto = "lock_dlm"
SB lock table = "SAN1:VserversFS"
SB ondisk format = 1309
SB multihost format = 1401
Block size = 4096
Journals = 4
Resource Groups = 2794
Mounted lock proto = "lock_dlm"
Mounted lock table = "SAN1:VserversFS"
Mounted host data = ""
Journal number = 0
Lock module flags =
Local flocks = FALSE
Local caching = FALSE
Oopses OK = FALSE

Type          Total        Used         Free         use%
----------------------------------------------------------
inodes        5            5            0            100%
metadata      66           66           0            100%
data          182996209    0            182996209    0%

Michael Conrad Tadpol Tilstra wrote:
>On Tue, Jul 05, 2005 at 02:32:00PM -0400, Scott.Money@xxxxxxxxxxxxx wrote:
>
>> We are seeing a similar issue. We have a 3-node GFS system that
>> uses a GNBD server as storage. We originally ran into this problem
>> quite frequently, but hard-setting our NICs to 100Mbit full duplex
>> has limited the system freezes to "large" data transfers (e.g.
>> copying 500MB files via scp or creating 500MB Oracle tablespaces).
>> The good news is that the fencing works ;-)
>> Let me know if you get any information about this.
>
>What you describe here sounds more like flooding of the network. If you
>send too much data over the same network device as the heartbeat & locking
>traffic, you can starve out the heartbeats. There have been a bunch of
>emails about this already on this list. The way to deal with it is one of:
>1: don't ever flood the network, 2: use a private network for heartbeats
>& lock traffic, or 3: use the traffic shaping kernel modules to provide a
>guaranteed bandwidth for the heartbeat & locking traffic.
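P.S. For anyone else chasing the duplex problem Scott describes, forcing a NIC to 100Mbit full duplex is typically done with something like the line below. This is just a sketch; eth0 is an assumed interface name, so substitute your actual device, and remember the switch port has to be forced to the same settings or you will get a duplex mismatch.

  # Force 100Mbit full duplex and disable autonegotiation (eth0 is an assumption)
  ethtool -s eth0 speed 100 duplex full autoneg off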
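And for anyone wanting to try option 3, a minimal traffic-shaping sketch with tc and HTB might look like the following. The device (eth0), the 100Mbit link rate, and the port number (21064) are all assumptions on my part, not a tested configuration; substitute whatever device and port your heartbeat/locking traffic actually uses on your cluster.

  # Root HTB qdisc; unclassified traffic falls into class 1:20 (bulk)
  tc qdisc add dev eth0 root handle 1: htb default 20

  # Parent class capped at the link rate
  tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit

  # 1:10 reserves 10mbit for cluster traffic; 1:20 carries everything else
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 100mbit prio 0
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 90mbit ceil 100mbit prio 1

  # Classify packets to/from the assumed lock-traffic port into the reserved class
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 21064 0xffff flowid 1:10
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 21064 0xffff flowid 1:10

Note this only shapes traffic leaving eth0 on the host it runs on; each node that pushes bulk data over the shared network would need the same setup.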