gluster NFS process takes 100% CPU

Hi,

I've set up a 2 brick replicate system, using bonded GigE.

eth0 - management
eth1 & eth2 - bonded 192.168.20.x
eth3 & eth4 - bonded 192.168.10.x

I created the replicate over the 192.168.10 interfaces.
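
For reference, a bond like the 192.168.10 one is typically defined along these lines (a minimal sketch assuming RHEL-style ifcfg files; the address and bonding mode are placeholders, adjust for your distro and switch):

  /etc/sysconfig/network-scripts/ifcfg-bond1:
    DEVICE=bond1
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=192.168.10.x        # this filer's address on the gluster subnet (placeholder)
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=balance-alb miimon=100"   # mode is an example, match your switch

  /etc/sysconfig/network-scripts/ifcfg-eth3 (and likewise eth4):
    DEVICE=eth3
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond1
    SLAVE=yes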

# gluster volume info
 
Volume Name: raid5
Type: Replicate
Volume ID: 02b24ff0-e55c-4f92-afa5-731fd52d0e1a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: filer-1:/gluster-exported/raid5/data
Brick2: filer-2:/gluster-exported/raid5/data
Options Reconfigured:
performance.nfs.stat-prefetch: on
performance.nfs.io-cache: on
performance.nfs.read-ahead: on
performance.nfs.io-threads: on
nfs.trusted-sync: on
performance.cache-size: 13417728
performance.io-thread-count: 64
performance.write-behind-window-size: 4MB
performance.io-cache: on
performance.read-ahead: on
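
For reference, a volume configured like the above would typically have been created and tuned with commands along these lines (brick hosts and paths taken from the volume info; option names as shown above, which may vary by gluster version):

  gluster volume create raid5 replica 2 transport tcp \
      filer-1:/gluster-exported/raid5/data \
      filer-2:/gluster-exported/raid5/data
  gluster volume start raid5
  gluster volume set raid5 performance.io-thread-count 64
  gluster volume set raid5 performance.cache-size 13417728
  gluster volume set raid5 performance.write-behind-window-size 4MB
  gluster volume set raid5 nfs.trusted-sync on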

I attached an NFS client across the 192.168.20 interface. NFS works fine, but under load the gluster NFS process climbs to 100% CPU and I lose connectivity.
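
The gluster NFS server runs as its own glusterfs process, so a quick check like this confirms which daemon is spinning (the PID is a placeholder):

  ps -ef | grep '[g]luster'     # the NFS server shows up as a separate glusterfs process
  top -p <nfs-glusterfs-pid>    # watch its CPU use while the load is applied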

My plan was to run replication and native gluster mounts across the 192.168.10 bond, with the NFS mount on 192.168.20 to keep NFS traffic off the gluster link.
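
Roughly, the intended split looks like this (assuming filer-1 resolves to the 192.168.10 address for the native client, and a hypothetical name filer-1-nfs resolves to the 192.168.20 address; gluster's built-in NFS server speaks NFSv3 over TCP):

  # native (FUSE) mount, traffic stays on the 192.168.10 bond
  mount -t glusterfs filer-1:/raid5 /mnt/raid5

  # NFS mount, traffic goes over the 192.168.20 interface
  mount -t nfs -o vers=3,proto=tcp,nolock filer-1-nfs:/raid5 /mnt/raid5-nfs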

Is this a supported configuration?  Does anyone else do this?

Gerald

-- 
Gerald Brandt
Majentis Technologies
gbr@xxxxxxxxxxxx
204-229-6595
www.majentis.com
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
