client-side cpu usage, performance issue

I experienced some embarrassingly bad performance today from a two-node 
AFR (replicate) setup used by two clients to store and share PHP sessions. 
(I ended up switching to NFS by the end of the day.)  The workload 
averaged a few thousand sessions, with a good mix of creates, writes, 
and reads at fairly high concurrency, driven by several thousand hits 
per minute.
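
That access pattern boils down to many small create/write/read cycles on 
~1 KB session files.  A minimal sketch that reproduces it against any 
mount (directory, payload, and counts here are illustrative; /tmp is just 
a local fallback) looks like:

```shell
#!/bin/sh
# Rough microbenchmark approximating a PHP session workload:
# create, write, and re-read many small session-sized files.
# Point $1 at the GlusterFS (or NFS) mount to compare backends.
DIR=${1:-/tmp/sess-bench}   # illustrative path, not the real mount
COUNT=${2:-200}
mkdir -p "$DIR"

start=$(date +%s)
i=0
while [ "$i" -lt "$COUNT" ]; do
    # ~1 KB payload, shaped like a small serialized PHP session
    printf 'user|s:4:"john";cart|a:0:{}%0999d' 0 > "$DIR/sess_$i"
    cat "$DIR/sess_$i" > /dev/null
    i=$((i + 1))
done
end=$(date +%s)

summary="$COUNT create+read cycles in $((end - start))s"
echo "$summary"
rm -rf "$DIR"
```

Running it once against each backend gives a crude but directly 
comparable number for this small-file pattern.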

I played with settings galore (threading, caching, write-behind, client 
io-threads) and got about nowhere.  The symptoms are very high I/O 
request latency and high client-side CPU usage, but little if any 
server-side CPU usage and no actual disk I/O to speak of.
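
To put a number on that latency, a quick probe such as this (the mount 
point is illustrative, with /tmp as a local fallback) times a single 
small fsync'd write, roughly the cost of persisting one session:

```shell
#!/bin/sh
# Latency probe: time one 1 KB synchronous write on the given mount.
# /mnt/php-sessions is an assumed mount point; pass the real one as $1.
MNT=${1:-/tmp}

t0=$(date +%s%N)
dd if=/dev/zero of="$MNT/latency-probe" bs=1k count=1 conv=fsync 2>/dev/null
t1=$(date +%s%N)

elapsed_ms=$(( (t1 - t0) / 1000000 ))
echo "1 KB fsync write took ${elapsed_ms} ms"
rm -f "$MNT/latency-probe"
```

Anything beyond a few milliseconds per write adds up fast at thousands 
of hits per minute.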

All four nodes are virtualized RHEL 5 instances connected over gigabit 
Ethernet.  The last-used configs are below.  Any ideas?

Server:

volume php-sessions
    type storage/posix
    option directory /var/glusterfs/php-sessions
end-volume
volume php-sessions-locks
    type features/locks
    option mandatory-locks on
    subvolumes php-sessions
end-volume
volume php-sessions-brick
    type performance/io-threads
    option thread-count 16 # default is 16
    subvolumes php-sessions-locks
end-volume
volume server
    type protocol/server
    option transport-type tcp
    option transport.socket.nodelay on
    option auth.addr.php-sessions-brick.allow 1.2.3.4,1.2.3.5
    option listen-port 6996
    subvolumes php-sessions-brick
end-volume

Client:

volume gluster0
    type protocol/client
    option transport-type tcp
    option remote-host gluster0
    option remote-port 6996
    option transport.socket.nodelay on
    option remote-subvolume php-sessions-brick
end-volume
volume gluster1
    type protocol/client
    option transport-type tcp
    option remote-host gluster1
    option remote-port 6996
    option transport.socket.nodelay on
    option remote-subvolume php-sessions-brick
end-volume
volume mirror-0
    type cluster/replicate
    subvolumes gluster0 gluster1
end-volume
volume writeback
    type performance/write-behind
    option window-size 1MB
    subvolumes mirror-0
end-volume
volume io-cache
    type performance/io-cache
    option cache-size 512MB
    subvolumes writeback
end-volume
volume iothreads
    type performance/io-threads
    option thread-count 4 # default is 16
    subvolumes io-cache
end-volume


TIA,
   John

-- 
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden at ivytech.edu
