Hi All:
Time to move from functionality (i.e., it works) to performance (now I
need to speed it up).
Basically, I have one server that can run both NFS and Gluster, and one
client that can access that server (via either NFS or GlusterFS). In a
simple set of tests (timing an rsync, then timing the rm of the copied
tree), I see the following:
For NFS: rsync time is 0m52.304s and rm time is 0m12.615s
For Glusterfs: rsync time is 1m29.312s and rm time is 0m33.901s
(these were the fastest Gluster times; the slowest rsync was 5m23.9s,
depending on which performance translators I used)
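In case it matters, the tests were of this general form (paths and
rsync flags below are placeholders, not the exact commands):

  time rsync -a /local/source/tree/ /mnt/glusterfs/tree/   # copy a tree onto the mount
  time rm -rf /mnt/glusterfs/tree                          # then remove it again

(and the same commands against the NFS mount point for the NFS numbers)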
So, how do I improve Gluster, which is running at roughly half the speed of NFS?
Here is my background info:
Gluster Server:
No messages in the log files (either gluster's log file or
/var/log/messages)
glusterfsd -V
glusterfs 1.3.1
spec file:
volume brick
  type storage/posix
  option directory /nfs/gluster
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  option auth.ip.brick.allow *
end-volume
machine's OS: Scientific Linux SL release 3.0.8 (SL)
kernel: 2.4.21-47.0.1.ELsmp #1 SMP Thu Oct 19 10:38:33 CDT 2006 x86_64 x86_64
x86_64 GNU/Linux
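One thing I have been wondering about, but have not tested, is whether
the server side would also benefit from io-threads stacked above the
posix brick. A sketch of what I mean (reusing the option names from my
client spec below):

volume iot
  type performance/io-threads
  option thread-count 4              # same option as in my client spec
  subvolumes brick
end-volume

# protocol/server would then export iot instead of brick:
#   subvolumes iot
#   option auth.ip.iot.allow *
# (and the client's "option remote-subvolume" would change to iot)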
Gluster Client:
No messages in the log files (either gluster's log file or
/var/log/messages)
glusterfs -V
glusterfs 1.3.1
fuse version is fuse-2.7.0-glfs3
spec file:
volume client
type protocol/client
option transport-type tcp/client # for TCP/IP transport
option remote-host 135.1.29.152 # IP address of the remote brick
option remote-subvolume brick # name of the remote volume
end-volume
#### Add readahead feature
volume readahead
type performance/read-ahead
option page-size 128KB # unit in bytes
subvolumes client
end-volume
### Add IO-Threads feature
volume iothreads
type performance/io-threads
option thread-count 4 # deault is 1
option cache-size 64MB
subvolumes readahead
end-volume
machine's OS: CentOS release 5 (Final)
kernel: 2.6.18-8.1.8.el5.028stab039.1.prj4 #1 SMP Mon Aug 13 16:31:27 CDT
2007 i686 athlon i386 GNU/Linux
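For reference, one more translator I have been looking at for the rsync
side is write-behind layered on top of io-threads. Just a sketch; I have
not verified the option name against 1.3.1, so treat it as an assumption:

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB        # assumed option name; size format as in read-ahead's page-size
  subvolumes iothreads
end-volume

The client would then use writebehind as its topmost volume.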
thanks,
Paul Jochum