Hi Jenn,

You may not have seen the earlier posts, but small files do not, as a general rule, do well on parallel file systems. There are numerous threads on this list about this, and the Gluster developers have devoted a good deal of energy to addressing it, but this is not a general-purpose file system: it is designed to be efficient with larger files. Throughput is (hopefully) limited by the available bandwidth, but per-request latency is also a factor in determining throughput. Others on this list can give you much better details.

James

DISCLAIMER: This e-mail, and any attachments thereto, is intended only for use by the addressee(s) named herein and may contain legally privileged and/or confidential information. If you are not the intended recipient of this e-mail, you are hereby notified that any dissemination, distribution or copying of this e-mail, and any attachments thereto, is strictly prohibited. If you have received this in error, please immediately notify me and permanently delete the original and any copy of any e-mail and any printout thereof. E-mail transmission cannot be guaranteed to be secure or error-free. The sender therefore does not accept liability for any errors or omissions in the contents of this message which arise as a result of e-mail transmission.

NOTICE REGARDING PRIVACY AND CONFIDENTIALITY
Knight Capital Group may, at its discretion, monitor and review the content of all e-mail communications.
http://www.knight.com

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Jenn Fountain
Sent: Tuesday, May 04, 2010 1:32 PM
To: gluster-users at gluster.org
Subject: Performance Issue with Webserver

We are running our webapp on a Gluster mount and are finding that performance is much slower than local disk. We expected it to be slower, but not this much slower, so I am looking to you for guidance on what to do.
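[Editor's note] The point above, that per-file latency rather than link bandwidth limits small-file throughput, can be sketched with a back-of-the-envelope model. The latency and bandwidth figures below are illustrative assumptions, not measurements from this thread:

```python
# Back-of-the-envelope model: when every file costs a fixed per-request
# latency (lookup + open + read round trips), tiny files are latency-bound
# no matter how fast the link is.  All figures are illustrative assumptions.

def effective_throughput(file_size_bytes, latency_s, bandwidth_bps):
    """Bytes/sec actually achieved when each file pays one fixed latency."""
    transfer_time = file_size_bytes / bandwidth_bps
    return file_size_bytes / (latency_s + transfer_time)

GIGABIT = 125_000_000   # 1 Gbit/s link, in bytes/sec (assumed)
LATENCY = 0.001         # assumed 1 ms of per-file protocol overhead

small = effective_throughput(17, LATENCY, GIGABIT)            # 17-byte page
large = effective_throughput(100 * 2**20, LATENCY, GIGABIT)   # 100 MiB file

print(f"17 B file:    {small:,.0f} B/s")   # latency-bound: ~17 KB/s
print(f"100 MiB file: {large:,.0f} B/s")   # bandwidth-bound: ~125 MB/s
```

With these assumed numbers, a 17-byte file achieves roughly 0.01% of link bandwidth while a 100 MiB file achieves nearly all of it, which is why a small-file web workload sees the latency, not the pipe.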
I.e., should we not run off the Gluster mount, change config settings, etc.? Here are some numbers on performance:

Gluster mount html:

Document Path:          /tmp/test.html
Document Length:        17 bytes

Concurrency Level:      1
Time taken for tests:   0.269 seconds
Complete requests:      1
Failed requests:        0
Write errors:           0
Total transferred:      302 bytes
HTML transferred:       17 bytes
Requests per second:    3.72 [#/sec] (mean)
Time per request:       268.621 [ms] (mean)
Time per request:       268.621 [ms] (mean, across all concurrent requests)
Transfer rate:          1.10 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       16   16   0.0     16      16
Processing:   253  253   0.0    253     253
Waiting:      253  253   0.0    253     253
Total:        269  269   0.0    269     269

Local disk html:

Document Path:          /tmp2/test.html
Document Length:        16 bytes

Concurrency Level:      1
Time taken for tests:   0.035 seconds
Complete requests:      1
Failed requests:        0
Write errors:           0
Total transferred:      301 bytes
HTML transferred:       16 bytes
Requests per second:    28.24 [#/sec] (mean)
Time per request:       35.409 [ms] (mean)
Time per request:       35.409 [ms] (mean, across all concurrent requests)
Transfer rate:          8.30 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       20   20   0.0     20      20
Processing:    16   16   0.0     16      16
Waiting:       16   16   0.0     16      16
Total:         35   35   0.0     35      35

Here is my client config:

## file auto generated by ./glusterfs-volgen (mount.vol)
# Cmd line:
# $ ./glusterfs-volgen -n live-glusterfs-001 -r 1 xxx:/.glusterfs/sb01 xxx:/.glusterfs/sb01 xxx:/.glusterfs/sb01

# RAID 1
# TRANSPORT-TYPE tcp
volume xxx-1
  type protocol/client
  option transport-type tcp
  option remote-host xxx
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume xxx-1
  type protocol/client
  option transport-type tcp
  option remote-host xxx
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume xxx-1
  type protocol/client
  option transport-type tcp
  option remote-host xxx
  option transport.socket.nodelay on
  option transport.remote-port 6996
  option remote-subvolume brick1
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes xxx-1 xxx-1 xxx-1
end-volume

#volume writebehind
#  type performance/write-behind
#  option cache-size 4MB
#  subvolumes mirror-0
#end-volume

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes mirror-0
  #subvolumes writebehind
end-volume

volume iocache
  type performance/io-cache
  option cache-size 1GB
  option cache-timeout 1
  subvolumes readahead
end-volume

volume quickread
  type performance/quick-read
  option cache-timeout 1
  option max-file-size 64kB
  subvolumes iocache
end-volume

volume statprefetch
  type performance/stat-prefetch
  subvolumes quickread
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 16
  subvolumes statprefetch
end-volume

If you need any other information, let me know. Thank you in advance.

-Jenn
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
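[Editor's note] The config above ships with the write-behind translator commented out, and the readahead volume carries a commented `#subvolumes writebehind` line showing where it was meant to plug in. A sketch of re-enabling it, using only the names and options already present in the commented block, would look like the fragment below. This is untested and illustrative; note also that write-behind mainly helps write-heavy workloads, so it is unlikely to change the read benchmark shown in this thread:

```
volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes mirror-0
end-volume

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes writebehind
end-volume
```

For small-file reads, the translators that matter most in this stack are quick-read and io-cache, which are already enabled above.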