On Mon, Oct 15, 2007 at 02:09:30PM +0530, Anand Avati wrote:
> Copying gluster-devel@
>
> From: Brian Taber <btaber@xxxxxxxxxxxxx>
> Date: Oct 15, 2007 3:59 AM
> Subject: Re: Performance
> To: Anand Avati <avati@xxxxxxxxxxxxx>
>
> Now there's a beautiful thing....
>
> NFS Write:
> time dd if=/dev/zero bs=65536 count=15625 of=/shared/1Gb.file
> 1024000000 bytes (1.0 GB) copied, 62.9818 seconds, 16.3 MB/s
>
> Gluster Write:
> time dd if=/dev/zero bs=65536 count=15625 of=/mnt/glusterfs/1Gb.file
> 1024000000 bytes (1.0 GB) copied, 41.74 seconds, 24.5 MB/s
>
> NFS Read:
> time dd if=/shared/1Gb.file bs=65536 count=15625 of=/dev/zero
> 1024000000 bytes (1.0 GB) copied, 44.4734 seconds, 23.0 MB/s
>
> Gluster Read:
> time dd if=/mnt/glusterfs/1Gb.file bs=65536 count=15625 of=/dev/zero
> 1024000000 bytes (1.0 GB) copied, 42.1526 seconds, 24.3 MB/s
>
> This test was performed within a VMware virtual machine, so network speed
> isn't as good. I tried it from outside, on a 1000 Mb/s network:
>
> NFS Write:
> time dd if=/dev/zero bs=65536 count=15625 of=/shared/1Gb.file
> 1024000000 bytes (1.0 GB) copied, 27.619 seconds, 37.1 MB/s
>
> Gluster Write:
> time dd if=/dev/zero bs=65536 count=15625 of=/mnt/glusterfs/1Gb.file
> 1024000000 bytes (1.0 GB) copied, 11.1978 seconds, 91.4 MB/s
>
> NFS Read:
> time dd if=/shared/1Gb.file bs=65536 count=15625 of=/dev/zero
> 1024000000 bytes (1.0 GB) copied, 43.5323 seconds, 23.5 MB/s
>
> Gluster Read:
> time dd if=/mnt/glusterfs/1Gb.file bs=65536 count=15625 of=/dev/zero
> 1024000000 bytes (1.0 GB) copied, 30.6922 seconds, 33.4 MB/s

What's not so beautiful is that the first dd run (always NFS) includes
staging the file from the input media into the buffer cache (/dev/zero
means filling memory with zero bytes, which is certainly faster than
reading from a physical disk).
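To take the buffer cache out of the read numbers, one can flush dirty pages and drop the page cache before each timed read. A minimal sketch (my own, not from the thread; assumes Linux with /proc/sys/vm/drop_caches, which needs root — the file name and 8 MiB size are illustrative, a real test should use several GB, more than RAM):

```shell
# Build a test file, flush it to the media, then read it cold.
FILE=/tmp/bench.file
dd if=/dev/zero of="$FILE" bs=1M count=8 2>/dev/null
sync                                   # flush dirty pages to the media

# Without this, the second and later reads are served straight from RAM.
# Requires root, so it is skipped when run unprivileged.
if [ "$(id -u)" = 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi

# Keep only dd's summary line (bytes copied, elapsed time, throughput).
result=$(dd if="$FILE" bs=64k of=/dev/null 2>&1 | tail -n 1)
echo "$result"
rm -f "$FILE"
```

Run unprivileged this still measures, it just measures the cache; the drop_caches step is what makes the read hit the disk (or the network filesystem) again.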
I would have repeated the write tests to see whether ordering is important:

- nfs write
- glusterfs write
- nfs write again
- glusterfs write again

Buffers are often able to fool the benchmarker.

Also, some information about your machine is missing - but I suppose 1 GB
would easily fit into main memory. What about *several* GBs, to
effectively trash the page cache?

Cheers,
 Steffen, always doubtful when it comes to benchmarks

--
Steffen Grunewald * MPI Grav.Phys.(AEI) * Am Mühlenberg 1, D-14476 Potsdam
Cluster Admin * http://pandora.aei.mpg.de/merlin/ * http://www.aei.mpg.de/
* e-mail: steffen.grunewald(*)aei.mpg.de * +49-331-567-{fon:7233,fax:7298}
No Word/PPT mails - http://www.gnu.org/philosophy/no-word-attachments.html
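P.S. The interleaved ordering above is easy to script. A sketch (my own; the two mktemp directories stand in for the /shared and /mnt/glusterfs mounts from the thread, and COUNT is tiny here so it runs anywhere - the real test used count=15625 with bs=65536, i.e. 1 GB, and should use several GB to exceed RAM):

```shell
# Two write passes over each filesystem, back to back; comparing pass 1
# against pass 2 shows how much the page cache contributes.
DIR_A=$(mktemp -d)   # stand-in for the NFS mount (/shared)
DIR_B=$(mktemp -d)   # stand-in for the GlusterFS mount (/mnt/glusterfs)
COUNT=16             # 16 x 64 KiB = 1 MiB here; use enough to exceed RAM
runs=0
for pass in 1 2; do
    for dir in "$DIR_A" "$DIR_B"; do
        printf 'pass %s, %s: ' "$pass" "$dir"
        # Keep only dd's summary line (bytes, elapsed time, throughput).
        dd if=/dev/zero of="$dir/bench.file" bs=65536 count="$COUNT" 2>&1 | tail -n 1
        runs=$((runs + 1))
    done
done
size=$(wc -c < "$DIR_A/bench.file")
rm -rf "$DIR_A" "$DIR_B"
```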