Re: Poor Gluster performance

----- Original Message -----
> From: "Lars Hanke" <debian@xxxxxxxxx>
> To: "Ben Turner" <bturner@xxxxxxxxxx>
> Cc: gluster-users@xxxxxxxxxxx
> Sent: Wednesday, February 18, 2015 5:09:19 PM
> Subject: Re:  Poor Gluster performance
> 
> Am 18.02.2015 um 22:05 schrieb Ben Turner:
> > ----- Original Message -----
> >> From: "Lars Hanke" <debian@xxxxxxxxx>
> >> To: gluster-users@xxxxxxxxxxx
> >> Sent: Wednesday, February 18, 2015 3:01:54 PM
> >> Subject:  Poor Gluster performance
> >>
> >> I set up a distributed, replicated volume consisting of just 2 bricks on
> >> two physical nodes. The nodes are peered using a dedicated GB ethernet
> >> and can be accessed from the clients using a separate GB ethernet NIC.
> >>
> >> Doing a simple dd performance test I see about 11 MB/s for read and
> >> write. Running a local setup, i.e. both bricks on the same machine and
> >> a local mount, I saw as much as 500 MB/s. So the network should be the
> >> limiting factor. But using NFS or CIFS on the same network I see 110 MB/s.
> >>
> >> Is gluster 10 times slower than NFS?
> >
> > Something is going on there.  On my gigabit setups I see 100-120 MB / sec
> > writes for pure distribute and about 45-55 MB / sec with replica 2.  What
> > block size are you using?  I could see that if you were writing something
> > like 4k or under but 64k and up you should be getting about what I said.
> > Can you tell me more about your test?
> 
> Block size is 50M:
> 
> root@gladsheim:/# mount -t glusterfs node2:/test ~/mnt
> root@gladsheim:/# dd if=/dev/zero of=~/mnt/testfile.null bs=50M count=10
> 10+0 records in
> 10+0 records out
> 524288000 bytes (524 MB) copied, 46.6079 s, 11.2 MB/s
> root@gladsheim:/# dd if=~/mnt/testfile.null of=/dev/null bs=50M count=10
> 10+0 records in
> 10+0 records out
> 524288000 bytes (524 MB) copied, 45.7487 s, 11.5 MB/s
> 
> It doesn't depend on whether I use node1 or node2 for the mount.

Here is how I usually run:

[root@gqac022 gluster-mount]# time `dd if=/dev/zero of=/gluster-mount/test.txt bs=1024k count=1000; sync`
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 9.12639 s, 115 MB/s
real	0m9.205s
user	0m0.000s
sys	0m0.670s
    
[root@gqac022 gluster-mount]# sync; echo 3 > /proc/sys/vm/drop_caches 

[root@gqac022 gluster-mount]# dd if=./test.txt of=/dev/null bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 9.04464 s, 116 MB/s

And with your commands:

[root@gqac022 gluster-mount]# dd if=/dev/zero of=/gluster-mount/testfile.null bs=50M count=10
10+0 records in
10+0 records out
524288000 bytes (524 MB) copied, 5.00876 s, 105 MB/s

[root@gqac022 gluster-mount]# sync; echo 3 > /proc/sys/vm/drop_caches 

[root@gqac022 gluster-mount]# dd if=./testfile.null of=/dev/null bs=1024k count=1000
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 4.51992 s, 116 MB/s

Normally to troubleshoot these issues I break the storage stack into its individual pieces and test each one.  Try running on the bricks outside gluster and see what you get.  What tuning are you using?  Is anything nonstandard?  What are the disks?
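Something like this is a starting point for the brick-level test.  The brick path here is a placeholder (it defaults to /tmp/brick-test so it runs anywhere); point BRICK at your real brick directory on one of the nodes:

```shell
#!/bin/sh
# Sketch: test the raw brick filesystem, bypassing gluster entirely.
# BRICK is a placeholder -- set it to your actual brick directory.
BRICK=${BRICK:-/tmp/brick-test}
mkdir -p "$BRICK"

# 1. Raw write.  conv=fsync makes dd flush to disk so the page cache
#    doesn't inflate the number.
dd if=/dev/zero of="$BRICK/ddtest.bin" bs=1024k count=100 conv=fsync

# 2. Drop caches (needs root) so the read actually hits the disk.
sync
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi

# 3. Raw read back from the brick.
dd if="$BRICK/ddtest.bin" of=/dev/null bs=1024k count=100

rm -f "$BRICK/ddtest.bin"
```

If the bricks test fine, check the raw network between the nodes (e.g. with iperf) before blaming gluster.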

-b

> 
> BTW: does the halving of bandwidth in replicated mode mean that the
> client writes to both nodes, i.e. doubles the network load on the
> client-side network? I had hoped that replication would run over the
> server-side network.

Correct, replication is done client side: the client writes every byte to both bricks, so write traffic on the client's NIC doubles.
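To put rough numbers on that: with client-side replica 2, each byte crosses the client's gigabit link twice, so writes top out around half of usable wire speed, which lines up with the 45-55 MB/s I quoted above.  A back-of-the-envelope sketch (the ~118 MB/s usable payload figure is an assumption; real numbers vary with MTU and TCP tuning):

```python
# Rough throughput ceilings for a replica 2 gluster volume over one
# gigabit NIC.  118 MB/s usable payload is an assumed figure
# (1 Gb/s minus protocol overhead), not a measured value.
usable_gbe_mb_s = 118.0
replica_count = 2

# Writes: the client sends the data once per brick.
write_ceiling = usable_gbe_mb_s / replica_count
print(f"replica {replica_count} write ceiling: ~{write_ceiling:.0f} MB/s")

# Reads: served from a single brick, so not halved.
read_ceiling = usable_gbe_mb_s
print(f"read ceiling: ~{read_ceiling:.0f} MB/s")
```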
 
> Regards,
>   - lars.
> 
> 
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



