Slow write performance for small block sizes

Hello!

I'm experiencing very slow write performance with Gluster 3.7.8 on CentOS 7.2 when using small block sizes with dd.

I run 2 Gluster servers with 2 bricks each (each brick on a separate SSD) and 2 Gluster clients (which will run virtual machines). It's a completely fresh setup, so there is no load on the systems. The systems are connected via a 1 GBit network. iperf3 shows 940 MBit/s throughput and ping shows less than 1 ms latency, so the network should be fine.

The volume is a stripe 2 replica 2 volume. I've only set the volume options for ping-timeout and owner-uid/gid (as required by qemu), roughly as sketched below. The volume is mounted via the FUSE client.
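
For completeness, the volume was set up more or less like this (the volume name, brick paths, ping-timeout value, and uid/gid are placeholders from memory, not the exact values):

$ gluster volume create vmstore stripe 2 replica 2 \
      storage1:/bricks/ssd1/brick storage2:/bricks/ssd1/brick \
      storage1:/bricks/ssd2/brick storage2:/bricks/ssd2/brick
$ gluster volume set vmstore network.ping-timeout 10
$ gluster volume set vmstore storage.owner-uid 107    # qemu uid on these systems
$ gluster volume set vmstore storage.owner-gid 107
$ gluster volume start vmstore
$ mount -t glusterfs storage1:/vmstore /glustermount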

Here are my stats as seen from a client (370 MB test file). 'storage1' is one of the GlusterFS servers, which also has a FUSE mount for these tests:


# read (I always ran 'echo 3 > /proc/sys/vm/drop_caches' before each test)

$ dd if=/glustermount/testfile of=/localdisk/testfile bs=4K
69.7 MB/s

$ dd if=/glustermount/testfile of=/localdisk/testfile bs=8K
78.8 MB/s

$ scp storage1:/glustermount/testfile /localdisk/testfile
83.3 MB/s

$ ssh storage1 'dd if=/glustermount/testfile bs=8K' | dd of=/localdisk/testfile bs=8K
93.3 MB/s


# write

$ dd if=/localdisk/testfile of=/glustermount/testfile bs=4K
3.8 MB/s

$ dd if=/localdisk/testfile of=/glustermount/testfile bs=8K
6.9 MB/s

$ dd if=/localdisk/testfile of=/glustermount/testfile bs=8M
58.8 MB/s

$ dd if=/localdisk/testfile of=/glustermount/testfile bs=64M
58.8 MB/s

$ scp /localdisk/testfile storage1:/localdisk/testfile
94 MB/s

$ scp /localdisk/testfile storage1:/glustermount/testfile
980 KB/s

# run immediately after the previous one. What happened?!
$ scp /localdisk/testfile storage1:/glustermount/testfile  
75 MB/s

$ dd if=/localdisk/testfile bs=8K | ssh storage1 'dd of=/localdisk/testfile bs=8K' 
96.6 MB/s

$ dd if=/localdisk/testfile bs=8K | ssh storage1 'dd of=/glustermount/testfile bs=8K' 
225 KB/s

Is this really as good as it gets with Gluster? I've tried the following setups to improve write speed for small block sizes (see the command sketch after the list):

- tuning stripe-size, with both smaller (16 KB) and larger (4 MB) values
- a 'replica 2'-only setup (i.e. no striping)
- write-behind on/off
- a larger write-behind-window-size
- a larger cache-size
- more io-threads
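
Roughly the commands I used (the volume name and the exact values shown are just examples; I tried several values for each option):

$ gluster volume set vmstore cluster.stripe-block-size 16KB   # also tried 4MB
$ gluster volume set vmstore performance.write-behind off     # and back on
$ gluster volume set vmstore performance.write-behind-window-size 4MB
$ gluster volume set vmstore performance.cache-size 1GB
$ gluster volume set vmstore performance.io-thread-count 32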

None of these helped or even changed the numbers much. I've also tried a few sysctl tweaks (along the lines sketched below), but still saw little change.
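
The sysctl changes were the usual TCP buffer tuning, something like this (the values are examples, not the exact ones I used):

$ sysctl -w net.core.rmem_max=16777216
$ sysctl -w net.core.wmem_max=16777216
$ sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
$ sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"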

So I'd like to know:

- is this normal speed/behaviour?
- what are the reasons for this?
- can anything be done to improve performance? 
- is running VMs on top of gluster a supported/intended use case at all?

Thanks a lot,
Andreas
