Re: Poor gluster performance on large files.

>Can you please turn OFF client-io-threads? We have seen performance degradation with io-threads ON for sequential and random reads/writes.
May I ask in which version this degradation happened? I compared 3.10 vs 3.12 performance a while ago and saw a 2-3x performance loss with 3.12. Is that because of client-io-threads?

On Mon, Oct 30, 2017 at 1:44 PM, Karan Sandha <ksandha@xxxxxxxxxx> wrote:
Hi Brandon,

Can you please turn OFF client-io-threads? We have seen performance degradation with io-threads ON for sequential and random reads/writes. Server event threads default to 1 and client event threads default to 2.
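For reference, one way to turn it off and check the event-thread defaults (gvol0 below is just a placeholder for your volume name):

# "gvol0" is a placeholder volume name; substitute your own.
gluster volume set gvol0 performance.client-io-threads off

# Check the current event-thread values (defaults: server 1, client 2):
gluster volume get gvol0 server.event-threads
gluster volume get gvol0 client.event-threads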

Thanks & Regards

On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon@xxxxxxxxxxxxxxxx> wrote:
Hi gluster users,
I've spent several months trying to get any kind of high performance out of gluster.  The current XFS/samba array is used for video editing, and 300-400MB/s for at least 4 clients is the minimum requirement (currently a single Windows client gets at least 700/700 over samba, peaking at 950 at times using the Blackmagic speed test).  Gluster has been getting me as low as 200MB/s when the server itself can do well over 1000MB/s.  I have really been counting on / touting Gluster as being the way of the future for us.  However, I can't justify cutting our performance to a mere 13% of non-gluster speeds.  I've started to reach a give-up point and really need some help/hope; otherwise I'll just have to migrate the data from server 1 to server 2, just as I've been doing for the last decade. :(
 
If anyone can please help me understand where I might be going wrong it would be absolutely wonderful!
 
Server 1:
Single E5-1620 v2
Ubuntu 14.04
glusterfs 3.10.5
16GB Ram
24-drive array on LSI RAID
Sustained >1.5GB/s to XFS (77TB)
 
Server 2:
Single E5-2620 v3
Ubuntu 16.04
glusterfs 3.10.5
32GB Ram
36-drive array on LSI RAID
Sustained >2.5GB/s to XFS (164TB)
 
Speed tests are done locally with a single thread (dd) or 4 threads (iozone), using my standard 64k IO size, against 20G or 5G files (20G for local drives, 5G for gluster).
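For reference, the invocations look roughly like this (mount path and file names are placeholders):

# Single-threaded sequential write then read, 64K blocks, 5G file on the gluster mount
dd if=/dev/zero of=/mnt/gluster/testfile bs=64k count=81920 conv=fdatasync
dd if=/mnt/gluster/testfile of=/dev/null bs=64k

# 4-thread sequential write (-i 0) and read (-i 1), 5G per file, 64K records
iozone -i 0 -i 1 -t 4 -s 5g -r 64k -F /mnt/gluster/f1 /mnt/gluster/f2 /mnt/gluster/f3 /mnt/gluster/f4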
 
Servers have Intel X520-DA2 dual-port 10Gbit NICs bonded together with an 802.3ad LAG to a Quanta LB6-M switch.  Iperf throughput numbers are >9000Mbit/s for a single stream.
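The iperf figure is from a plain single-stream TCP test along these lines (hostnames are placeholders):

# On server2:
iperf -s
# On server1, one TCP stream for 30 seconds:
iperf -c server2 -t 30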
 
Here is my current gluster performance:
 
Single brick on server 1 (server 2 was similar):
Fuse mount:
1000MB/s write
325MB/s read
 
Distributed only servers 1+2:
Fuse mount on server 1:
900MB/s write (iozone, 4 streams)
320MB/s read (iozone, 4 streams)
Single-stream read: 91MB/s @64K, 141MB/s @1M
Simultaneous iozone on both servers (4 streams, 5G files):
Server 1: 1200MB/s write, 200MB/s read
Server 2: 950MB/s write, 310MB/s read
 
I did some earlier single-brick tests with samba VFS and 3 workstations, and got up to 750MB/s write and 800MB/s read aggregate, but that's still not good.
 
These are the only volume-setting tweaks I have made, after much single-box testing to find what actually made a difference (roughly equivalent set commands are sketched after the list):
performance.cache-size 1GB   (Default 23MB)
performance.client-io-threads on
performance.io-thread-count 64
performance.read-ahead-page-count       16
performance.stat-prefetch on
server.event-threads 8 (default?)
client.event-threads 8
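(As commands, with gvol0 standing in for the actual volume name:)

gluster volume set gvol0 performance.cache-size 1GB
gluster volume set gvol0 performance.client-io-threads on
gluster volume set gvol0 performance.io-thread-count 64
gluster volume set gvol0 performance.read-ahead-page-count 16
gluster volume set gvol0 performance.stat-prefetch on
gluster volume set gvol0 server.event-threads 8
gluster volume set gvol0 client.event-threads 8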
 
Any help given is appreciated!



--
KARAN SANDHA
QUALITY ENGINEER
Red Hat Bangalore
ksandha@xxxxxxxxxx    M: 9888009555    IM: Karan on @irc


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
