Re: Event threads effect

Just a quick follow-up - I must have somehow missed the updated thread count earlier, because fio (client side) and the brick do show the appropriate number of threads if I raise the count above 16. It seems a minimal thread count is always spawned, at least on the server, which probably confused me. Unfortunately, the connection count to the brick is still the same (2).

However, the read performance is still the same. strace on the fio threads shows reads (same on the server side), so the workload is somehow distributed between them, but performance is very different from running multiple jobs (numjobs > 1). Does anyone know how fio, or rather its gfapi ioengine, uses multiple threads?

I get 200MB/s on the client vs. 1.5GB/s when the same test is run locally on the server; so far this seems to be caused by network latency, but could the client be advised to open multiple connections (maybe one per thread)? netstat reports 2 connections per fio process, and raising numjobs results in more connections and maxes out brick utilization.

My goal is to max out the IO performance of QEMU/KVM guests. So far only approximately 200MB/s is achievable, and only for certain block sizes.
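For context, the guests access the volume over gfapi as well; a minimal sketch of how a guest disk is attached that way (the image name and format here are hypothetical placeholders, only the host and volume come from the fio config below):

```shell
# Hypothetical example: attach a guest disk directly over gfapi using
# QEMU's gluster:// URL scheme (gluster://host[:port]/volume/image).
# Requires QEMU built with GlusterFS support; vm-disk.img is a placeholder.
qemu-system-x86_64 \
  -drive file=gluster://gfs-3.san/test3-vol/vm-disk.img,format=raw,if=virtio,cache=none
```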


-ps

On Mon, Nov 7, 2016 at 4:47 PM, Pavel Szalbot <pavel.szalbot@xxxxxxxxx> wrote:
Hi everybody, 

I am trying to benchmark my cluster with fio's gfapi ioengine and evaluate the effect of various volume options on performance. So far I have observed the following:

1) *thread* options do not affect performance or thread count - htop always shows 2 threads on the client, and there are always 16 glusterfsd threads on the server
2) running the same test locally (on the brick) shows up to 5x better throughput than over 10GbE (MTU 9000, verified with iperf, pinged with DF set, no drops on switches or cards, tcpdumped to rule out network issues)
3) performance.cache-size value has no effect on performance (32MB or 1GB)

I would expect raising the client thread count to lead to more TCP connections, higher disk utilization, and higher throughput. If I run multiple fio jobs (numjobs=8), I am able to saturate the network link.
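For completeness, this is roughly the command-line equivalent of the numjobs=8 run that saturates the link (same parameters as the job file below, just expressed as CLI flags):

```shell
# Sketch of the multi-job run: 8 parallel fio jobs, each a separate gfapi
# client opening its own connections to the brick. Parameters mirror the
# job file below.
fio --name=gfapi-test --ioengine=gfapi --volume=test3-vol --brick=gfs-3.san \
    --create_on_open=1 --direct=1 --bs=256k --rw=read --iodepth=1 \
    --numjobs=8 --size=8192m --loops=1 --refill_buffers=1
```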

Is this normal, or am I missing something really badly?

fio config:

[global]
name=gfapi test
create_on_open=1
volume=test3-vol
brick=gfs-3.san
ioengine=gfapi
direct=1
bs=256k
rw=read
iodepth=1
numjobs=1
size=8192m
loops=1
refill_buffers=1
[job1]

reconfigured volume options:
performance.client-io-threads: on
performance.cache-size: 1GB
performance.read-ahead: off
server.outstanding-rpc-limit: 128
performance.io-thread-count: 16
server.event-threads: 16
client.event-threads: 16
nfs.disable: on
transport.address-family: inet
performance.readdir-ahead: on
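(For anyone reproducing this, each of the options above can be applied with `gluster volume set`, e.g.:)

```shell
# Apply the tuning options listed above to the test volume, one at a time.
gluster volume set test3-vol performance.client-io-threads on
gluster volume set test3-vol performance.io-thread-count 16
gluster volume set test3-vol server.event-threads 16
gluster volume set test3-vol client.event-threads 16
```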

-ps

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
