Event threads effect

Hi everybody, 

I am trying to benchmark my cluster with fio's gfapi ioengine and evaluate the effect of various volume options on performance. So far I have observed the following:

1) the *thread* options have no effect on performance or thread count - htop always shows 2 threads on the client, and there are always 16 glusterfsd threads on the server
2) running the same test locally (on the brick) shows up to 5x higher throughput than over 10GbE (MTU 9000, verified with iperf, pinged with DF set, no drops on switches or NICs, tcpdumped to rule out network issues)
3) the performance.cache-size value has no effect on performance (32MB vs 1GB)
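The per-process thread counts in 1) can be double-checked outside htop; a minimal sketch using plain ps (run against the glusterfs PID on the client, or glusterfsd on the server - $$ here is just a stand-in):

```shell
# nlwp = number of threads in the process; substitute the
# glusterfs / glusterfsd PID for $$ (own shell used as a stand-in)
ps -o nlwp= -p $$
```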

I would expect that raising the client thread count leads to more TCP connections, higher disk utilization, and higher throughput. If I instead run multiple fio jobs (numjobs=8), I am able to saturate the network link.
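For reference, the saturating numjobs=8 case differs from the job file below only in the job count; a sketch of the change (all other parameters as in the config that follows):

```
# variant that saturates the 10GbE link: 8 parallel gfapi jobs,
# each with its own connection, instead of a single sequential reader
numjobs=8
```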

Is this normal, or am I missing something really badly?

fio config:

[global]
name=gfapi test
create_on_open=1
volume=test3-vol
brick=gfs-3.san
ioengine=gfapi
direct=1
bs=256k
rw=read
iodepth=1
numjobs=1
size=8192m
loops=1
refill_buffers=1
[job1]

reconfigured volume options:
performance.client-io-threads: on
performance.cache-size: 1GB
performance.read-ahead: off
server.outstanding-rpc-limit: 128
performance.io-thread-count: 16
server.event-threads: 16
client.event-threads: 16
nfs.disable: on
transport.address-family: inet
performance.readdir-ahead: on
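For anyone reproducing this, the options above would have been applied with the standard gluster CLI; a sketch (volume name taken from the fio config, values from the list above):

```
# raise network event threads on both sides of the connection
gluster volume set test3-vol client.event-threads 16
gluster volume set test3-vol server.event-threads 16
# raise the io-thread pool on the bricks
gluster volume set test3-vol performance.io-thread-count 16
# check which values are actually in effect
gluster volume get test3-vol all | grep -E 'event-threads|io-thread-count'
```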

-ps
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
