Re: I use fio with randwrite I/O to a ceph image; it runs at 2000 IOPS the first time and 6000 IOPS the second time


 



It's probably the rbd cache taking effect. If you know all your clients are
well behaved, you could set "rbd cache writethrough until flush" to false
instead of the default true, but understand the ramifications. You could
also just do it during benchmarking.
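
If you do flip it for a benchmark run, a minimal client-side ceph.conf
sketch (assuming your fio client reads this file; it takes effect when the
client process is restarted) would look something like:

[client]
rbd cache = true
# benchmarking only: writeback behaviour without waiting for the first flush
rbd cache writethrough until flush = false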

Warren Wang



From:  ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of
"m13913886148@xxxxxxxxx" <m13913886148@xxxxxxxxx>
Reply-To:  "m13913886148@xxxxxxxxx" <m13913886148@xxxxxxxxx>
Date:  Monday, August 1, 2016 at 11:30 PM
To:  Ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject:   I use fio with randwrite I/O to a ceph image; it runs at 2000
IOPS the first time and 6000 IOPS the second time



        In version 10.2.2, fio initially runs at 2000 IOPS; if I interrupt
fio and then run it again, it runs at 6000 IOPS.

        But in version 0.94, fio always runs at 6000 IOPS, whether or not
the fio run is repeated.

        What is the difference between these two versions that explains this?
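
        A fio job along these lines reproduces the kind of workload
described here (a sketch only: the pool name, image name, block size and
queue depth are assumptions, and the test could equally have been run
against a mapped krbd device instead of fio's rbd ioengine):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimage
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based

[rbd-randwrite]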


        My configuration is as follows:

        I have three nodes with two OSDs per node, six OSDs in total. All
OSDs are SSD-backed.


        Here is the [osd] section of my ceph.conf:

[osd]

osd mkfs type = xfs
osd data = /data/$name
osd journal size = 80000
filestore xattr use omap = true
filestore min sync interval = 10
filestore max sync interval = 15
filestore queue max ops = 25000
filestore queue max bytes = 10485760
filestore queue committing max ops = 5000
filestore queue committing max bytes = 10485760000

journal max write bytes = 1073714824
journal max write entries = 10000
journal queue max ops = 50000
journal queue max bytes = 10485760000

osd max write size = 512
osd client message size cap = 2147483648
osd deep scrub stride = 131072
osd op threads = 8
osd disk threads = 4
osd map cache size = 1024
osd map cache bl size = 128
osd mount options xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
osd recovery op priority = 4
osd recovery max active = 10
osd max backfills = 4
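
To check which of these values a running OSD actually picked up, the admin
socket on the OSD node can be queried (a sketch; assumes osd.0 and the
default admin socket path):

ceph daemon osd.0 config get filestore_queue_max_ops
ceph daemon osd.0 config show | grep -E 'filestore|journal'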
        

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



