fio randwrite to a Ceph RBD image: 2000 IOPS on the first run, 6000 IOPS on the second

In version 10.2.2, fio initially runs at about 2000 IOPS; if I interrupt fio and run it again, it reaches about 6000 IOPS.
In version 0.94, fio always runs at about 6000 IOPS, whether or not I repeat the run.

What is different between these two versions that could explain this?

My setup is as follows: three nodes with two OSDs per node, six OSDs in total, and all OSDs are SSDs.
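The fio job I run is roughly of this form (the client name, pool, image name, block size, and queue depth below are illustrative and may differ from the exact values used):

# randwrite test against an RBD image via fio's rbd engine (illustrative parameters)
fio --name=rbd-randwrite --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=test-image --rw=randwrite --bs=4k --iodepth=32 \
    --runtime=60 --time_based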

Here is the [osd] section of my ceph.conf:
[osd]
osd mkfs type = xfs
osd data =
osd_journal_size = 80000
filestore xattr use omap = true
filestore min sync interval = 10
filestore max sync interval = 15
filestore queue max ops = 25000
filestore queue max bytes = 10485760
filestore queue committing max ops = 5000
filestore queue committing max bytes = 10485760000

journal max write bytes = 1073714824
journal max write entries = 10000
journal queue max ops = 50000
journal queue max bytes = 10485760000

osd max write size = 512
osd client message size cap = 2147483648
osd deep scrub stride = 131072
osd op threads = 8
osd disk threads = 4
osd map cache size = 1024
osd map cache bl size = 128
osd mount options xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
osd recovery op priority = 4
osd recovery max active = 10
osd max backfills = 4
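To rule out configuration differences between the two versions, the effective settings of a running OSD can be dumped from the admin socket on each cluster and compared; osd.0, the socket path (default location assumed), and the output file names below are just placeholders:

# On a node of the 0.94 cluster:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show > hammer-osd0.txt
# On a node of the 10.2.2 cluster:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show > jewel-osd0.txt
# Compare the effective filestore/journal settings between the two dumps:
diff hammer-osd0.txt jewel-osd0.txt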
        
