How to use the ceph rados ioengine? How to configure rados.job for parallel reading and writing?

Hi List,

I am a beginner, and I am currently testing the performance of ceph rados. I
am using the rados ioengine, but after I ran the example job file from GitHub,
the ceph pool was still empty, as shown below:

data:
    pools:   1 pools, 100 pgs
    objects: 0 objects, 0 bytes
    usage:   72611 MB used, 349 TB / 349 TB avail
    pgs:     100 active+clean

$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    349T      349T       72611M          0.02
POOLS:
    NAME       ID     USED     %USED     MAX AVAIL     OBJECTS
    ecpool     1         0         0          221T           0

And my fio configuration file rados.job is as follows:

[global]
ioengine=rados
clientname=admin
pool=ecpool
busy_poll=0
rw=write
bs=4M

[job1]
size=100G
io_size=100G
iodepth=2048

I want to know whether the result that fio reports can be considered
valid when no objects have been written to the pool.

I would also like to ask what I need to do to test parallel reading and
writing with rados. Should I create N pools in ceph and N corresponding
jobs in the fio configuration file, or is one pool enough? A sketch of
what I have in mind follows.
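
For example, would a job file roughly like the one below be the right
approach? This is only a sketch of what I imagine; I am assuming that two
job sections against the same pool, each with numjobs, is how parallel
readers and writers are expressed, and the [writers]/[readers] section
names are just placeholders I made up.

[global]
ioengine=rados
clientname=admin
pool=ecpool
busy_poll=0
bs=4M

; four writer processes running in parallel
[writers]
rw=write
size=25G
numjobs=4

; four reader processes running in parallel
[readers]
rw=read
size=25G
numjobs=4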

Thanks.

Regards,
Shaonan Ma


