Thank you for your attention.

Fio for KRBD:

[seq-write]
description="seq-write"
direct=1
ioengine=libaio
filename=/dev/rbd0
numjobs=1
iodepth=256
group_reporting
rw=write
bs=4M
size=10T
runtime=180

* /dev/rbd0 is mapped from rbd_pool/image2, so the KRBD and librbd fio tests use the same image.

Fio for librbd:

[global]
direct=1
numjobs=1
ioengine=rbd
clientname=admin
pool=rbd_pool
rbdname=image2
invalidate=0    # mandatory
rw=write
bs=4M
size=10T
runtime=180

[rbd_iodepth32]
iodepth=256

Image info:

rbd image 'image2':
        size 50TiB in 13107200 objects
        order 22 (4MiB objects)
        data_pool: ec_rbd_pool
        block_name_prefix: rbd_data.8.148bb6b8b4567
        format: 2
        features: layering, data-pool
        flags:
        create_timestamp: Wed Nov 14 09:21:18 2018

* data_pool is an EC pool

Pool info:

pool 8 'rbd_pool' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 82627 flags hashpspool stripe_width 0 application rbd
pool 9 'ec_rbd_pool' erasure size 6 min_size 5 crush_rule 4 object_hash rjenkins pg_num 256 pgp_num 256 last_change 82649 flags hashpspool,ec_overwrites stripe_width 16384 application rbd

RBD cache: off (because I believe tcmu forces the RBD cache off anyway, and our cluster will export disks over iSCSI in the future).

Thanks!
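P.S. In case it helps anyone reproducing the setup: roughly how an image with an EC data pool like the one above gets created. This is only a sketch using the pool/image names from the output, not the exact commands run on our cluster:

    # allow partial overwrites on the EC pool so RBD can use it as a data pool
    ceph osd pool set ec_rbd_pool allow_ec_overwrites true
    ceph osd pool application enable ec_rbd_pool rbd
    # image metadata stays in the replicated pool, data objects go to the EC pool
    rbd create rbd_pool/image2 --size 50T --data-pool ec_rbd_pool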
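And for completeness, the client-side option that keeps the librbd cache off. A sketch assuming it is set in ceph.conf; in our case it could equally come from the iSCSI gateway / tcmu-runner side:

    [client]
    # disable the librbd writeback cache
    rbd cache = false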