Hello Jason,

I can confirm that your tests work on our cluster with a newly created image. We still can't get the current images to use a different object pool. Do you think another feature might be incompatible with this one? Below is a log of the issue.

:~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
        size 51200 MB in 12800 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.37c8974b0dc51
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        create_timestamp: Sat May  5 11:39:07 2018

:~# rbd journal info --pool RBD_HDD --image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd: journaling is not enabled for image 2ef34a96-27e0-4ae7-9888-fd33c38f657a

:~# rbd feature enable RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a journaling --journal-pool RBD_SSD

:~# rbd journal info --pool RBD_HDD --image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd journal '37c8974b0dc51':
        header_oid: journal.37c8974b0dc51
        object_oid_prefix: journal_data.1.37c8974b0dc51.
        order: 24 (16384 kB objects)
        splay_width: 4
*************** <NOTE: NO object_pool> ****************

:~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
        size 51200 MB in 12800 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.37c8974b0dc51
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, journaling
        flags:
        create_timestamp: Sat May  5 11:39:07 2018
        journal: 37c8974b0dc51
        mirroring state: disabled

Kind regards,
Glen Baars

From: Jason Dillaman <jdillama@xxxxxxxxxx>
On Sun, Aug 12, 2018 at 12:13 AM Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx> wrote:
You won't see any journal objects in the SSD pool (rbd_ssd here) until you issue a write:

$ rbd create --size 1G --image-feature exclusive-lock rbd_hdd/test
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1       320    332.01  1359896.98
    2       736    360.83  1477975.96
    3      1040    351.17  1438393.57
    4      1392    350.94  1437437.51
    5      1744    350.24  1434576.94
    6      2080    349.82  1432866.06
    7      2416    341.73  1399731.23
    8      2784    348.37  1426930.69
    9      3152    347.40  1422966.67
   10      3520    356.04  1458356.70
   11      3920    361.34  1480050.97
elapsed: 11  ops: 4096  ops/sec: 353.61  bytes/sec: 1448392.06

$ rbd feature enable rbd_hdd/test journaling --journal-pool rbd_ssd
$ rbd journal info --pool rbd_hdd --image test
rbd journal '10746b8b4567':
        header_oid: journal.10746b8b4567
        object_oid_prefix: journal_data.2.10746b8b4567.
        order: 24 (16 MiB objects)
        splay_width: 4
        object_pool: rbd_ssd

$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1       240    248.54  1018005.17
    2       512    263.47  1079154.06
    3       768    258.74  1059792.10
    4      1040    258.50  1058812.60
    5      1312    258.06  1057001.34
    6      1536    258.21  1057633.14
    7      1792    253.81  1039604.73
    8      2032    253.66  1038971.01
    9      2256    241.41   988800.93
   10      2480    237.87   974335.65
   11      2752    239.41   980624.20
   12      2992    239.61   981440.94
   13      3200    233.13   954887.84
   14      3440    237.36   972237.80
   15      3680    239.47   980853.37
   16      3920    238.75   977920.70
elapsed: 16  ops: 4096  ops/sec: 245.04  bytes/sec: 1003692.81

$ rados -p rbd_ssd ls | grep journal_data.2.10746b8b4567.
journal_data.2.10746b8b4567.3
journal_data.2.10746b8b4567.0
journal_data.2.10746b8b4567.2
journal_data.2.10746b8b4567.1
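As an aside, only the journal data objects land in the pool passed to --journal-pool; the journal header object (journal.10746b8b4567 above) should remain in the image's own pool. A quick way to confirm that (sketch only, I didn't capture this in the run above):

$ rados -p rbd_hdd ls | grep journal.10746b8b4567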
If you are trying to optimize for 128KiB writes, you might need to tweak the "rbd_journal_max_payload_bytes" setting, since it currently defaults to splitting journal write events into payloads of at most 16KiB [1] in order to bound the worst-case memory usage of the rbd-mirror daemon in environments with hundreds or thousands of replicated images.
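For example, a rough sketch of both ways to raise it (the 131072 value is purely illustrative, and the per-image override assumes a release that has the "rbd config" commands, i.e. Mimic or later):

# cluster-wide default for librbd clients, in ceph.conf
[client]
rbd journal max payload bytes = 131072

# or as a per-image override
$ rbd config image set rbd_hdd/test rbd_journal_max_payload_bytes 131072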
--
Jason