Try using this tool to benchmark the underlying disks: https://github.com/louwrentius/fio-plot

1. The disk will need to be removed from Ceph, wiped, and a filesystem placed on it. (I do see a change note about RBD support; in that case I recommend benchmarking one disk rather than the cluster.)
2. Run fio in all modes; this produces a folder with many test results. Use fio-plot to ingest this folder and chart the results.

Glhf,
Eli

From: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Date: Wednesday, October 2, 2024 at 2:34 AM
To: Sridhar Seshasayee <sseshasa@xxxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxx>
Subject: Re: Is there a way to throttle faster osds due to slow ops?

Yes, thank you. What I actually wanted to know is whether it is safe to divide them all by 4, but after my test there are still bad slow ops and no effect at all.

________________________________
From: Sridhar Seshasayee <sseshasa@xxxxxxxxxx>
Sent: Tuesday, October 1, 2024 5:22 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxx>
Subject: Re: Re: Is there a way to throttle faster osds due to slow ops?

Yes, you can override the capacity using "config set osd.N osd_mclock_max_capacity_iops_ssd <new_value>".

On Tue, Oct 1, 2024 at 3:45 PM Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:
Dug a bit further. It seems the osd_mclock_max_capacity_iops_ssd value in the config db comes from a ceph bench run against a single OSD. However, I have 4 OSDs on each 15TB NVMe, and if I run the bench in parallel on all 4 OSDs of one NVMe drive, the result is a quarter of that. Is it safe to divide this value by 4 in the config db?
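If the drive-level bench result really is shared by 4 colocated OSDs, the per-OSD override could be computed and applied roughly like this (a sketch only; the 60000 IOPS figure and the osd IDs 10-13 are hypothetical, and the option name is the one from Sridhar's reply above):

```shell
# Hypothetical whole-drive result, e.g. from benchmarking the NVMe directly.
DRIVE_IOPS=60000
OSDS_PER_DRIVE=4

# Each OSD only gets its share of the drive when all four run in parallel.
PER_OSD_IOPS=$((DRIVE_IOPS / OSDS_PER_DRIVE))
echo "per-OSD capacity: ${PER_OSD_IOPS}"

# Apply the override to each OSD carved out of this drive
# (osd IDs are hypothetical; uncomment to actually apply):
# for id in 10 11 12 13; do
#     ceph config set osd.${id} osd_mclock_max_capacity_iops_ssd "${PER_OSD_IOPS}"
# done
```

Note that mClock only consults this capacity value for scheduling; if the slow ops come from the device itself saturating (high iowait), lowering it on the fast OSDs may still not help, which would match what you observed.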
________________________________
From: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Sent: Tuesday, October 1, 2024 1:47 PM
To: Ceph Users <ceph-users@xxxxxxx>
Subject: Is there a way to throttle faster osds due to slow ops?

Hi,

We have extended our clusters with some new nodes, and currently it is impossible to remove the NVMe drive holding the index pool from any old node without generating slow ops and degrading cluster performance.

The way I want to remove it (on a Quincy, non-cephadm cluster) is to crush reweight the OSD to 0 and then remove it. This data movement causes slow ops the whole time the NVMe OSD is being drained.

My theory is that the faster new drives push the old servers' NVMes harder, causing high iowait on the old NVMes, so I want to somehow throttle the new NVMes. Is this possible with mclock or in any other way? (osd max backfill, osd recovery ops, and recovery ops priority are already 1, and the balancer's max misplaced ratio is 0.01.)
This is what some of the slow OSDs say during the removal: https://gist.github.com/Badb0yBadb0y/15b51e524a47dfbd2728bbabc18238fc#file-gistfile1-txt

2024-10-01T11:46:29.601+0700 7f29bf4f8640 0 bluestore(/var/lib/ceph/osd/ceph-91) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.583707809s, txc = 0x55af2bd2e300
2024-10-01T11:46:29.601+0700 7f29bf4f8640 0 bluestore(/var/lib/ceph/osd/ceph-91) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.541916847s, txc = 0x55af1a035b00
2024-10-01T11:46:29.601+0700 7f29bf4f8640 0 bluestore(/var/lib/ceph/osd/ceph-91) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.533919334s, txc = 0x55af19fafb00
2024-10-01T11:46:29.601+0700 7f29bf4f8640 0 bluestore(/var/lib/ceph/osd/ceph-91) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.904534340s, txc = 0x55af49814c00
2024-10-01T11:46:29.601+0700 7f29bf4f8640 0 bluestore(/var/lib/ceph/osd/ceph-91) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.911001205s, txc = 0x55af24b19800
2024-10-01T11:46:29.601+0700 7f29bf4f8640 0 bluestore(/var/lib/ceph/osd/ceph-91) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.597061634s, txc = 0x55af4fe0fb00
2024-10-01T11:46:30.889+0700 7f29becf7640 4 rocksdb: [db/db_impl/db_impl_write.cc:1736] [default] New memtable created with log file: #280327. Immutable memtables: 0.
2024-10-01T11:46:30.889+0700 7f29becf7640 4 rocksdb: [db/column_family.cc:983] [default] Increasing compaction threads because we have 18 level-0 files
2024-10-01T11:46:30.889+0700 7f29c4512640 4 rocksdb: (Original Log Time 2024/10/01-11:46:30.893378) [db/db_impl/db_impl_compaction_flush.cc:2394] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 2, flush slots scheduled 1, compaction slots scheduled 2
2024-10-01T11:46:30.889+0700 7f29c4512640 4 rocksdb: [db/flush_job.cc:335] [default] [JOB 5604] Flushing memtable with next log file: 280327
2024-10-01T11:46:30.889+0700 7f29c4512640 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1727757990893428, "job": 5604, "event": "flush_started", "num_memtables": 1, "num_entries": 2437269, "num_deletes": 2384624, "total_data_size": 233695787, "memory_usage": 278437952, "flush_reason": "Write Buffer Full"}
2024-10-01T11:46:30.889+0700 7f29c4512640 4 rocksdb: [db/flush_job.cc:364] [default] [JOB 5604] Level-0 flush table #280328: started

Thank you
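On the throttling question above: under mClock the classic knobs (osd_max_backfills, osd_recovery_max_active) are largely ignored unless explicitly overridden, so a more mClock-native lever is the scheduler profile. A sketch of a ceph.conf-style fragment, assuming Quincy defaults (the same option can also be set per OSD via the config db):

```ini
[osd]
# Favor client I/O over recovery/backfill traffic in the mClock scheduler.
# Could be applied only to the new, faster OSDs to soften their push
# onto the old NVMes during the drain.
osd_mclock_profile = high_client_ops
```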
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

--
Sridhar Seshasayee
Partner Engineer
Red Hat <https://www.redhat.com/>