@Marc

Thanks Marc,

I am running your profile via fio and will send you the result. What puzzles me, though, is that "ceph tell osd.X bench" sometimes returns, for example, 2~10 and sometimes 170~200. If the disk is worn out, why does it sometimes return the higher value? This OSD is currently weighted 0, so there is no load on it from Ceph replication, deep scrubbing, and so on.

Thanks.

On Saturday, December 24, 2022 at 06:57:05 PM GMT+3:30, Marc <marc@xxxxxxxxxxxxxxxxx> wrote:

> > In my cluster, there are several OSDs of type ordinary SSD with very
> > slow iops.

I think there have been several posts here about ordinary SSDs becoming slow under specific conditions. Why do you think your 'ordinary SSDs' do not have this problem? What does fio say about these disks? I think this issue was related to how long the disks were under load, so maybe increase the runtime in this fio script.

[global]
ioengine=libaio
#ioengine=posixaio
invalidate=1
ramp_time=30
iodepth=1
runtime=180
time_based
direct=1
filename=/dev/sdX
#filename=/mnt/disk/fio-bench.img

[write-4k-seq]
stonewall
bs=4k
rw=write

[randwrite-4k-seq]
stonewall
bs=4k
rw=randwrite
fsync=1

[read-4k-seq]
stonewall
bs=4k
rw=read

[randread-4k-seq]
stonewall
bs=4k
rw=randread
fsync=1

[rw-4k-seq]
stonewall
bs=4k
rw=rw

[randrw-4k-seq]
stonewall
bs=4k
rw=randrw

[randrw-4k-d4-seq]
stonewall
bs=4k
rw=randrw
iodepth=4

[randread-4k-d32-seq]
stonewall
bs=4k
rw=randread
iodepth=32

[randwrite-4k-d32-seq]
stonewall
bs=4k
rw=randwrite
iodepth=32
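A minimal sketch of how the two measurements could be run side by side, assuming the job file above is saved as ssd-test.fio and the suspect OSD is osd.12 (both names are placeholders; adjust them to your setup, and note that the profile writes directly to /dev/sdX, which destroys the data on that device):

# Raw-device test with the fio profile above (DESTRUCTIVE on /dev/sdX).
fio ssd-test.fio

# Repeat the in-Ceph benchmark a few times to see how much it fluctuates;
# by default it writes 1 GiB in 4 MiB objects through the OSD's object store.
for i in 1 2 3 4 5; do
    ceph tell osd.12 bench
done

Comparing a longer fio run on the raw device with several repeated tell bench runs should show whether the slowdown appears only after sustained load, which would match the behaviour Marc describes.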