Hi,

The cache was exhausted, and the drive's internal optimization is still in progress. This is not an enterprise device; you should never use it with Ceph 🙂

k

Sent from my iPhone

> On 27 Dec 2022, at 16:41, hosseinz8050@xxxxxxxxx wrote:
>
> Thanks Anthony.
> I have a cluster with QLC SSDs (Samsung 860 QVO). The cluster has been running for 2 years. Now all OSDs return 12 IOPS when running tell bench, which is very slow. I bought new QVO disks yesterday and added one of them to the cluster as a new OSD. For the first hour I got 100 IOPS from this new OSD, but after an hour it dropped back to 12 IOPS, the same as the old OSDs. I cannot imagine what is happening!
>
> On Tuesday, December 27, 2022 at 12:18:07 AM GMT+3:30, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:
>
> My understanding is that when you ask an OSD to bench (via the admin socket), only that OSD executes; there is no replication. Replication is a function of PGs.
>
> Thus, this is a narrowly focused tool with both unique advantages and disadvantages.
>
>> On Dec 26, 2022, at 12:47 PM, hosseinz8050@xxxxxxxxx wrote:
>>
>> Hi experts,
>> I want to know: when I execute the ceph tell osd.x bench command, is replica 3 considered in the bench or not? For example, with replica 3, when I run the tell bench command, is replica 1 of the bench data written to osd.x, replica 2 to osd.y, and replica 3 to osd.z? If so, it means I cannot benchmark just one OSD in the cluster, because the IOPS and throughput of the two other (possibly slow) OSDs would affect the tell bench result for my target OSD. Is that true?
>> Thanks in advance.
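
A minimal sketch of how the command discussed above can be used to compare individual OSDs, assuming a recent Ceph release; the bench arguments are total bytes and block size, and the exact names of the output fields (e.g. "bytes_per_sec", "iops") vary between versions:

    # Bench a single OSD: only osd.3 executes the writes, no replication is involved.
    # Here 100 MiB is written in 4 KiB blocks instead of the 1 GiB / 4 MiB default.
    ceph tell osd.3 bench 104857600 4096

    # Repeat across all OSDs to spot slow drives; the grep pattern assumes the
    # JSON output contains rate fields like "iops" or "bytes_per_sec".
    for id in $(ceph osd ls); do
        echo -n "osd.$id: "
        ceph tell osd.$id bench 104857600 4096 | grep -E 'iops|bytes_per_sec'
    done

Because each bench run touches only the targeted OSD, differences between OSDs reflect the drives themselves rather than replication peers.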