Re: Does Replica Count Affect Tell Bench Result or Not?


 



 Thanks. I am planning to replace all of my disks. Do you know which enterprise SSD offers the best trade-off between cost and IOPS performance? Which model and brand? Thanks in advance.
    On Wednesday, December 28, 2022 at 08:44:34 AM GMT+3:30, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:  
 
 Hi,

The cache was exhausted, and the drive is now busy reorganizing internally. This is not an enterprise device; you should never use it with Ceph 🙂


k
Sent from my iPhone

> On 27 Dec 2022, at 16:41, hosseinz8050@xxxxxxxxx wrote:
> 
>  Thanks Anthony. I have a cluster with QLC SSDs (Samsung 860 QVO). The cluster has been running for two years. Now all OSDs return 12 IOPS when running tell bench, which is very slow. I bought new QVO disks yesterday and added one to the cluster as a new OSD. For the first hour, I got 100 IOPS from this new OSD. But after an hour, the new OSD dropped back to 12 IOPS, the same as all the old OSDs. I cannot imagine what is happening?!
>    On Tuesday, December 27, 2022 at 12:18:07 AM GMT+3:30, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:  
> 
> My understanding is that when you ask an OSD to bench (via the admin socket), only that OSD executes, there is no replication.  Replication is a function of PGs.
> 
> Thus, this is a narrowly-focused tool with both unique advantages and disadvantages.
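To illustrate the point above, here is a minimal sketch of how the per-OSD figure falls out of the bench output itself: IOPS is just throughput divided by block size, with no replication factor anywhere in the math. The JSON field names (`bytes_per_sec`, `blocksize`) are assumptions based on typical releases, and the sample values are illustrative, not captured from a real cluster.

```python
import json

def bench_iops(bench_json: str) -> float:
    """Derive IOPS from `ceph tell osd.N bench` JSON output.

    Assumes the output carries `bytes_per_sec` and the block size
    used for the run under `blocksize` (default 4 MiB); field names
    may vary by Ceph release.
    """
    result = json.loads(bench_json)
    # One write of `blocksize` bytes per I/O, so IOPS = throughput / blocksize.
    return result["bytes_per_sec"] / result["blocksize"]

# Illustrative sample (hypothetical values matching a ~12 IOPS drive):
sample = json.dumps({
    "bytes_written": 1073741824,   # 1 GiB total written by the bench
    "blocksize": 4194304,          # 4 MiB per write
    "elapsed_sec": 21.3,
    "bytes_per_sec": 50410414,
})
print(round(bench_iops(sample)))
```

Note that nothing here references the pool's replica count; a slow peer OSD cannot drag this number down, which is exactly why osd bench isolates a single device.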
> 
> 
> 
>> On Dec 26, 2022, at 12:47 PM, hosseinz8050@xxxxxxxxx wrote:
>> 
>> Hi experts, I want to know: when I execute the ceph tell osd.x bench command, is replica 3 considered in the bench or not? I mean, for example with replica 3, when I execute the tell bench command, does replica 1 of the bench data write to osd.x, replica 2 to osd.y, and replica 3 to osd.z? If that is true, it means I cannot benchmark just one OSD in the cluster, because the IOPS and throughput of the two other (possibly slow) OSDs would affect the tell bench result for my target OSD. Is that true?
>> Thanks in advance.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> 




