Re: CEPH zero iops after upgrade to Reef and manual read balancer

Hi Mosharaf,

Can you please create a tracker issue and attach a copy of your osdmap?
Also, please include any other output that characterizes the slowdown in
client I/O operations you're noticing in your cluster. I can take a look
once I have that information.
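
For reference, a sketch of the commands involved (assuming a Reef cluster and shell access to a node with the admin keyring; the pg ID below is a placeholder, not taken from this thread):

```shell
# Export the current osdmap to attach to the tracker issue.
ceph osd getmap -o /tmp/osdmap.bin

# Capture output characterizing the slowdown.
ceph status
ceph osd perf

# List any manual read-balancer (primary upmap) entries that were applied.
ceph osd dump | grep pg_upmap_primary

# If needed, roll back a manual read-balancer mapping for a given PG,
# e.g. PG 1.0 (placeholder ID):
ceph osd rm-pg-upmap-primary 1.0
```

These commands require a live cluster, so treat them as a starting point rather than a verified procedure.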

Thanks,
Laura

On Wed, Sep 13, 2023 at 5:23 AM Mosharaf Hossain <
mosharaf.hossain@xxxxxxxxxxxxxx> wrote:

> Hello Folks
> We've recently performed an upgrade on our Cephadm cluster, transitioning
> from Ceph Quincy to Reef. However, following the manual implementation of
> a read balancer in the Reef cluster, we've experienced a significant
> slowdown in client I/O operations within the Ceph cluster, affecting both
> client bandwidth and overall cluster performance.
>
> This slowdown has resulted in unresponsiveness across all virtual machines
> within the cluster, despite the fact that the cluster exclusively utilizes
> SSD storage.
>
> Kindly guide us to move forward.
>
>
>
> Regards
> Mosharaf Hossain
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage <https://ceph.io>

Chicago, IL

lflores@xxxxxxx | lflores@xxxxxxxxxx
M: +17087388804