Re: About ceph disk slowops effect to cluster

Yes, it works well for me; it reduced recovery throughput from 4 GB/s to 200 MB/s.
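For reference, the settings discussed in this thread can be applied at runtime with `ceph config set`. This is a sketch only: it assumes a recent (Reef-era) cluster where the mClock reservation/limit options accept fractions of the OSD's measured IOPS capacity, and `osd.0` below is just an example daemon name.

```shell
# Switch the mClock scheduler to the custom profile so the individual
# scheduler knobs become tunable (the built-in profiles ignore them).
ceph config set osd osd_mclock_profile custom

# Limit background recovery to 20% of each OSD's IOPS capacity, and
# reserve the same 20% so recovery still makes progress under client load.
ceph config set osd osd_mclock_scheduler_background_recovery_lim 0.2
ceph config set osd osd_mclock_scheduler_background_recovery_res 0.2

# Weight client I/O six times higher than the other service classes.
ceph config set osd osd_mclock_scheduler_client_wgt 6

# Verify the values took effect on one OSD (osd.0 is an example):
ceph config show osd.0 | grep mclock
```

The trade-off is recovery time: per the numbers above, throttling recovery this way dropped it from roughly 4 GB/s to 200 MB/s, so degraded PGs will take correspondingly longer to heal.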

On Fri, 12 Jan 2024 at 15:52, Szabo, Istvan (Agoda) <
Istvan.Szabo@xxxxxxxxx> wrote:

> Is it better?
>
> Istvan Szabo
> Staff Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.szabo@xxxxxxxxx
> ---------------------------------------------------
>
>
> ------------------------------
> *From:* Phong Tran Thanh <tranphong079@xxxxxxxxx>
> *Sent:* Friday, January 12, 2024 3:32 PM
> *To:* David Yang <gmydw1118@xxxxxxxxx>
> *Cc:* ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> *Subject:*  Re: About ceph disk slowops effect to cluster
>
>
> I updated the config:
> osd_mclock_profile=custom
> osd_mclock_scheduler_background_recovery_lim=0.2
> osd_mclock_scheduler_background_recovery_res=0.2
> osd_mclock_scheduler_client_wgt=6
>
> On Fri, 12 Jan 2024 at 15:31, Phong Tran Thanh <
> tranphong079@xxxxxxxxx> wrote:
>
> > Hi Yang and Anthony,
> >
> > I found a solution to this problem on 7200 rpm HDDs.
> >
> > When the cluster is recovering, one or more failing disks can produce
> > slow ops that affect the whole cluster; the settings below may reduce
> > recovery IOPS:
> > osd_mclock_profile=custom
> > osd_mclock_scheduler_background_recovery_lim=0.2
> > osd_mclock_scheduler_background_recovery_res=0.2
> > osd_mclock_scheduler_client_wgt
> >
> >
> > On Wed, 10 Jan 2024 at 11:22, David Yang <gmydw1118@xxxxxxxxx>
> > wrote:
> >
> >> The 2x10 Gbps shared network appears to be saturated (1.9 GB/s).
> >> Is it possible to reduce part of the workload and wait for the cluster
> >> to return to a healthy state?
> >> Tip: erasure coding must read the surviving data chunks when recovering
> >> data, so it consumes a lot of network bandwidth and processor
> >> resources.
> >>
> >
> >
>
>
>


-- 
Best regards,
----------------------------------------------------------------------------

*Tran Thanh Phong*

Email: tranphong079@xxxxxxxxx
Skype: tranphong079
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



