Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

Hi,

It is quite an old cluster, Luminous 12.2.8.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Konstantin Shalygin <k0ste@xxxxxxxx> 
Sent: Friday, May 14, 2021 1:12 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: [Suspicious newsletter]  Re: bluefs_buffered_io turn to true

Hi,

This is not normal; it's something different, I think, like a CRUSH change on restart. This option will be enabled by default again in the next Nautilus release, so you can use it now with 14.2.19-20.
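
A minimal sketch of how the option can be enabled, assuming 14.2.19+ for the config-database path; on older releases it goes into ceph.conf instead, and an OSD restart is usually needed for it to take effect (the <id> below is just a placeholder for the OSD id):

  # Nautilus 14.2.19+: set it cluster-wide in the monitor config database
  ceph config set osd bluefs_buffered_io true

  # Older releases: add "bluefs_buffered_io = true" under [osd] in ceph.conf
  # on every OSD host, then restart the OSDs one at a time, e.g.
  systemctl restart ceph-osd@<id>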


k

Sent from my iPhone

> On 14 May 2021, at 08:21, Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:
> 
> Hi,
> 
> I had an issue with snaptrim after a huge amount of deleted data; it slows down the team's operations due to the snaptrim and snaptrim_wait PGs.
> 
> I've changed a couple of things:
> 
> debug_ms = 0/0 #default 0/5
> osd_snap_trim_priority = 1 # default 5 
> osd_pg_max_concurrent_snap_trims = 1 # default 2
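> 
> (On Luminous, values like these can be pushed to running OSDs with injectargs; the following is just a sketch of that, and the values also need to go into ceph.conf to persist across restarts:)
> 
>   ceph tell osd.* injectargs '--debug_ms 0/0 --osd_snap_trim_priority 1 --osd_pg_max_concurrent_snap_trims 1'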
> 
> But it didn't help.
> 
> I've found this thread about buffered IO, and it seems like it helped them:
> https://forum.proxmox.com/threads/ceph-storage-all-pgs-snaptrim-every-night-slowing-down-vms.71573/
> 
> I don't use swap on the OSD nodes, so I gave it a try on one OSD node, and basically all of that node's PGs became degraded. Is that normal? I hope it will not rebalance the complete node, because I don't have space for that. I changed it back, but the degraded count is still only slowly decreasing, so I'm not sure whether this setting is correct or whether this behavior is expected.
> 
> 2021-05-14 12:18:11.447628 mon.2004 [WRN] Health check update: 3353/91976715 objects misplaced (0.004%) (OBJECT_MISPLACED)
> 2021-05-14 12:18:11.447640 mon.2004 [WRN] Health check update: Degraded data redundancy: 33078466/91976715 objects degraded (35.964%), 254 pgs degraded, 253 pgs undersized (PG_DEGRADED)
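> 
> (For reference, the usual way to avoid data movement while OSDs are restarted for a config change, sketched here, is to set the noout and norebalance flags first and clear them once all OSDs are back up and the PGs have peered:)
> 
>   ceph osd set noout
>   ceph osd set norebalance
>   # ...restart the OSDs, wait for them to rejoin...
>   ceph osd unset norebalance
>   ceph osd unset noout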
> 
> Istvan Szabo
> Senior Infrastructure Engineer
> ---------------------------------------------------
> Agoda Services Co., Ltd.
> e: istvan.szabo@xxxxxxxxx
> ---------------------------------------------------
> 
> 
> ________________________________
> This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



