Re: bluefs _allocate unable to allocate on bdev 2

This is the tail end of a manual compaction; the OSD still can't start, even after compacting:
Meta: https://gist.github.com/Badb0yBadb0y/f918b1e4f2d5966cefaf96d879c52a6e
Log: https://gist.github.com/Badb0yBadb0y/054a0cefd4a56f0236b26479cc1a5290
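For reference, the manual compaction above was driven with commands along these lines (a sketch only; the OSD id `2` and the data path are placeholder assumptions, adjust to your layout):

```shell
# Online compaction via the admin socket (OSD must be running):
ceph daemon osd.2 compact

# Offline RocksDB compaction (stop the OSD first); the path below is the
# usual default location and is an assumption here:
systemctl stop ceph-osd@2
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-2 compact

# Allocator fragmentation score for the main device
# (0 = unfragmented, values approaching 1 = heavily fragmented):
ceph daemon osd.2 bluestore allocator score block
```

The fragmentation score is worth checking because BlueFS needs contiguous extents; an OSD can report plenty of free space overall and still fail these allocations if the free space is badly fragmented.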
________________________________
From: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Sent: Thursday, September 12, 2024 6:34 AM
To: Ceph Users <ceph-users@xxxxxxx>
Subject:  bluefs _allocate unable to allocate on bdev 2

Hi,

Since yesterday, multiple OSDs in our Ceph Octopus cluster have started crashing, and I can see this error in most of the logs:

2024-09-12T06:13:35.805+0700 7f98b8b27700  1 bluefs _allocate failed to allocate 0xf0732 on bdev 1, free 0x40000; fallback to bdev 2
2024-09-12T06:13:35.805+0700 7f98b8b27700  1 bluefs _allocate unable to allocate 0xf0732 on bdev 2, free 0xffffffffffffffff; fallback to slow device expander
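Decoding the hex values in those two lines (my reading; in BlueFS numbering, bdev 1 is conventionally the DB device and bdev 2 the main/slow device) shows the request is far larger than what bdev 1 has free:

```shell
# Convert the hex values from the bluefs _allocate log lines to decimal.
req=$(printf '%d' 0xf0732)    # bytes BlueFS tried to allocate
free1=$(printf '%d' 0x40000)  # bytes reported free on bdev 1
echo "requested: $req bytes (~$((req / 1024)) KiB)"
echo "free on bdev 1: $free1 bytes ($((free1 / 1024)) KiB)"
# "free 0xffffffffffffffff" on bdev 2 is -1 as a signed 64-bit value,
# i.e. BlueFS found no usable space there at all.
```

So BlueFS wanted roughly 962 KiB but only 256 KiB was free on bdev 1, and the fallback to bdev 2 found nothing usable, which is consistent with fragmentation or exhaustion of BlueFS-allocatable space rather than overall disk fullness.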

The OSDs are only 57% full, so I don't think this should be a space issue.

I'm using SSDs, so I don't have separate WAL/RocksDB devices.

I'm running some compactions now, but I don't think that will help in the long run.
What could be causing this issue, and how can it be fixed?

Ty

________________________________
This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


