Re: allocate_bluefs_freespace failed to allocate

Hello again.

It's hard to upgrade while having this problem because I have high I/O
usage and one of my 30 OSDs flaps almost every day. I'm afraid an OSD
will fail during the upgrade.
I need a temporary workaround, because I'm sure that while upgrading
the system at least one OSD will fail, and after that the other OSDs
will follow in a chain reaction.

To upgrade I need to compile the code, because I'm running a custom-built
Arch Linux (5.4.85-1-lts) and I can't upgrade production without
testing first.
I've compiled the packages and tests are in progress now, but my test
lab doesn't have a comparable setup, and synthetic load can't reproduce
the same effect.

Is there any other way to work around the issue for a safe upgrade?
Is there any problem with switching from the hybrid allocator to the
bitmap allocator just for the upgrade?
Do I need to re-create the OSDs? Or can I just restart them with the
bitmap allocator, and switch back to hybrid after the upgrade?
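If I understand the docs correctly, switching the allocator should only
need a config change plus an OSD restart, not a re-creation. A sketch of
what I'd try (assuming Nautilus's `bluestore_allocator` option; `osd.0`
is just a placeholder for whichever OSD is restarted first):

```shell
# Set the allocator to bitmap for all OSDs; takes effect on restart.
ceph config set osd bluestore_allocator bitmap

# Restart one OSD at a time and wait for the cluster to go HEALTH_OK
# before moving to the next one.
systemctl restart ceph-osd@0

# Verify the value the restarted daemon is actually running with.
ceph config show osd.0 bluestore_allocator

# After the upgrade, switch back to hybrid the same way.
ceph config set osd bluestore_allocator hybrid
```

I'd appreciate confirmation from someone who has done this on a live
cluster before I try it in production.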

I'm thinking of upgrading to the latest Nautilus first, to be safe. One
problem at a time.
After that I will prepare Octopus packages for the future.


Konstantin Shalygin <k0ste@xxxxxxxx> wrote on Thu, 11 Nov 2021 at 13:20:

>
> Hi,
> Just try to upgrade to last Nautilus
> Many things with allocator and collections was fixed on last nau releases
>
>
> k
>
> On 11 Nov 2021, at 13:15, mhnx <morphinwithyou@xxxxxxxxx> wrote:
>
> I have 10 nodes with CephFS, RBD and RGW clients, and all of my
> clients are on 14.2.16 Nautilus.
> My clients, MONs and OSDs run on the same servers.
> I have constant load: 50-300 MiB/s read at 15-30k op/s read, and
> 100-300 MiB/s write at 1-4 op/s write.
> With the allocator issue it's quite likely that I'll get slow ops and
> OSDs going down while upgrading.
>
> I will run synthetic load while upgrading the test environment, to be
> sure.
> Thanks for the tips, by the way.
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



