Re: allocate_bluefs_freespace failed to allocate

I have 10 nodes and I use CephFS, RBD and RGW clients; all of my clients
are on 14.2.16 Nautilus.
My clients, MONs and OSDs are on the same servers.
I have constant usage: 50-300MiB/s rd, 15-30k op/s rd --- 100-300MiB/s wr,
1-4 op/s wr.
With the allocator issue it is quite likely to hit slow ops and have OSDs
go down while upgrading.

I will run a synthetic load while upgrading the test environment to be
sure (rough sketch below).
Thanks for the tips btw.
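
For the synthetic load I will probably use something along the lines of
rados bench against a throwaway pool (the pool name "upgradetest" is just
a placeholder), roughly:

    ceph osd pool create upgradetest 64 64
    rados bench -p upgradetest 300 write --no-cleanup
    rados bench -p upgradetest 300 rand
    rados -p upgradetest cleanup

Not the same as our real CephFS/RBD/RGW traffic, but it should keep the
OSDs busy while they get restarted.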



Stefan Kooman <stefan@xxxxxx> wrote on Wed, 10 Nov 2021 at 19:36:

> On 11/10/21 17:22, mhnx wrote:
> > Hello Igor. Thanks for the answer.
> >
> > There are so many changes to read and test for me but I will plan an
> > upgrade to Octopus when I'm available.
> >
> > Is there any problem upgrading from 14.2.16 ---> 15.2.15 ?
>
> I have upgraded a few test clusters from 14.2.16 to 15.2.15. I hit no
> issues. But those have barely any load. Some things you might want to
> check beforehand:
>
> Do any default settings change from 14.2.16 to 15.2.15 (e.g.
> bluefs_buffered_io=true)?
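>
> (Side note: one way to check this, assuming you can run the ceph CLI on
> a node with the admin keyring, is to look at the built-in default on
> both versions and at what a running OSD actually uses, e.g.:
>
>   ceph config help bluefs_buffered_io
>   ceph daemon osd.0 config get bluefs_buffered_io
>
> where osd.0 is whatever OSD happens to be local to that node.)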
>
> Not sure what clients you have, but in order to mitigate
> "AUTH_INSECURE_GLOBAL_ID_RECLAIM" you ideally want all your clients
> upgraded before you upgrade your Ceph cluster. Otherwise you will have to
> mute the warning you will get.
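>
> (For reference, the warning can be silenced with the mon options from
> the global_id advisory, e.g.:
>
>   ceph config set mon mon_warn_on_insecure_global_id_reclaim false
>   ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
>
> but you probably only want that for as long as old clients remain.)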
>
> We have the following set:
>
> advanced osd_fast_shutdown=false
>
> We noticed that if it was set to true we would get slow ops when
> rebooting OSD storage nodes (Mimic -> Nautilus). Some users have hit slow
> ops issues when upgrading from Nautilus -> Octopus. See this thread [1]
> (still unresolved).
>
> The following settings should work:
>
> osd_fast_shutdown = true
> osd_fast_shutdown_notify_mon = false
>
> But apparently not in all circumstances.
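>
> (If you want to experiment, both can be changed at runtime, e.g.:
>
>   ceph config set osd osd_fast_shutdown true
>   ceph config set osd osd_fast_shutdown_notify_mon false
>
> and reverted the same way.)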
>
> YMMV.
>
> Gr. Stefan
>
> [1]:
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/J6CRWRBIVZKEVWOTC2MIA2A4E5PHJ6SE/
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



