Re: 3 node CEPH PVE hyper-converged cluster serious fragmentation and performance loss in matter of days.


On 3/10/2022 6:10 PM, Sasa Glumac wrote:


>> In this respect could you please try to switch bluestore and bluefs
>> allocators to bitmap and run some smoke benchmarking again.
> Can I change this on a live server (is there any possibility of losing data, etc.)? Can you please share the correct procedure?

To change the allocator for a given OSD (osd.N), run:

ceph config set osd.N bluestore_allocator bitmap

and then restart that OSD.
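
For example, the full sequence for one OSD might look like the sketch below (osd.0 is just a placeholder id; the systemd unit name assumes a systemd-managed OSD, which should be the case on a PVE node, and bluefs_allocator is the separate option for the BlueFS allocator mentioned above):

ceph config set osd.0 bluestore_allocator bitmap
ceph config set osd.0 bluefs_allocator bitmap
systemctl restart ceph-osd@0
ceph config get osd.0 bluestore_allocator          # value stored in the config DB
ceph daemon osd.0 config get bluestore_allocator   # value in the running daemon (run on the host holding osd.0)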

I'm unaware of any issues with such a switch...

Alternatively/additionally, you might want to try the stupid allocator as well.
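
That would be the same procedure with a different value, e.g. (again, N is a placeholder for the OSD id, and the OSD needs a restart afterwards):

ceph config set osd.N bluestore_allocator stupid
ceph config set osd.N bluefs_allocator stupid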


>> Additionally you might want to upgrade to 15.2.16 which includes a bunch
>> of improvements for Avl/Hybrid allocator tail latency numbers as per
>> the ticket above.
> Atm we use the pve repository, where 15.2.15 is the latest; I will need to either wait for .16 from them or create a second cluster without Proxmox, but I would like to test on the existing one. Is there any difference between pve ceph and regular ceph, so that I could change the repo and install over the existing packages?
Sorry, I don't know.

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


