Ceph orch made block_db too small, not accounting for multiple NVMes, how to fix it?

Hi

Our hosts have 3 NVMes and 48 spinning drives each.
We found that ceph orch sized the default block_db LVs using only 1/3 of the
total NVMe capacity.
I suspect that ceph only considered one of the NVMes when determining the
size, based on the closely related issue https://tracker.ceph.com/issues/54541
If that is what happened, each block_db was effectively sized against 1/48 of
a single NVMe rather than 1/16 of one (each NVMe backs 16 of the 48 OSDs),
which would explain block_db LVs about a third of the expected size.

We have started seeing some BlueFS spillover events now, so I'm looking for a
way to fix this.
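
For reference, this is roughly how I've been confirming the spillover per
OSD; osd.12 is just a placeholder id, the second command runs inside the
OSD's container (e.g. via "cephadm shell --name osd.12"), and the exact
counter names may differ a bit between releases:

    # cluster-wide warning
    ceph health detail | grep -A2 BLUEFS_SPILLOVER
    # per-OSD view: how much of the DB has spilled onto the slow (HDD) device
    ceph daemon osd.12 perf dump bluefs | grep -E 'db_total_bytes|db_used_bytes|slow_used_bytes'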

The best idea I have so far is to manually specify "block_db_size" in the
osd_spec and then just recreate the entire block_db. Though I'm not sure
whether that means we'll just hit the same issue,
https://tracker.ceph.com/issues/54541, instead.
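
The kind of spec I have in mind looks something like this, applied with
"ceph orch apply -i osd_spec.yml". The service_id, device filters and size
below are just illustrative placeholders, not what we actually run:

    service_type: osd
    service_id: hdd_osds_db_on_nvme
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    block_db_size: '240G'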
Recreating things that way would also mean moving a lot of data across a
total of 588 OSDs. Maybe there is a way to just remove and re-add a (bigger)
block_db instead?
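
What I'm hoping might work per OSD is something along the lines of the
sketch below, using "ceph-volume lvm migrate". The VG/LV names, the size and
the OSD id are made up, I haven't tested it, and I'm not sure about the exact
"--from" arguments, so please treat it as a rough idea rather than a recipe:

    # on the host: stop the OSD first (cephadm unit name shown)
    systemctl stop ceph-<cluster-fsid>@osd.12.service
    # create a new, larger DB LV on the NVMe
    lvcreate -L 240G -n osd-db-12-new ceph-db-vg
    # inside the OSD's environment (e.g. "cephadm shell --name osd.12"):
    # move the existing DB plus anything that spilled onto the data device
    ceph-volume lvm migrate --osd-id 12 --osd-fsid <osd-fsid> \
        --from data db --target ceph-db-vg/osd-db-12-new
    # then remove the old undersized DB LV and start the OSD again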

I would appreciate any suggestions or tips.

Best regards, Mikael
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


