Having issues to start more than 24 OSDs per host

Hello

We tried to use cephadm with Podman to start 44 OSDs per host, but the deployment consistently stops after 24 OSDs have been added on a host.
We looked into cephadm.log on the problematic host and saw that the command `cephadm ceph-volume lvm list --format json` got stuck.
We noticed that the output of that command was incomplete. We therefore tried switching to compact JSON output, which let us get up to 36 OSDs per host.
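A minimal sketch of why compacting the JSON may have helped: pretty-printed output for many OSDs is several times larger than the compact form, so a smaller payload is less likely to be cut off by whatever buffer or timeout truncates the command's output. The inventory structure below is purely illustrative, not real ceph-volume data.

```python
import json

# Toy inventory resembling (but not identical to) `ceph-volume lvm list`
# output for 44 OSDs; field names here are illustrative assumptions.
osds = [
    {
        "osd_id": i,
        "devices": [f"/dev/sd{chr(97 + i % 26)}"],
        "lv_path": f"/dev/ceph-vg/osd-block-{i}",
    }
    for i in range(44)
]

# Pretty-printed JSON, as a human-readable listing would produce it.
pretty = json.dumps(osds, indent=4)

# Compact JSON: no indentation, no spaces after separators.
compact = json.dumps(osds, separators=(",", ":"))

print(f"pretty: {len(pretty)} bytes, compact: {len(compact)} bytes")
```

The compact form is always strictly smaller here, since it drops all indentation and separator whitespace while keeping the same data.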

If you need more information just ask.


Podman version: 3.2.1
Ceph version: 16.2.4
OS version: Suse Leap 15.3

Greetings,
Jan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
