Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

On Mon, May 3, 2021 at 12:00 PM Magnus Harlander <magnus@xxxxxxxxx> wrote:
>
> On 03.05.21 at 11:22, Ilya Dryomov wrote:
>
> max_osd 12
>
> I never had more than 10 osds on the two osd nodes of this cluster.
>
> I was running a 3-osd-node cluster earlier with more than 10
> osds, but the current cluster has been set up from scratch and
> I definitely don't remember ever having more than 10 osds!
> Very strange!
>
> I had to replace 2 disks because of DOA problems, but for that
> I removed 2 osds and created new ones.
>
> I used ceph-deploy to create the new osds.
>
> To delete osd.8 I used:
>
> # take it out
> ceph osd out 8
>
> # wait for rebalancing to finish
>
> systemctl stop ceph-osd@8
>
> # wait for a healthy cluster
>
> ceph osd purge 8 --yes-i-really-mean-it
>
> # edit ceph.conf and remove osd.8
>
> ceph-deploy --overwrite-conf admin s0 s1
>
> # Add the new disk and:
> ceph-deploy osd create --data /dev/sdc s0
> ...
>
> it gets created with the next free osd num (8) because purge releases 8 for reuse
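
For reference, the ids in use and the map's limit can be compared directly with stock ceph CLI commands (a minimal sketch; output details vary by release):

# ids currently in use -- should be 0..9 after the replacement
ceph osd ls

# the OSD map's high-water mark (where the "max_osd 12" above shows up)
ceph osd dump | grep max_osd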

It would be nice to track it down, but for the immediate issue of
kernel 5.11 not working, "ceph osd setmaxosd 10" should fix it.
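
For completeness, a minimal sketch of applying that fix (standard ceph CLI only; exact output wording may differ between releases):

# read the current value first, e.g. "max_osd = 12 in epoch ..."
ceph osd getmaxosd

# lower it to match the 10 osds actually deployed
ceph osd setmaxosd 10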

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


