Re: cephfs mount problems with 5.11 kernel - not an IPv6 problem

Hi Ilya,

We're now hitting this on CentOS 8.4.

The "setmaxosd" workaround fixed access to one of our clusters, but
isn't working for another, where we have gaps in the osd ids, e.g.

# ceph osd getmaxosd
max_osd = 553 in epoch 691642
# ceph osd tree | sort -n -k1 | tail
 541   ssd   0.87299                     osd.541        up  1.00000 1.00000
 543   ssd   0.87299                     osd.543        up  1.00000 1.00000
 548   ssd   0.87299                     osd.548        up  1.00000 1.00000
 552   ssd   0.87299                     osd.552        up  1.00000 1.00000
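
If I understand the workaround correctly, it can't apply here: as far
as I can tell, setmaxosd refuses to shrink max_osd below (highest OSD
id + 1). A quick way to confirm the highest id (standard ceph CLI):

# ceph osd ls | sort -n | tail -1
552

With osd.552 still up, the floor is 553, which is exactly our current
max_osd, so setmaxosd is a no-op for us.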

Is there another workaround for this?

Cheers, dan


On Mon, May 3, 2021 at 12:32 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>
> On Mon, May 3, 2021 at 12:27 PM Magnus Harlander <magnus@xxxxxxxxx> wrote:
> >
> > On 03.05.21 at 12:25, Ilya Dryomov wrote:
> >
> > ceph osd setmaxosd 10
> >
> > Bingo! Mount works again.
> >
> > Veeeery strange things are going on here (-:
> >
> > Thanx a lot for now!! If I can help to track it down, please let me know.
>
> Good to know it helped!  I'll think about this some more and will
> probably patch the kernel client to be less stringent so it doesn't
> choke on this sort of misconfiguration.
>
> Thanks,
>
>                 Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


