Re: cephfs: unable to mount share with 5.11 mainline, ceph 15.2.9, MDS 14.1.16

On Wed, Mar 3, 2021 at 11:15 AM Stefan Kooman <stefan@xxxxxx> wrote:
>
> On 3/2/21 6:00 PM, Jeff Layton wrote:
>
> >>
> >>>
> >>> v2 support in the kernel is keyed on the ms_mode= mount option, so that
> >>> has to be passed in if you're connecting to a v2 port. Until the mount
> >>> helpers get support for that option you'll need to specify the address
> >>> and port manually if you want to use v2.
> >>
> >> I've tried feeding it ms_mode=v2, but I get "mount error 22 = Invalid
> >> argument". ms_mode=legacy is accepted, but the mount still fails with
> >> the same errors.
> >>
> >
> > That needs different values. See:
> >
> >      https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=00498b994113a871a556f7ff24a4cf8a00611700
> >
> > You can try passing in a specific mon address and port, like:
> >
> >      192.168.21.22:3300:/cephfs/dir/
> >
> > ...and then pass in ms_mode=crc or something similar.
> >
> > That said, what you're doing should be working, so this sounds like a
> > regression. I presume you're able to mount with earlier kernels? What's
> > the latest kernel version that you have that works?
>
> The 5.11 kernel (5.11.2-arch1-1 #1 SMP PREEMPT Fri, 26 Feb 2021
> 18:26:41 +0000 x86_64 GNU/Linux) works with a cluster that has
> ms_bind_ipv4=false. Port 3300 with ms_mode=prefer-crc and with
> ms_mode=crc works.
>
> I have tested with the 5.11 kernel (5.11.2-arch1-1 #1 SMP PREEMPT Fri,
> 26 Feb 2021 18:26:41 +0000 x86_64 GNU/Linux) on port 3300 with
> ms_mode=crc as well as ms_mode=prefer-crc, and that works when the
> cluster is running with ms_bind_ipv4=false. So the "fix" is to set this
> config option: ceph config set global ms_bind_ipv4 false

Right.  According to your original post, that was already the case:
"ms_bind_ipv6=true, ms_bind_ipv4=false".

>
> The 5.10 kernel (5.10.19-1-lts, Arch Linux) works with a cluster that
> is IPv6-only but has ms_bind_ipv4=true. So it has been "broken" since
> 5.11.
>
> So, we have done more reading and research on the ms_bind_ipv{4,6} options:
>
> -
> https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus#Restart_the_OSD_daemon_on_all_nodes
>
> - https://github.com/rook/rook/issues/6266
>
> ^^ These describe that you have to disable binding to IPv4.
>
> - https://github.com/ceph/ceph/pull/13317
>
> ^^ This PR is not completely correct:
>
>     **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
>     you must set ``ms bind ipv6`` to ``true``.
>
> ^^ That is not enough, as we have learned, and starts to give trouble
> with the 5.11 Linux cephfs client.
>
> And from this documentation:
> https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#ipv4-ipv6-dual-stack-mode
> we learned that dual stack is not possible in any current stable
> release, but might be possible with the latest code. So the takeaway is
> that the Linux kernel client needs fixing to support dual-stack
> clusters in the future (multiple v1 / v2 address families), and that
> until then you should run with ms_bind_ipv4=false for IPv6-only
> clusters.

I don't think we do any dual stack testing, whether in userspace or
(certainly!) with the kernel client.
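
(For the archives: for an IPv6-only cluster, the working configuration
from this thread boils down to something like the sketch below. This
assumes the options are kept in the cluster config database; the
daemons need a restart to pick up the bind options.)

     ceph config set global ms_bind_ipv6 true
     ceph config set global ms_bind_ipv4 false
     # sanity check what the daemons will actually use
     ceph config get mon ms_bind_ipv4
     ceph config get mon ms_bind_ipv6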

>
> I'll make a PR to clear up the documentation. Do you want me to create
> a tracker for the kernel client? I will happily test your changes.

Sure.  You are correct that the kernel client needs a bit of work, as we
haven't considered dual-stack configurations there at all.

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


