Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

On Mon, May 3, 2021 at 12:24 PM Magnus Harlander <magnus@xxxxxxxxx> wrote:
>
> Am 03.05.21 um 11:22 schrieb Ilya Dryomov:
>
> There is a 6th osd directory on both machines, but it's empty
>
> [root@s0 osd]# ll
> total 0
> drwxrwxrwt. 2 ceph ceph 200  2. Mai 16:31 ceph-1
> drwxrwxrwt. 2 ceph ceph 200  2. Mai 16:31 ceph-3
> drwxrwxrwt. 2 ceph ceph 200  2. Mai 16:31 ceph-4
> drwxrwxrwt. 2 ceph ceph 200  2. Mai 16:31 ceph-5
> drwxr-xr-x. 2 ceph ceph   6  3. Apr 19:50 ceph-8 <===
> drwxrwxrwt. 2 ceph ceph 200  2. Mai 16:31 ceph-9
> [root@s0 osd]# pwd
> /var/lib/ceph/osd
>
> [root@s1 osd]# ll
> total 0
> drwxrwxrwt  2 ceph ceph 200 May  2 15:39 ceph-0
> drwxr-xr-x. 2 ceph ceph   6 Mar 13 17:54 ceph-1 <===
> drwxrwxrwt  2 ceph ceph 200 May  2 15:39 ceph-2
> drwxrwxrwt  2 ceph ceph 200 May  2 15:39 ceph-6
> drwxrwxrwt  2 ceph ceph 200 May  2 15:39 ceph-7
> drwxrwxrwt  2 ceph ceph 200 May  2 15:39 ceph-8
> [root@s1 osd]# pwd
> /var/lib/ceph/osd
>
> The bogus directories are empty and they are
> used on the other machine for a real osd!
>
> How is that?
>
> Should I remove them and restart ceph.target?

I don't think the empty directories matter at this point.  You may not
have had 12 OSDs at any point in time, but the max_osd value appears to
have gotten bumped when you were replacing those disks.

Note that max_osd being greater than the actual number of OSDs is not
a big problem by itself.  The osdmap is going to be larger and require
more memory, but that's it.  You can test this by setting it back to 12
and trying to mount -- it should work.  The issue is specific to how
those OSDs were replaced -- something went wrong and the osdmap somehow
ended up with rather bogus addrvec entries.  I'm not sure if it's
ceph-deploy's fault, something weird in ceph.conf (back then), or an
actual Ceph bug.
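
For reference, the max_osd value and the addrvec entries can be
inspected (and max_osd lowered) with the standard ceph CLI.  A sketch,
assuming 12 is the correct OSD count for your cluster -- verify with
"ceph osd ls" before changing anything:

```shell
# Show the current max_osd value recorded in the osdmap
ceph osd getmaxosd

# Dump the osdmap; the per-OSD lines include the addrvec entries
# that ended up bogus here
ceph osd dump

# Lower max_osd back to the actual number of OSDs (12 in this case)
ceph osd setmaxosd 12
```

setmaxosd only shrinks the osdmap; it does not touch OSD data, but it
will refuse ids above the new limit, so make sure no OSD has an id >= 12
first.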

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
