Re: cephfs mount problems with 5.11 kernel - not an ipv6 problem

Hi Ilya,

> On 3 May 2021, at 14:15, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> 
> I don't think empty directories matter at this point.  You may not have
> had 12 OSDs at any point in time, but the max_osd value appears to have
> gotten bumped when you were replacing those disks.
> 
> Note that max_osd being greater than the number of OSDs is not a big
> problem by itself.  The osdmap is going to be larger and require more
> memory but that's it.  You can test by setting it back to 12 and trying
> to mount -- it should work.  The issue is specific to how those OSDs
> were replaced -- something went wrong and the osdmap somehow ended up
> with rather bogus addrvec entries.  Not sure if it's ceph-deploy's
> fault, something weird in ceph.conf (back then), or an actual ceph
> bug.
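
If I understand the suggestion correctly, the test on the original cluster would be roughly this (just a sketch -- the monitor address, mount point and secret are placeholders):

    # shrink max_osd back to the actual number of OSDs
    ceph osd setmaxosd 12

    # then retry the kernel mount from the 5.11 client
    mount -t ceph <mon-addr>:/ /mnt/cephfs -o name=admin,secret=<key>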

What exactly is the bug here? Does it hit any cluster where max_osd is greater than the number of OSDs that are in? And which kernel versions are affected?

For example: max_osd is 132, 126 OSDs are in, and the highest OSD id is 131 -- is such a cluster affected?
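
For reference, this is roughly how I check those values (a sketch -- the grep filter is only an illustration, and the JSON layout of "ceph osd dump" output varies a bit between releases):

    ceph osd getmaxosd      # current max_osd value
    ceph osd stat           # "N osds: X up, Y in"

    # inspect the per-OSD addrvec entries for bogus addresses
    ceph osd dump --format json-pretty | grep -A3 '"addrvec"'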



Thanks,
k


