Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

> On 11 May 2021, at 14:24, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> 
> No, as mentioned above max_osds being greater is not a problem per se.
> Having max_osds set to 10000 when you only have a few dozen is going to
> waste a lot of memory and network bandwidth, but if it is just slightly
> bigger it's not something to worry about.  Normally these "spare" slots
> are ignored, but in Magnus' case they looked rather weird and the kernel
> refused the osdmap.  See
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3f1c6f2122fc780560f09735b6d1dbf39b44eb0f
> 
> for details.
> 
>> Which kernels were affected?
> 
> 5.11 and 5.12, backports are on the way.
> 
>> 
>> For example, max_osds is 132, the total OSD count is 126, and the highest OSD number is 131 - is that affected?
> 
> No, max_osds alone is not enough to trigger it.
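For what it's worth, the relationship between max_osds, the OSD count, and the highest OSD id in the example above can be sketched like this. This is a toy illustration only, not the kernel's actual osdmap check, and the exact id layout is an assumption made to match the numbers in the question:

```python
# Rough illustration only -- NOT the kernel's actual osdmap validation.
# The specific osd ids below are made up to fit the example in the
# question: max_osds=132, 126 OSDs in total, highest osd id 131.

def spare_slots(max_osds: int, osd_ids: list[int]) -> list[int]:
    """Return slot indices below max_osds that have no OSD in them."""
    present = set(osd_ids)
    return [i for i in range(max_osds) if i not in present]

# Hypothetical cluster: 126 OSDs, with ids 120-125 unused (e.g. left
# over after replacements), so the highest id in use is 131.
osd_ids = list(range(120)) + list(range(126, 132))

assert len(osd_ids) == 126   # total OSD count
assert max(osd_ids) == 131   # highest osd id, still below max_osds=132

# The "spare" slots are simply the gaps; normally they are ignored.
print(spare_slots(132, osd_ids))   # ids 120-125
```

So max_osds being a bit larger than the OSD count just means a few unused slots, which is the normal, harmless case described above.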

Thanks for the clarification!


k
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


