Re: Newer Linux kernel CephFS clients are more trouble?

Hey y'all -

As a data point, I *don't* see this issue on 5.17.4-200.fc35.x86_64. Hosts are Fedora 35 Server, running Ceph 17.2.0 (Quincy). Happy to test or provide more data from this cluster if it would be helpful.
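
(For anyone comparing data points, the quick way to grab the same
information is:

  uname -r
  ceph versions

on a client and against the cluster, respectively.)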

-Alex
On May 11, 2022, 2:02 PM -0400, David Rivera <rivera.david87@xxxxxxxxx> wrote:
> Hi,
>
> My experience is similar; I was also using elrepo kernels, on CentOS 8.
> Kernels 5.14+ were causing problems, so I had to go back to 5.11. I did
> not test 5.12-5.13, and I did not have enough time to narrow the system
> instability down to Ceph. Currently I'm using the stock Rocky Linux 8
> kernels (4.18); I very rarely get caps release warnings, but besides
> that everything has been working great.
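>
> (For anyone chasing the same warnings: "failing to respond to
> capability release" shows up in "ceph health detail" under the
> MDS_CLIENT_LATE_RELEASE code, and the sessions behind it can be
> listed per MDS daemon; <mds-name> below is a placeholder:
>
>   ceph health detail
>   ceph tell mds.<mds-name> session ls
>
> The session listing includes each client's kernel version and caps
> count, which helps map a warning back to a host.)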
>
> David
>
>
> On Wed, May 11, 2022, 09:07 Stefan Kooman <stefan@xxxxxx> wrote:
>
> > Hi List,
> >
> > We have quite a few Linux kernel clients for CephFS. One of our
> > customers has been running mainline kernels (CentOS 7 elrepo) for the
> > past two years. They started out with 3.x kernels (default CentOS 7),
> > but upgraded to mainline when those kernels would frequently generate
> > MDS warnings like "failing to respond to capability release". That
> > worked fine until the 5.14 kernel. 5.14 and up would use a lot of CPU
> > and *way* more bandwidth on CephFS than older kernels (an order of
> > magnitude more). After the MDS was upgraded from Nautilus to Octopus
> > that behavior was gone (comparable CPU / bandwidth usage to older
> > kernels). However, the newer kernels are now the ones that give
> > "failing to respond to capability release", and worse, clients get
> > evicted (unresponsive as far as the MDS is concerned). Even the
> > latest 5.17 kernels show this. No difference is observed between
> > messenger v1 and v2. MDS version is 15.2.16.
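> >
> > (One way to switch between the two on the kernel client is the
> > ms_mode mount option, available in kernels 5.11 and newer if I
> > remember right; <mon-host> and the "admin" user below are
> > placeholders:
> >
> >   # messenger v2:
> >   mount -t ceph <mon-host>:/ /mnt/cephfs -o name=admin,ms_mode=prefer-crc
> >   # messenger v1:
> >   mount -t ceph <mon-host>:/ /mnt/cephfs -o name=admin,ms_mode=legacy
> >
> > With ms_mode unset the kernel client defaults to legacy, i.e. v1.)
> >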
> > Surprisingly, the latest stable kernels from CentOS 7 now work
> > flawlessly. That is good news, but newer operating systems ship with
> > newer kernels.
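> >
> > (When a client does get evicted, the MDS also blacklists it on the
> > OSDs; on an Octopus cluster the current entries can be checked with:
> >
> >   ceph osd blacklist ls
> >
> > In Pacific and later the command is spelled "ceph osd blocklist ls".)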
> >
> > Does anyone else observe the same behavior with newish kernel clients?
> >
> > Gr. Stefan
> >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


