Possible bug report: kernel 6.5.0/6.5.1 high load when CIFS share is mounted (cifsd-cfid-laundromat in "D" state)

My apologies if I have not followed the correct protocol for posting a bug report here.

I've noticed an issue with the CIFS client in kernel 6.5.0/6.5.1 that
does not exist in 6.4.12 or other previous kernels (I have not tested
6.4.13). Almost immediately after mounting a CIFS share, the reported
load average on my system goes up by 2. At the time this occurs I see
two [cifsd-cfid-laundromat] kernel threads running in the "D" state,
where they remain for the entire time the CIFS share is mounted. The
load will remain stable at 2 (otherwise idle) until the share is
unmounted, at which point the [cifsd-cfid-laundromat] threads
disappear and load drops back down to 0. This is easily reproducible
on my system, but I am not sure what to do to retrieve more useful
debugging information. If I mount two shares from this server, I get
four laundromat threads in "D" state and a sustained load average of
4.
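
For reference, this is roughly how I am observing it once a share is
mounted:

  uptime                              # load average climbs by 2 and stays there
  ps -eo pid,stat,comm | grep cifsd   # lists the laundromat threads and their "D" state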

The client is running Gentoo Linux, the server is a Seagate Personal
Cloud NAS running Samba 4.6.5. Mount options used are
"noperm,guest,vers=3.02". The CPUs do not actually appear to be
spinning, the reported load average appears incorrect as far as actual
CPU use is concerned.
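
For completeness, the mount is done along these lines (the server
address and share name below are placeholders rather than the real
ones):

  mount -t cifs //192.168.0.2/share /mnt/share -o noperm,guest,vers=3.02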

I am happy to follow any instructions provided to gather more details
if I can help to track this down. Nothing relevant appears in the
syslog or dmesg output.
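
If it would help, I assume something like the following is a
reasonable starting point for collecting CIFS debug information, but
please correct me if different settings are needed:

  echo 1 > /proc/fs/cifs/cifsFYI   # enable cifs debug messages (visible via dmesg)
  cat /proc/fs/cifs/DebugData      # dump the current cifs session/connection state
  dmesg | tail -n 50               # check for any new cifs messages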

Thank you.


