Re: kworker consumes 100% CPU on degraded RAID6

On 28/10/2023 10.06, fedora@xxxxxxxxxxxxxx wrote:
On 28/10/2023 09.38, Jeffrey Walton wrote:
On Fri, Oct 27, 2023 at 5:59 PM Eyal Lebedinsky <fedora@xxxxxxxxxxxxxx> wrote:

Fully updated F38.

I had to send one (of 7) member disk for RMA.
I notice that the system is very unresponsive. 'top' shows:

      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
1365697 root      20   0       0      0      0 R  93.8   0.0 384:40.55 kworker/u16:3+flush-9:127

This continues even when there are no user actions (Firefox and Thunderbird closed).

A few days ago it stopped, but today I see that it kept running all night, even though there were
periods of inactivity lasting a few hours.
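
For reference: the flush-9:127 suffix in the kworker's name points at the writeback flusher for the block device with major:minor 9:127, i.e. the md127 array. Assuming the array node is /dev/md127, this can be confirmed with:

$ cat /sys/block/md127/dev
9:127
$ ls -l /dev/md127        # the "9, 127" before the date is major, minor
brw-rw---- 1 root disk 9, 127 ... /dev/md127

So the busy worker is the one flushing dirty pages out to the degraded array.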

As another data point: a few days ago I received a disk back from RMA and the recovery went as fast as expected.
I then removed another disk to send for RMA.

Is this expected? Is there anything I can do to improve the situation?
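
One knob often suggested for slow RAID5/6 writes is the stripe cache; a sketch, assuming the array is md127. The default of 256 pages is small, and the RAM cost is stripe_cache_size x 4 KiB x number of member devices:

$ cat /sys/block/md127/md/stripe_cache_size
256
# echo 4096 > /sys/block/md127/md/stripe_cache_size

Whether this helps a degraded array is untested here, but it is cheap to try and easy to revert.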

If you have a hot spare and a lot of data, I could envision a
situation where a low-priority thread takes several days to rebuild
the array. At least, that has been my [limited] experience when
failing over. But it usually happens in the background and does not
affect responsiveness too much.

Jeff
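
For what it is worth, when a rebuild is actually running its rate is bounded by the md speed limits; they can be inspected, and raised for the duration, roughly like this (values shown are the usual kernel defaults, in KB/s):

$ cat /proc/sys/dev/raid/speed_limit_min
1000
$ cat /proc/sys/dev/raid/speed_limit_max
200000
# sysctl -w dev.raid.speed_limit_min=50000

That only matters during a resync/recovery, though, not for normal writes to a degraded array.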

I do not think that is the situation here: I do not have a spare.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 sdg1[5] sdf1[4] sdh1[6] sdc1[9] sde1[7] sdd1[8]
       58593761280 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [_UUUUUU]
       bitmap: 84/88 pages [336KB], 65536KB chunk

unused devices: <none>
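
The [7/6] [_UUUUUU] above already says it: seven slots, six members present, the first one missing. The same state can be read more verbosely with mdadm (assuming /dev/md127):

$ sudo mdadm --detail /dev/md127
(look for "State : clean, degraded" and "Raid Devices : 7" against "Total Devices : 6")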

To show how slow it is: iostat shows writes going to the array at below 100 KB/s.
I decided to pause (virsh save) a VM, which needs to write about 8 GB.
It has now been going for over 30 minutes and has completed about 3/4 of the job...

Oops, I misread it: it has not completed 6 GB, only 600 MB (now up to 900 MB).
I will let it run through the day.
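
To put a number on it while the virsh save runs, iostat (from the sysstat package) can report per-device write throughput every few seconds; a sketch, with device names taken from the mdstat output above:

$ iostat -xm 5 /dev/md127 /dev/sd[c-h]
(watch the wMB/s column: md127 shows the logical write rate, the sd devices the physical one)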

--
Eyal at Home (fedora@xxxxxxxxxxxxxx)
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue


