A few days ago, our production server started experiencing extremely high load averages, to the point of slowing every application on the machine to a crawl. Rebooting the system helped, but only for a day or so. This is a dual-Xeon 3.2 GHz machine (running ES 2.1) that until now rarely saw load averages higher than 1.0.

In doing some poking around, I'm finding that kswapd and krefilld seem to be the culprits. But I'm also seeing strange things that seem very abnormal, like apache consuming 30% CPU time or more. Here is a partial output from top, taken when krefilld was on top:

  9:25pm  up 1 day, 47 min,  1 user,  load average: 30.60, 43.16, 38.12
  280 processes: 278 sleeping, 2 running, 0 zombie, 0 stopped
  CPU0 states:  0.5% user,  9.3% system,  0.0% nice, 89.1% idle
  CPU1 states:  9.3% user,  2.3% system,  0.0% nice, 87.2% idle
  CPU2 states:  1.3% user,  1.4% system,  0.0% nice, 96.1% idle
  CPU3 states:  0.1% user,  2.4% system,  0.0% nice, 96.4% idle
  Mem:  2058820K av, 2053656K used,    5164K free,      36K shrd,   19852K buff
  Swap: 2096472K av, 2093416K used,    3056K free                  102528K cached

    PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
     12 root      16   0     0    0     0 SW   10.7  0.0  56:09 krefilld

As you can see, not only has krefilld accumulated an extremely long run time, it's also consuming a lot of resources (not so much in this snapshot, but it occasionally peaks at over 90% CPU usage).

Googling turned up a bug that manifested these exact symptoms in an earlier kernel:

https://bugzilla.redhat.com/bugzilla/long_list.cgi?buglist=117902

Supposedly, that bug was fixed in all kernels after 2.4.9-e.49, but I'm running 2.4.9-e.62. As an added clue, this kernel ran fine for 70+ days until just a few days ago. Since the problem only recently started surfacing, I don't want to rule out external hanky-panky like a renegade web script or something of that nature.

Any suggestions?

--
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list
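P.S. One way to check the "renegade web script" theory: your Swap line shows the box almost 2 GB into swap with only ~5 MB of RAM free, and kswapd/krefilld thrash is usually a symptom of that pressure rather than its cause. A minimal sketch for finding the memory hog, reading only /proc so it works even on an old 2.4 box (field positions assume the usual `VmRSS:  1234 kB` format in /proc/PID/status):

```shell
# Per-process resident set sizes, largest first: lines come out as
# "/proc/<pid>/status:VmRSS:  <kB> kB", so sort numerically on the
# third colon-separated field (the kB figure).
grep -H VmRSS /proc/[0-9]*/status 2>/dev/null \
  | sort -t: -k3 -rn \
  | head -n 10

# Overall memory/swap picture straight from the kernel:
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree)' /proc/meminfo
```

If one apache child (or a CGI it spawned) dominates the first listing and keeps regrowing after a kill, that points at the script rather than the kernel.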