Re: 4.8.8 kernel trigger OOM killer repeatedly when I have lots of RAM that should be free

On Wed, Nov 30, 2016 at 9:47 AM, Marc MERLIN <marc@xxxxxxxxxxx> wrote:
>
> I gave it a thought again, I think it is exactly the nasty situation you
> described.
> bcache takes I/O quickly while sending to SSD cache. SSD fills up, now
> bcache can't handle IO as quickly and has to hang until the SSD has been
> flushed to spinning rust drives.
> This actually is exactly the same as filling up the cache on a USB key
> and now you're waiting for slow writes to flash, is it not?

It does sound like you might hit exactly the same kind of situation, yes.

And the fact that you have dmcrypt running too just makes things pile
up more. All those IO's end up slowed down by the scheduling too.

Anyway, none of this seems new per se. I'm adding Kent and Jens to the
cc (Tejun already was), in the hope that maybe they have some idea how
to control the nasty worst-case behavior wrt the workqueue lockup (it's
not really a "lockup"; it looks like it's just hundreds of workqueues
all waiting for IO to complete behind much too deep IO queues).

I think it's the traditional "throughput is much easier to measure and
improve" situation, where making queues big helps some throughput
situations, but ends up causing chaos when things go south.

And I think your NMI watchdog then turns the "system is no longer
responsive" into an actual kernel panic.

> With your dirty ratio workaround, I was able to re-enable bcache and
> have it not fall over, but only barely. I recorded over a hundred
> workqueues in flight during the copy at some point (just not enough
> to actually kill the kernel this time).
>
> I've started a bcache follow-up on this here:
> http://marc.info/?l=linux-bcache&m=148052441423532&w=2
> http://marc.info/?l=linux-bcache&m=148052620524162&w=2
>
> A full traceback showing the pileup of requests is here:
> http://marc.info/?l=linux-bcache&m=147949497808483&w=2
>
> and there:
> http://pastebin.com/rJ5RKUVm
> (2 different ones but mostly the same result)
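
(Side note for anyone trying to reproduce this: the "dirty ratio
workaround" above is presumably the vm.dirty_* clamping discussed earlier
in the thread; the exact values Marc used aren't quoted in this message,
so the numbers below are purely illustrative:

  # illustrative values only - pick something small relative to disk speed
  echo 100000000 > /proc/sys/vm/dirty_background_bytes   # ~100MB soft limit
  echo 200000000 > /proc/sys/vm/dirty_bytes              # ~200MB hard limit

Setting the *_bytes variants automatically overrides the corresponding
*_ratio knobs.)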

Tejun/Kent - any way to just limit the workqueue depth for bcache?
Because that really isn't helping, and things *will* time out and
cause those problems when you have hundreds of IO's queued on a disk
that likely has a write IOPS rate of around ~100 - that's multiple
seconds of backlog just sitting in the queues.
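
For illustration only: one knob in that general area is the max_active
argument to alloc_workqueue() - passing 0 means the default WQ_DFL_ACTIVE
(256 concurrently executing work items), while a small value caps the
concurrency. This is just a sketch, not a claim about how bcache sets up
its workqueues today; the names here are made up:

	#include <linux/workqueue.h>

	static struct workqueue_struct *example_wb_wq;

	static int example_init(void)
	{
		/*
		 * max_active = 8 caps the number of work items executing
		 * at once (per CPU for a bound workqueue); further items
		 * stay queued instead of getting more concurrent workers.
		 */
		example_wb_wq = alloc_workqueue("example_wb", WQ_MEM_RECLAIM, 8);
		if (!example_wb_wq)
			return -ENOMEM;
		return 0;
	}

Of course, capping max_active only bounds how many items execute at once;
anything above that still sits queued, so it doesn't by itself shrink the
IO backlog.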

And I really wonder if we should do the "big hammer" approach to the
dirty limits on non-HIGHMEM machines too (approximate the
"vm_highmem_is_dirtyable" by just limiting global_dirtyable_memory()
to 1 GB).

That would make the default dirty limits be 100/200MB (for soft/hard
throttling - ie 10% and 20% of that 1GB cap with the default
dirty_background_ratio/dirty_ratio), which really is much more
reasonable than gigabytes and gigabytes of dirty data.

Of course, no way do we do that during rc7..

                    Linus
 mm/page-writeback.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 439cc63ad903..26ecbdecb815 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -352,6 +352,10 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 #endif
 }
 
+/* Limit dirtyable memory to 1GB */
+#define PAGES_IN_GB(x) ((x) << (30 - PAGE_SHIFT))
+#define MAX_DIRTYABLE_LOWMEM_PAGES PAGES_IN_GB(1)
+
 /**
  * global_dirtyable_memory - number of globally dirtyable pages
  *
@@ -373,8 +377,11 @@ static unsigned long global_dirtyable_memory(void)
 	x += global_node_page_state(NR_INACTIVE_FILE);
 	x += global_node_page_state(NR_ACTIVE_FILE);
 
-	if (!vm_highmem_is_dirtyable)
+	if (!vm_highmem_is_dirtyable) {
 		x -= highmem_dirtyable_memory(x);
+		if (x > MAX_DIRTYABLE_LOWMEM_PAGES)
+			x = MAX_DIRTYABLE_LOWMEM_PAGES;
+	}
 
 	return x + 1;	/* Ensure that we never return 0 */
 }

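If something along these lines goes in, the resulting limits are easy to
sanity-check at runtime, since both thresholds are exported (in pages) via
/proc/vmstat:

  grep -E 'nr_dirty(_background)?_threshold' /proc/vmstat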