Re: [RFC PATCH 2/2] mm, mempool: do not throttle PF_LESS_THROTTLE tasks

On Mon, Jul 18 2016, Michal Hocko wrote:

> From: Michal Hocko <mhocko@xxxxxxxx>
>
> Mikulas has reported that swap backed by dm-crypt doesn't work
> properly because the swapout cannot make sufficient forward progress:
> the writeout path depends on the dm_crypt worker, which has to allocate
> memory to perform the encryption. In order to guarantee forward
> progress it relies on the mempool allocator. mempool_alloc(), however,
> prefers to use the underlying (usually page) allocator before it grabs
> objects from the pool. Such an allocation can dive into memory
> reclaim and, consequently, into throttle_vm_writeout.

That's just broken.
I used to think mempool should always use the pre-allocated reserves
first.  That is surely the most logical course of action.  Otherwise
that memory is just sitting there doing nothing useful.

I spoke to Nick Piggin about this some years ago and he pointed out that
the kmalloc allocation paths are much better optimized for low overhead
when there is plenty of memory.  They can just pluck a free block off a
per-CPU list without taking any locks.  By contrast, accessing the
preallocated pool always requires a spinlock.

So it makes lots of sense to prefer the underlying allocator if it can
provide a quick response.  If it cannot, the sensible thing is to use
the pool, or wait for the pool to be replenished.

So the underlying allocation attempt should never wait at all: never
enter direct reclaim, never throttle.

Looking at the current code, __GFP_DIRECT_RECLAIM is disabled the first
time through, but if the pool is empty, direct reclaim is allowed on the
next attempt.  Presumably this is where the throttling comes in?  I
suspect that it really shouldn't do that.  It should leave reclaim to
kswapd (so __GFP_KSWAPD_RECLAIM is appropriate) and only wait in
mempool_alloc, where pool->wait can wake it up.
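
For reference, the flow I'm describing looks roughly like this (a
condensed paraphrase of mm/mempool.c as of this cycle, not a verbatim
copy; error checking and memory barriers trimmed):

	void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
	{
		void *element;
		unsigned long flags;
		wait_queue_t wait;
		gfp_t gfp_temp;

		gfp_mask |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
		/* first round: no direct reclaim, no IO */
		gfp_temp = gfp_mask & ~(__GFP_DIRECT_RECLAIM|__GFP_IO);

	repeat_alloc:
		element = pool->alloc(gfp_temp, pool->pool_data);
		if (likely(element != NULL))
			return element;

		spin_lock_irqsave(&pool->lock, flags);
		if (likely(pool->curr_nr)) {
			element = remove_element(pool);
			spin_unlock_irqrestore(&pool->lock, flags);
			return element;
		}

		/* pool empty: retry immediately, now allowing direct reclaim */
		if (gfp_temp != gfp_mask) {
			spin_unlock_irqrestore(&pool->lock, flags);
			gfp_temp = gfp_mask;
			goto repeat_alloc;
		}

		/* we must not sleep if the caller didn't allow it */
		if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
			spin_unlock_irqrestore(&pool->lock, flags);
			return NULL;
		}

		/* wait for mempool_free() to return an element to the pool */
		init_wait(&wait);
		prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
		spin_unlock_irqrestore(&pool->lock, flags);
		io_schedule_timeout(5*HZ);
		finish_wait(&pool->wait, &wait);
		goto repeat_alloc;
	}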

If I'm following the code properly, the stack trace below can only
happen if the first pool->alloc() attempt, with direct reclaim disabled,
fails and the pool is empty, so mempool_alloc() calls prepare_to_wait()
and io_schedule_timeout().
I suspect the timeout *doesn't* fire (5 seconds is a long time), so it
gets woken up when there is something in the pool.  It then loops around
and tries pool->alloc() again, even though there is something in the
pool.  This might be justified if that ->alloc could never block, but
obviously it can.

I would very strongly recommend just changing mempool_alloc() to
permanently mask out __GFP_DIRECT_RECLAIM.
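
In terms of the sketch above, that amounts to deleting the second-round
retry, so that ->alloc() is only ever called without
__GFP_DIRECT_RECLAIM, while the caller's flags still decide whether we
sleep on pool->wait or return NULL.  Untested, just to show what I mean:

	-	/* pool empty: retry immediately, now allowing direct reclaim */
	-	if (gfp_temp != gfp_mask) {
	-		spin_unlock_irqrestore(&pool->lock, flags);
	-		gfp_temp = gfp_mask;
	-		goto repeat_alloc;
	-	}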

Quite separately I don't think PF_LESS_THROTTLE is at all appropriate.
It is "LESS" throttle, not "NO" throttle, but you have made
throttle_vm_writeout never throttle PF_LESS_THROTTLE threads.
The purpose of that flag is to allow a thread to dirty a page-cache page
as part of cleaning another page-cache page.
So it makes sense for loop and sometimes for nfsd.  It would make sense
for dm-crypt if it were putting the encrypted version in the page cache.
But if dm-crypt is just allocating a transient page (which I think it
is), then a mempool should be sufficient (and we should make sure it is
sufficient) and access to an extra 10% (or whatever) of the page cache
isn't justified.
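
For context on that "extra 10% (or whatever)": from memory,
domain_dirty_limits() in mm/page-writeback.c bumps both dirty
thresholds by a quarter for these tasks, roughly:

	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
		bg_thresh += bg_thresh / 4;
		thresh += thresh / 4;
	}

and the existing users set the flag only around writeout they do on
behalf of cleaning other pages; e.g. nfsd's write path does something
like (again from memory, not a verbatim quote of fs/nfsd/vfs.c):

	current->flags |= PF_LESS_THROTTLE;
	/* ... write out data sent by a local NFS client ... */
	current->flags &= ~PF_LESS_THROTTLE;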

Thanks,
NeilBrown



> If there are too many
> dirty pages or pages under writeback, it will get throttled even though
> it is in fact a flusher trying to clear pending pages.
>
> [  345.352536] kworker/u4:0    D ffff88003df7f438 10488     6      2 0x00000000
> [  345.352536] Workqueue: kcryptd kcryptd_crypt [dm_crypt]
> [  345.352536]  ffff88003df7f438 ffff88003e5d0380 ffff88003e5d0380 ffff88003e5d8e80
> [  345.352536]  ffff88003dfb3240 ffff88003df73240 ffff88003df80000 ffff88003df7f470
> [  345.352536]  ffff88003e5d0380 ffff88003e5d0380 ffff88003df7f828 ffff88003df7f450
> [  345.352536] Call Trace:
> [  345.352536]  [<ffffffff818d466c>] schedule+0x3c/0x90
> [  345.352536]  [<ffffffff818d96a8>] schedule_timeout+0x1d8/0x360
> [  345.352536]  [<ffffffff81135e40>] ? detach_if_pending+0x1c0/0x1c0
> [  345.352536]  [<ffffffff811407c3>] ? ktime_get+0xb3/0x150
> [  345.352536]  [<ffffffff811958cf>] ? __delayacct_blkio_start+0x1f/0x30
> [  345.352536]  [<ffffffff818d39e4>] io_schedule_timeout+0xa4/0x110
> [  345.352536]  [<ffffffff8121d886>] congestion_wait+0x86/0x1f0
> [  345.352536]  [<ffffffff810fdf40>] ? prepare_to_wait_event+0xf0/0xf0
> [  345.352536]  [<ffffffff812061d4>] throttle_vm_writeout+0x44/0xd0
> [  345.352536]  [<ffffffff81211533>] shrink_zone_memcg+0x613/0x720
> [  345.352536]  [<ffffffff81211720>] shrink_zone+0xe0/0x300
> [  345.352536]  [<ffffffff81211aed>] do_try_to_free_pages+0x1ad/0x450
> [  345.352536]  [<ffffffff81211e7f>] try_to_free_pages+0xef/0x300
> [  345.352536]  [<ffffffff811fef19>] __alloc_pages_nodemask+0x879/0x1210
> [  345.352536]  [<ffffffff810e8080>] ? sched_clock_cpu+0x90/0xc0
> [  345.352536]  [<ffffffff8125a8d1>] alloc_pages_current+0xa1/0x1f0
> [  345.352536]  [<ffffffff81265ef5>] ? new_slab+0x3f5/0x6a0
> [  345.352536]  [<ffffffff81265dd7>] new_slab+0x2d7/0x6a0
> [  345.352536]  [<ffffffff810e7f87>] ? sched_clock_local+0x17/0x80
> [  345.352536]  [<ffffffff812678cb>] ___slab_alloc+0x3fb/0x5c0
> [  345.352536]  [<ffffffff811f71bd>] ? mempool_alloc_slab+0x1d/0x30
> [  345.352536]  [<ffffffff810e7f87>] ? sched_clock_local+0x17/0x80
> [  345.352536]  [<ffffffff811f71bd>] ? mempool_alloc_slab+0x1d/0x30
> [  345.352536]  [<ffffffff81267ae1>] __slab_alloc+0x51/0x90
> [  345.352536]  [<ffffffff811f71bd>] ? mempool_alloc_slab+0x1d/0x30
> [  345.352536]  [<ffffffff81267d9b>] kmem_cache_alloc+0x27b/0x310
> [  345.352536]  [<ffffffff811f71bd>] mempool_alloc_slab+0x1d/0x30
> [  345.352536]  [<ffffffff811f6f11>] mempool_alloc+0x91/0x230
> [  345.352536]  [<ffffffff8141a02d>] bio_alloc_bioset+0xbd/0x260
> [  345.352536]  [<ffffffffc02f1a54>] kcryptd_crypt+0x114/0x3b0 [dm_crypt]

