On 3/3/20 11:12 AM, Greg Kroah-Hartman wrote:
> On Tue, Mar 03, 2020 at 10:55:04AM -0700, Jens Axboe wrote:
>> On 3/3/20 10:42 AM, Greg Kroah-Hartman wrote:
>>> From: Coly Li <colyli@xxxxxxx>
>>>
>>> [ Upstream commit 0b96da639a4874311e9b5156405f69ef9fc3bef8 ]
>>>
>>> When running a cache set, all the bcache btree nodes of this cache set
>>> will be checked by bch_btree_check(). If the bcache btree is very
>>> large, iterating all the btree nodes will occupy too much system
>>> memory and the bcache registering process might be selected and
>>> killed by the system OOM killer. kthread_run() will fail if the
>>> current process has a pending signal, therefore the kthread creation
>>> in run_cache_set() for the gc and allocator kernel threads will very
>>> probably fail for a very large bcache btree.
>>>
>>> Indeed such an OOM is safe and the registering process will exit
>>> after the registration is done. Therefore this patch flushes pending
>>> signals during the cache set start up, specifically in
>>> bch_cache_allocator_start() and bch_gc_thread_start(), to make sure
>>> run_cache_set() won't fail for a large cached data set.
>>
>> Please drop this one, it's being reverted in mainline.
>
> Dropped from all trees now, thanks.

Thanks!

-- 
Jens Axboe
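
For reference, a minimal sketch of the approach the quoted commit message
describes (not the exact upstream diff, and since reverted in mainline;
function and field names follow the bcache code the message mentions and
may differ in detail):

#include <linux/kthread.h>
#include <linux/sched/signal.h>
#include <linux/err.h>

/*
 * Sketch only: clear any signal left pending on the registering process
 * (e.g. by the OOM killer) so the following kthread_run() does not bail
 * out with -EINTR while run_cache_set() is still starting up.
 */
int bch_gc_thread_start(struct cache_set *c)
{
	if (signal_pending(current))
		flush_signals(current);

	c->gc_thread = kthread_run(bch_gc_thread, c, "bcache_gc");
	return PTR_ERR_OR_ZERO(c->gc_thread);
}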