mca_cannibalize() is used to cannibalize a btree node cache in
mca_alloc() when,
- There is no available node from c->btree_cache_freeable list.
- There is no available node from c->btree_cache_freed list.
- mca_bucket_alloc() fails to allocate a new in-memory node either.
Then mca_cannibalize() will try to shrink one node from the
c->btree_cache list and reuse it as the new btree node, in such a
cannibalized way.

Now with the patch "bcache: limit bcache btree node cache memory
consumption by I/O throttle", the in-memory btree nodes can already be
shrunk proactively from the c->btree_cache list, so most of the time
there will be enough memory for allocation. The kzalloc() in
mca_bucket_alloc() will almost always succeed, and such cannibalized
allocation is nearly useless. Considering the extra complication in
mca_cannibalize_lock(), it is time to remove the unnecessary
mca_cannibalize() from the bcache code.

NOTE: mca_cannibalize_lock() and mca_cannibalize_unlock() are still
kept in the bcache code, as they are still referenced by other btree
related code.

Signed-off-by: Coly Li <colyli@xxxxxxx>
---
 drivers/md/bcache/btree.c | 26 --------------------------
 1 file changed, 26 deletions(-)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index ada17113482f..48a097037da8 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -962,28 +962,6 @@ static int mca_cannibalize_lock(struct cache_set *c, struct btree_op *op)
 	return 0;
 }
 
-static struct btree *mca_cannibalize(struct cache_set *c, struct btree_op *op,
-				     struct bkey *k)
-{
-	struct btree *b;
-
-	trace_bcache_btree_cache_cannibalize(c);
-
-	if (mca_cannibalize_lock(c, op))
-		return ERR_PTR(-EINTR);
-
-	list_for_each_entry_reverse(b, &c->btree_cache, list)
-		if (!mca_reap(b, btree_order(k), false))
-			return b;
-
-	list_for_each_entry_reverse(b, &c->btree_cache, list)
-		if (!mca_reap(b, btree_order(k), true))
-			return b;
-
-	WARN(1, "btree cache cannibalize failed\n");
-	return ERR_PTR(-ENOMEM);
-}
-
 /*
  * We can only have one thread cannibalizing other cached btree nodes at a time,
  * or we'll deadlock. We use an open coded mutex to ensure that, which a
@@ -1072,10 +1050,6 @@ static struct btree *mca_alloc(struct cache_set *c, struct btree_op *op,
 	if (b)
 		rw_unlock(true, b);
 
-	b = mca_cannibalize(c, op, k);
-	if (!IS_ERR(b))
-		goto out;
-
 	return b;
 }
 
-- 
2.16.4
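
For reviewers: after this patch, the allocation order in mca_alloc()
reduces to the three steps listed in the commit message. The toy
userspace program below is only a sketch of that fallback order; every
name in it (toy_node, freeable_list, freed_list, toy_alloc_node, pop)
is a hypothetical stand-in, not a real bcache symbol. The actual code
is in drivers/md/bcache/btree.c.

#include <stdio.h>
#include <stdlib.h>

/*
 * Toy model of the post-patch mca_alloc() fallback order.
 * All names here are hypothetical stand-ins for the real bcache
 * structures and lists.
 */
struct toy_node {
	struct toy_node *next;
};

/* Stand-ins for c->btree_cache_freeable and c->btree_cache_freed. */
static struct toy_node *freeable_list;
static struct toy_node *freed_list;

/* Detach and return the first node of a singly linked list. */
static struct toy_node *pop(struct toy_node **list)
{
	struct toy_node *n = *list;

	if (n)
		*list = n->next;
	return n;
}

static struct toy_node *toy_alloc_node(void)
{
	struct toy_node *n;

	/* 1) Take a node from the freeable list, if any. */
	n = pop(&freeable_list);
	if (n)
		return n;

	/* 2) Otherwise take a node from the freed list. */
	n = pop(&freed_list);
	if (n)
		return n;

	/*
	 * 3) Otherwise allocate a fresh node (kzalloc() in the kernel).
	 * With proactive shrinking of c->btree_cache this almost always
	 * succeeds, which is why the old step 4, mca_cannibalize(),
	 * is removed by this patch.
	 */
	return calloc(1, sizeof(*n));
}

int main(void)
{
	struct toy_node *n = toy_alloc_node();

	printf("allocated node at %p\n", (void *)n);
	free(n);
	return 0;
}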