Currently, block device pages don't provide a ->migratepage callback and
thus fallback_migrate_page() is used for them. This handler cannot deal
with dirty pages in async mode, nor with the case where a buffer head
sits in the LRU buffer head cache (and thus has an elevated b_count), so
such a page can block memory offlining. Fix the problem by using
buffer_migrate_page_norefs() for migrating block device pages. That
function takes care of dropping the bh LRU in case migration would fail
due to an elevated buffer refcount, avoiding stalls, and it can also
migrate dirty pages without writing them out.

Signed-off-by: Jan Kara <jack@xxxxxxx>
---
 fs/block_dev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index a80b4f0ee7c4..de2135178e62 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1966,6 +1966,7 @@ static const struct address_space_operations def_blk_aops = {
 	.writepages	= blkdev_writepages,
 	.releasepage	= blkdev_releasepage,
 	.direct_IO	= blkdev_direct_IO,
+	.migratepage	= buffer_migrate_page_norefs,
 	.is_dirty_writeback = buffer_check_dirty_writeback,
 };
--
2.16.4
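
For readers not familiar with the migration helpers, the following is a
heavily simplified, conceptual sketch of the behaviour the changelog
describes; it is not the actual buffer_migrate_page_norefs() from
mm/migrate.c (the real helper checks refcounts under the appropriate
locks and rechecks after invalidating the bh LRUs instead of returning
immediately). The function name sketch_migrate_bdev_page() is made up
for illustration.

#include <linux/fs.h>
#include <linux/buffer_head.h>
#include <linux/migrate.h>

static int sketch_migrate_bdev_page(struct address_space *mapping,
				    struct page *newpage, struct page *page,
				    enum migrate_mode mode)
{
	struct buffer_head *bh, *head;

	/* Pages without buffers can be moved by the generic helper. */
	if (!page_has_buffers(page))
		return migrate_page(mapping, newpage, page, mode);

	bh = head = page_buffers(page);
	do {
		/*
		 * An elevated b_count typically means the per-CPU bh LRU
		 * still holds a reference, so migrating now would be
		 * unsafe.  Drop the bh LRUs so that a retry can succeed
		 * rather than the page blocking memory offlining forever.
		 */
		if (atomic_read(&bh->b_count)) {
			invalidate_bh_lrus();
			return -EAGAIN;
		}
		bh = bh->b_this_page;
	} while (bh != head);

	/*
	 * No stray references: move the buffers and page contents.  Unlike
	 * fallback_migrate_page(), this path migrates dirty pages without
	 * writing them back first.
	 */
	return buffer_migrate_page(mapping, newpage, page, mode);
}

Returning -EAGAIN after dropping the bh LRUs simply defers the page to a
later migration pass in this sketch; the real implementation retries the
refcount check in place, but the effect the changelog relies on is the
same: an LRU-held buffer reference no longer pins the page indefinitely.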