Hi Al Viro,

I recently ran into a CMA page migration failure. The failing page has PG_private set, and its private data is a buffer_head pointer. buffer_head->b_count is non-zero, so drop_buffers() fails, which in turn makes both direct reclaim and migration of this page fail. In the end the CMA allocation fails.

The details of the buffer_head are as follows:

crash> struct buffer_head 0xffffffec9f0200d0 -x
struct buffer_head {
  b_state = 0x29,   // Uptodate, Req and Mapped flags are set
  b_this_page = 0xffffffec9f0200d0,
  b_page = 0xffffffbfb4bb0080,
  b_blocknr = 0x801b,
  b_size = 0x1000,
  b_data = 0xffffffed2ec02000 "\244\201",
  b_bdev = 0xffffffed169b2580,
  b_end_io = 0xffffff91006c44e4 <end_buffer_read_sync>,
  b_private = 0x0,
  b_assoc_buffers = {
    next = 0xffffffec9f020118,
    prev = 0xffffffec9f020118
  },
  b_assoc_map = 0x0,
  b_count = {
    counter = 0x1
  }
}

b_count is 1 only because the buffer_head sits in CPU 6's bh_lru:

crash> p bh_lrus:a | grep 0xffffffec9f0200d0 -B 1
per_cpu(bh_lrus, 6) = $7 = {
  bhs = {0xffffffed146867b8, 0xffffffec9f020548, 0xffffffed0f7e3138,
         0xffffffed0f7e30d0, 0xffffffed0f6f7340, 0xffffffed0b8c59c0,
         0xffffffeb7bdb7888, 0xffffffed0b8c5548, 0xffffffed0f7b7270,
         0xffffffed0f7b7208, 0xffffffed0f7b7138, 0xffffffec9f0200d0, // this entry
         0xffffffed0f7b7068, 0xffffffed0f7b7000, 0xffffffed0f7b7bc8,
         0xffffffec9f020068}
}

On my device, which runs a 4.19 kernel, an inactive buffer_head can stay in bh_lrus for a long time and keep the corresponding page from being migrated.

In buffer_busy(), can we detect that b_count is greater than zero only because the buffer_head is sitting in bh_lrus? If so, can we evict such inactive buffer_heads to improve the migration success rate?

Thank you very much!
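
To make the question more concrete, here is a rough, untested sketch of the kind of change I have in mind, written against 4.19. try_to_free_buffers() and invalidate_bh_lrus() already exist in fs/buffer.c; the wrapper name cma_try_to_free_buffers() and its hook point (somewhere in the migration fallback path, before giving up on the page) are only my assumptions for illustration:

#include <linux/mm.h>
#include <linux/buffer_head.h>

/*
 * Sketch only: if freeing the page's buffers fails, assume they may be
 * pinned solely by some CPU's bh_lru (clean, unlocked, b_count == 1 as
 * in the dump above), evict the per-cpu bh LRUs once and retry before
 * failing the migration.
 */
static int cma_try_to_free_buffers(struct page *page)
{
	if (try_to_free_buffers(page))
		return 1;

	/* Drop every bh that is held only by a per-cpu bh_lru. */
	invalidate_bh_lrus();

	return try_to_free_buffers(page);
}

One open question with this approach is whether it is acceptable to call invalidate_bh_lrus() from the migration path at all, since it broadcasts work to every CPU; perhaps it would need to be rate-limited, or only evict the LRU of the CPU that actually holds the buffer_head.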