Hi

The commit fa247089de9936a46e290d4724cb5f0b845600f5 ("dm: requeue IO if mapping table not yet available") causes a regression. It can be reproduced with this script (Zdenek hit it in his testing):

# dmsetup create --notable test
# truncate -s 1MiB testdata
# losetup /dev/loop0 testdata
# dmsetup load test --table '0 2048 linear /dev/loop0 0'
# dd if=/dev/zero of=/dev/dm-0 bs=16k count=1 conv=fdatasync

When you run the script, a workqueue process starts looping and consuming 100% CPU. The loop ends only when you suspend and resume the device "test".

The reason for the bug is this: dm_submit_bio sees that the map is NULL, so it offloads the bio using the function queue_io. queue_io adds the bio to md->deferred and kicks the workqueue with "queue_work(md->wq, &md->work);". dm_wq_work sees that test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags) is false, so it pops the bio from md->deferred and submits it with submit_bio_noacct. submit_bio_noacct goes back to dm_submit_bio - and we have a loop.

The commit fa247089de9936a46e290d4724cb5f0b845600f5 says that it fixes some race condition, but I don't quite understand what the race condition is. Did the race really happen in your testing? What configuration did you use when you hit it? Or do you just think that this is racy without having hit it?

Note that when you load a dm table for the first time, the nodes /dev/dm-0 and /sys/block/dm-0 are created. When you suspend and resume the device, the node /dev/mapper/test is created. When udev does its work, the nodes in /dev/disk/*/ are created. In order to hit this race condition, you must have some program that opens /dev/dm-0 early, without synchronizing with lvm or udev. Do you have such a program?

Also note that when /dev/dm-0 is created, it has size 0, so all reads and writes to it are automatically rejected by the upper layers. The only bio that gets through is an empty flush (it carries no data, so the size check does not reject it), and that is what triggers this livelock. So, from userspace, you can't do anything meaningful with the device at this point anyway.

Mikulas
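
P.S. For reference, here is a simplified sketch of the two code paths that form the loop (paraphrased from drivers/md/dm.c; simplified, and the details may differ between kernel versions):

	/* dm_submit_bio(), after commit fa247089de99: when there is no
	 * live table, the bio is deferred instead of being errored out */
	map = dm_get_live_table(md, &srcu_idx);
	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
	    unlikely(!map)) {
		if (bio->bi_opf & REQ_NOWAIT)
			bio_wouldblock_error(bio);
		else if (bio->bi_opf & REQ_RAHEAD)
			bio_io_error(bio);
		else
			queue_io(md, bio);	/* add to md->deferred, kick md->wq */
		goto out;
	}

	/* dm_wq_work(): DMF_BLOCK_IO_FOR_SUSPEND is not set, so every
	 * deferred bio is popped and immediately resubmitted ... */
	while (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
		spin_lock_irq(&md->deferred_lock);
		bio = bio_list_pop(&md->deferred);
		spin_unlock_irq(&md->deferred_lock);

		if (!bio)
			break;

		submit_bio_noacct(bio);	/* ... which re-enters dm_submit_bio,
					 * sees map == NULL again: livelock */
	}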