This is a note to let you know that I've just added the patch titled

    dm cache: fix access beyond end of origin device

to the 3.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    dm-cache-fix-access-beyond-end-of-origin-device.patch
and it can be found in the queue-3.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From e893fba90c09f9b57fb97daae204ea9cc2c52fa5 Mon Sep 17 00:00:00 2001
From: Heinz Mauelshagen <heinzm@xxxxxxxxxx>
Date: Wed, 12 Mar 2014 16:13:39 +0100
Subject: dm cache: fix access beyond end of origin device

From: Heinz Mauelshagen <heinzm@xxxxxxxxxx>

commit e893fba90c09f9b57fb97daae204ea9cc2c52fa5 upstream.

In order to avoid wasting cache space a partial block at the end of the
origin device is not cached.  Unfortunately, the check for such a
partial block at the end of the origin device was flawed.

Fix accesses beyond the end of the origin device that occurred due to
attempted promotion of an undetected partial block by:

 - initializing the per bio data struct to allow cache_end_io to work
   properly
 - recognizing access to the partial block at the end of the origin
   device
 - avoiding out of bounds access to the discard bitset

Otherwise, users can experience errors like the following:

 attempt to access beyond end of device
 dm-5: rw=0, want=20971520, limit=20971456
 ...
 device-mapper: cache: promotion failed; couldn't copy block

Signed-off-by: Heinz Mauelshagen <heinzm@xxxxxxxxxx>
Acked-by: Joe Thornber <ejt@xxxxxxxxxx>
Signed-off-by: Mike Snitzer <snitzer@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/md/dm-cache-target.c |    8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -2175,20 +2175,18 @@ static int cache_map(struct dm_target *t
 	bool discarded_block;
 	struct dm_bio_prison_cell *cell;
 	struct policy_result lookup_result;
-	struct per_bio_data *pb;
+	struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size);
 
-	if (from_oblock(block) > from_oblock(cache->origin_blocks)) {
+	if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) {
 		/*
 		 * This can only occur if the io goes to a partial block at
 		 * the end of the origin device.  We don't cache these.
 		 * Just remap to the origin and carry on.
 		 */
-		remap_to_origin_clear_discard(cache, bio, block);
+		remap_to_origin(cache, bio);
 		return DM_MAPIO_REMAPPED;
 	}
 
-	pb = init_per_bio_data(bio, pb_data_size);
-
 	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) {
 		defer_bio(cache, bio);
 		return DM_MAPIO_SUBMITTED;


Patches currently in stable-queue which might be from heinzm@xxxxxxxxxx are

queue-3.10/dm-cache-fix-access-beyond-end-of-origin-device.patch
queue-3.10/dm-cache-fix-truncation-bug-when-copying-a-block-to-from-2tb-fast-device.patch

--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
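
For readers working through the hunk above: cache->origin_blocks counts only the
whole cache blocks on the origin device, so with zero-based block numbering the
trailing partial block (present when the device size is not a multiple of the
block size) gets an index equal to origin_blocks, which the old
"block > origin_blocks" comparison let through. The user-space sketch below
illustrates that boundary condition; it is not kernel code, and the block size,
device size, and bio sector are made-up illustrative values rather than anything
taken from the patch.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint64_t sectors_per_block = 128;  /* hypothetical cache block size (sectors) */
	const uint64_t origin_sectors    = 1000; /* hypothetical origin size, not block aligned */

	/* Whole blocks only; the trailing partial block is not counted: 1000/128 = 7. */
	const uint64_t origin_blocks = origin_sectors / sectors_per_block;

	/* A bio landing in the partial region (sectors 896..999) maps to block 7 == origin_blocks. */
	const uint64_t bio_sector = 950;
	const uint64_t block = bio_sector / sectors_per_block;

	printf("old check (block >  origin_blocks): %s\n",
	       block > origin_blocks ? "remapped to origin" : "missed - treated as cacheable");
	printf("new check (block >= origin_blocks): %s\n",
	       block >= origin_blocks ? "remapped to origin" : "missed - treated as cacheable");
	return 0;
}

With these numbers the old test misses the partial block, so a promotion could
copy a full 128-sector block ending at sector 1024 of a 1000-sector device, the
same kind of "access beyond end of device" error quoted in the commit message.
The new test remaps such bios straight to the origin, and calling
remap_to_origin() instead of remap_to_origin_clear_discard() also avoids
indexing the discard bitset past its last valid entry.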