On Thu, Dec 13, 2012 at 08:19:13PM +0000, Joe Thornber wrote:
> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 504f3d6..8e47f44 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -222,10 +222,28 @@ struct thin_c {
>
> 	struct pool *pool;
> 	struct dm_thin_device *td;
> +
> +	/*
> +	 * The cell structures are too big to put on the stack, so we have
> +	 * a couple here for use by the main mapping function.
> +	 */
> +	spinlock_t lock;
> +	struct dm_bio_prison_cell cell1, cell2;

We're also trying to cut down on locking on these code paths (high I/O
load, many cores).  Have you hit any problems while testing due to the
stack size?  The cells don't seem ridiculously big - could we perhaps
just put them on the stack for now?

If we do hit stack-size problems in real-world configurations, then we
can try to compare the locking approach with an approach that uses a
separate (local) mempool for each cell, or a mempool with double-sized
elements.

> -	if (bio_detain(tc->pool, &key, bio, &cell1))
> +	if (dm_bio_detain(tc->pool->prison, &key, bio, &tc->cell1, &cell_result)) {

This deals with the existing upstream mempool deadlock, but there are
still other functions in the file calling bio_detain() that take one
cell from a mempool and, before returning it, may require a second cell
from the same mempool, which could lead to a deadlock.  Can they be
fixed too?  (Multiple mempools, or larger mempool elements, where there
isn't such an easy on-stack fix?  In the worst case we might later end
up unable to avoid using the bio front_pad.)

Alasdair

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel