On Fri, Jan 19 2018 at 10:48am -0500,
Jens Axboe <axboe@xxxxxxxxx> wrote:

> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>> Where does the dm STS_RESOURCE error usually come from - what exact
> >>>> resource are we running out of?
> >>>
> >>> It is from blk_get_request(underlying queue), see
> >>> multipath_clone_and_map().
> >>
> >> That's what I thought. So for a low queue depth underlying queue, it's
> >> quite possible that this situation can happen. Two potential solutions
> >> I see:
> >>
> >> 1) As described earlier in this thread, having a mechanism for being
> >>    notified when the scarce resource becomes available. It would not
> >>    be hard to tap into the existing sbitmap wait queue for that.
> >>
> >> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>    allocation. I haven't read the dm code to know if this is a
> >>    possibility or not.

Right, #2 is _not_ the way forward.  Historically request-based DM used
its own mempool for requests; this was to have some measure of control
and resiliency in the face of low memory conditions that might be
affecting the broader system.

Then Christoph switched over to adding per-request data, which ushered
in the use of blk_get_request with ATOMIC allocations.  I like the
result of that line of development.  But taking the next step of
setting BLK_MQ_F_BLOCKING is highly unfortunate (especially in that
this dm-mpath.c code is common to the old .request_fn path and blk-mq,
at least where the call to blk_get_request is concerned).  Ultimately
dm-mpath would like to avoid blocking for a request because, for a
given dm-mpath device, we have multiple queues to allocate from if
need be (provided we have an active-active storage network topology).

> >> I'd probably prefer #1. It's a classic case of trying to get the
> >> request, and if it fails, add ourselves to the sbitmap tag wait
> >> queue head, retry, and bail if that also fails. Connecting the
> >> scarce resource and the consumer is the only way to really fix
> >> this, without bogus arbitrary delays.
> >
> > Right, as I have replied to Bart, using mod_delayed_work_on() with
> > returning BLK_STS_NO_DEV_RESOURCE (or some such name) for the scarce
> > resource should fix this issue.
>
> It'll fix the forever stall, but it won't really fix it, as we'll slow
> down the dm device by some random amount.

Agreed.

> A simple test case would be to have a null_blk device with a queue depth
> of one, and dm on top of that. Start a fio job that runs two jobs: one
> that does IO to the underlying device, and one that does IO to the dm
> device. If the job on the dm device runs substantially slower than the
> one to the underlying device, then the problem isn't really fixed.

Not sure DM will allow the underlying device to be opened (due to the
master/slave ownership that is part of loading a DM table)?

> That said, I'm fine with ensuring that we make forward progress always
> first, and then we can come up with a proper solution to the issue. The
> forward progress guarantee will be needed for the more rare failure
> cases, like allocation failures. nvme needs that too, for instance, for
> the discard range struct allocation.

Yeap, I'd be OK with that too.  We'd be better off revisiting this
afterwards, once we have some time to develop the ultimate robust fix
(#1, the callback from above).

Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
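
For concreteness, a minimal sketch of what Jens' option #1 could look
like from the dm-mpath side, assuming a hypothetical
blk_mq_tag_waiter_add() helper that hooks the dm device's queue into the
underlying queue's sbitmap tag wait queue (no such block-layer API
exists today; it only stands in for the notification mechanism discussed
above).  The blk_get_request() call and DM_MAPIO_* return values are the
ones multipath_clone_and_map() already uses; the function itself is not
real code.

/*
 * Sketch only: try to get a request, arm a (hypothetical) waiter on the
 * underlying queue's tag wait queue on failure, retry once, then bail
 * without any arbitrary delay -- the wakeup re-runs the dm queue.
 */
static int multipath_get_clone(struct multipath *m, struct request *rq,
			       struct request_queue *q,
			       struct request **__clone)
{
	struct mapped_device *md = dm_table_get_md(m->ti->table);

	*__clone = blk_get_request(q, rq->cmd_flags | REQ_NOMERGE, GFP_ATOMIC);
	if (!IS_ERR(*__clone))
		return DM_MAPIO_REMAPPED;

	/* Hypothetical hook: wake md's queue when a tag frees up on q. */
	blk_mq_tag_waiter_add(q, md);

	/* Retry: a tag may have been freed before the waiter was armed. */
	*__clone = blk_get_request(q, rq->cmd_flags | REQ_NOMERGE, GFP_ATOMIC);
	if (!IS_ERR(*__clone))
		return DM_MAPIO_REMAPPED;

	/* Bail: the armed waiter will kick the queue, no delay needed. */
	return DM_MAPIO_REQUEUE;
}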
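
The test case Jens describes might be driven along these lines.  The
device names and the use of dm-linear as the stacked target are
assumptions (a simple stand-in for "dm on top"), and whether fio may
open the claimed underlying device at all is exactly the question Mike
raises above:

# null_blk in blk-mq mode with a hardware queue depth of one
modprobe null_blk queue_mode=2 hw_queue_depth=1 nr_devices=1

# trivial dm device stacked on top of the null_blk device
dmsetup create dmtest --table "0 $(blockdev --getsz /dev/nullb0) linear /dev/nullb0 0"

# two concurrent jobs: raw device vs. dm device; if the dm job runs
# substantially slower, the problem isn't really fixed
fio --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 \
    --runtime=30 --time_based \
    --name=raw --filename=/dev/nullb0 \
    --name=dm --filename=/dev/mapper/dmtest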