On Fri, Nov 09, 2018 at 07:06:10AM -0800, Christoph Hellwig wrote:
> On Tue, Nov 06, 2018 at 09:23:11AM -0500, Brian Foster wrote:
> > My understanding is that these discards can stack up and take enough
> > time that a limit on outstanding discards is required, which now that
> > I think of it makes me somewhat skeptical of the whole serial
> > execution thing. Hitting that outstanding discard request limit is
> > what bubbles up the stack and affects XFS by holding up log forces,
> > since new discard submissions are presumably blocked on completion of
> > the oldest outstanding request.
>
> We don't do strict ordering of requests, but eventually requests
> waiting for completion will block others from being submitted.
>

Ok, that's kind of what I expected.

> > I'm not quite sure what happens in the block layer if that limit were
> > lifted. Perhaps it assumes throttling responsibility directly via
> > queues/plugs? I'd guess that at minimum we'd end up blocking
> > indirectly somewhere (via memory allocation pressure?) anyways, so
> > ISTM that some kind of throttling is inevitable in this situation.
> > What am I missing?
>
> We'll still block new allocations waiting for these blocks and
> other bits. Or to put it another way - if your discard implementation
> is slow (independent of synchronous or not) you are going to be in
> a world of pain with online discard. That is why it's not the default
> to start with.

Sure, but it's not really the XFS bits I was asking about here. This is
certainly not a high priority and not a common use case. We're working
through some of the other issues in the other sub-thread. In particular,
I'm wondering if we can provide broader improvements to the overall
mechanism to reduce some of that pain.

Brian