On Tue, Sep 15, 2015 at 11:53:55AM -0400, Tejun Heo wrote:
> Hello, Johannes.
>
> On Tue, Sep 15, 2015 at 09:47:24AM +0200, Johannes Weiner wrote:
> > Why can't we simply fail NOWAIT allocations when the high limit is
> > breached? We do the same for the max limit.
>
> Because that can lead to continued systematic failures of NOWAIT
> allocations. For that to work, we'll have to add async reclaiming.
>
> > As I see it, NOWAIT allocations are speculative attempts on available
> > memory. We should be able to just fail them and have somebody that is
> > allowed to reclaim try again, just like with the max limit.
>
> Yes, but the assumption is that even back-to-back NOWAIT allocations
> won't continue to fail indefinitely.

But they have always failed indefinitely once you hit the hard limit; there was never an async reclaim provision there.

I can definitely see that unconstrained breaching of the high limit needs to be fixed one way or another, I just don't quite understand why you chose to go for new semantics. Did you have a new or specific usecase in mind when you chose deferred reclaim over simply failing?