On Wed, 2019-04-24 at 09:09 -0700, Bart Van Assche wrote:
> On Wed, 2019-04-24 at 08:49 -0700, James Bottomley wrote:
> > On Wed, 2019-04-24 at 08:32 -0700, Bart Van Assche wrote:
> > > Another concern is whether this change can cause a livelock. If
> > > the system is running out of memory and the page cache submits a
> > > write request with a scatterlist with more than two elements, if
> > > the kmalloc() for the scatterlist fails, will that prevent the
> > > page cache from making any progress with writeback?
> > 
> > It's pool backed, as I said. Is the concern there isn't enough
> > depth in the pools for a large write?
> 
> That memory pool is used by multiple drivers. Most but not all
> sg_alloc_table_chained() calls happen from inside .queue_rq()
> implementations. One sg_alloc_table_chained() call occurs in the NFS
> server code. I'm not sure whether it is guaranteed that an
> sg_alloc_table_chained() call will eventually succeed under low
> memory conditions. Additionally, new sg_alloc_table_chained() calls
> could be added to drivers at any time.

The number of users is irrelevant. All we need is sequential forward
progress to guarantee freedom from memory-allocation livelock. Even if
writers make progress one at a time (and since the current pool depth
appears to be two, they make progress at least two at a time), each
completing write releases memory and reclaim moves forward. The
required guarantee is that at least one write can always be sent or
kept outstanding, and that this write will eventually complete,
returning its allocation to the pool so another write can proceed.

James
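
P.S. In case it helps, here is a minimal sketch of why a mempool-backed
allocation cannot fail permanently. This is not the actual lib/sg_pool.c
code (which keeps several pools of different chunk orders); the names
sg_chunk_get(), sg_chunk_put() and the reserve depth below are
illustrative only:

#include <linux/mempool.h>
#include <linux/scatterlist.h>

/* Illustrative reserve depth; lib/sg_pool.c reserves 2 per pool. */
#define SG_RESERVE_DEPTH	2

static mempool_t *sg_pool;

static int sg_pool_sketch_init(void)
{
	/* Backed by kmalloc, with SG_RESERVE_DEPTH elements held in
	 * reserve for use when kmalloc() itself fails. */
	sg_pool = mempool_create_kmalloc_pool(SG_RESERVE_DEPTH,
			SG_CHUNK_SIZE * sizeof(struct scatterlist));
	return sg_pool ? 0 : -ENOMEM;
}

static struct scatterlist *sg_chunk_get(gfp_t gfp)
{
	/* With __GFP_DIRECT_RECLAIM in @gfp (e.g. GFP_KERNEL) this
	 * never returns NULL: if kmalloc() fails and the reserve is
	 * empty, it sleeps until mempool_free() returns an element,
	 * i.e. until a previously issued write completes. */
	return mempool_alloc(sg_pool, gfp);
}

static void sg_chunk_put(struct scatterlist *sgl)
{
	/* Write-completion path: refills the reserve and wakes any
	 * allocator sleeping in mempool_alloc(). */
	mempool_free(sgl, sg_pool);
}

The forward-progress argument above is exactly this pairing: the
completion path always hands the element back, so a blocked allocator
is woken as soon as one outstanding write finishes. The guarantee does
depend on the caller passing a gfp mask that allows sleeping.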