On Wed, 23 Apr 2014 12:40:58 +1000 NeilBrown <neilb@xxxxxxx> wrote:

> When a loop-back NFS mount is active and the backing device for the
> NFS mount becomes congested, that can impose throttling delays on the
> nfsd threads.
>
> These delays significantly reduce throughput and so the NFS mount
> remains congested.
>
> This results in a livelock, and the reduced throughput persists.
>
> This livelock has been found in testing with the 'wait_iff_congested'
> call, and could possibly be caused by the 'congestion_wait' call.
>
> This livelock is similar to the deadlock which justified the
> introduction of PF_LESS_THROTTLE, and the same flag can be used to
> remove this livelock.
>
> To minimise the impact of the change, we still throttle nfsd when the
> filesystem it is writing to is congested, but not when some separate
> filesystem (e.g. the NFS filesystem) is congested.
>
> Signed-off-by: NeilBrown <neilb@xxxxxxx>
> ---
>  mm/vmscan.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a9c74b409681..e011a646de95 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1424,6 +1424,18 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
>  	list_splice(&pages_to_free, page_list);
>  }
>
> +/* If a kernel thread (such as nfsd for loop-back mounts) services

Please use the usual multi-line comment style here:

/*
 * If ...

> + * a backing device by writing to the page cache it sets PF_LESS_THROTTLE.
> + * In that case we should only throttle if the backing device it is
> + * writing to is congested.  In other cases it is safe to throttle.
> + */
> +static int current_may_throttle(void)
> +{
> +	return !(current->flags & PF_LESS_THROTTLE) ||
> +		current->backing_dev_info == NULL ||
> +		bdi_write_congested(current->backing_dev_info);
> +}
> +
>  /*
>   * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
>   * of reclaimed pages
> @@ -1552,7 +1564,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>  		 * implies that pages are cycling through the LRU faster than
>  		 * they are written so also forcibly stall.
>  		 */
> -		if (nr_unqueued_dirty == nr_taken || nr_immediate)
> +		if ((nr_unqueued_dirty == nr_taken || nr_immediate)
> +		    && current_may_throttle())

foo &&
	bar

please (operator at the end of the line), as you did in current_may_throttle().

>  			congestion_wait(BLK_RW_ASYNC, HZ/10);
>  	}
>
> @@ -1561,7 +1574,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>  	 * is congested. Allow kswapd to continue until it starts encountering
>  	 * unqueued dirty pages or cycling through the LRU too quickly.
>  	 */
> -	if (!sc->hibernation_mode && !current_is_kswapd())
> +	if (!sc->hibernation_mode && !current_is_kswapd()
> +	    && current_may_throttle())

ditto

>  		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
>
>  	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
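
For reference, a sketch of how the new helper and the two call sites might look once the style comments above are addressed (illustrative only, not part of the posted patch):

	/*
	 * If a kernel thread (such as nfsd for loop-back mounts) services
	 * a backing device by writing to the page cache it sets
	 * PF_LESS_THROTTLE.  In that case we should only throttle if the
	 * backing device it is writing to is congested.  In other cases it
	 * is safe to throttle.
	 */
	static int current_may_throttle(void)
	{
		return !(current->flags & PF_LESS_THROTTLE) ||
			current->backing_dev_info == NULL ||
			bdi_write_congested(current->backing_dev_info);
	}

	...
		/* Stall only if this task is allowed to be throttled here. */
		if ((nr_unqueued_dirty == nr_taken || nr_immediate) &&
		    current_may_throttle())
			congestion_wait(BLK_RW_ASYNC, HZ/10);

	...
	if (!sc->hibernation_mode && !current_is_kswapd() &&
	    current_may_throttle())
		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);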
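
For context on why the patch keys off PF_LESS_THROTTLE: a service thread such as nfsd marks itself with the flag around the writes it performs into the page cache on behalf of clients, so that dirty-page throttling and (with this patch) reclaim throttling treat it more leniently. A minimal, simplified sketch of that pattern follows; the real code path lives in fs/nfsd/vfs.c and is more involved, so treat the surrounding details here as illustrative rather than a copy of the nfsd source:

	/*
	 * Ask for less throttling in balance_dirty_pages() so that writes
	 * from an NFS client on localhost cannot stall nfsd behind the
	 * client's own dirty pages.
	 */
	current->flags |= PF_LESS_THROTTLE;

	host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &pos);

	/* ... write-back / stable-storage handling elided ... */

	current->flags &= ~PF_LESS_THROTTLE;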