Re: [PATCH 3/5] writeback: Fix bandwidth estimate for spiky workload

On Thu, 8 Jul 2021 18:43:01 +0200 Jan Kara wrote:
>On Thu 08-07-21 20:17:51, Hillf Danton wrote:
>> On Wed, 7 Jul 2021 11:51:38 +0200 Jan Kara wrote:
>> >On Wed 07-07-21 15:40:17, Hillf Danton wrote:
>> >> On Mon,  5 Jul 2021 18:23:17 +0200 Jan Kara wrote:
>> >> >
>> >> >Michael Stapelberg has reported that for workloads with short big
>> >> >spikes of writes (the GCC linker seems to trigger this frequently) the
>> >> >write throughput is heavily underestimated and tends to steadily sink
>> >> >until it reaches zero. This has a rather bad impact on writeback
>> >> >throttling (causing stalls). The problem is that the writeback
>> >> >throughput estimate gets updated at most once per 200 ms. One update
>> >> >happens early after we submit pages for writeback (at that point
>> >> >writeout of only a small fraction of pages is completed and thus the
>> >> >observed throughput is tiny). The next update happens only during the
>> >> >next write spike (updates happen only from inode writeback and dirty
>> >> >throttling code) and if that is more than 1s after the previous spike,
>> >> >we decide the system was idle and just ignore whatever was written
>> >> >until this moment.
>> >> >
>> >> >Fix the problem by making sure the writeback throughput estimate is
>> >> >also updated shortly after writeback completes, to get a reasonable
>> >> >estimate of throughput for spiky workloads.
>> >> >
>> >> >Link: https://lore.kernel.org/lkml/20210617095309.3542373-1-stapelberg+linux@xxxxxxxxxx
>> >> >Reported-by: Michael Stapelberg <stapelberg+linux@xxxxxxxxxx>
>> >> >Signed-off-by: Jan Kara <jack@xxxxxxx>
>> >...
>> >> >diff --git a/mm/page-writeback.c b/mm/page-writeback.c
>> >> >index 1fecf8ebadb0..6a99ddca95c0 100644
>> >> >--- a/mm/page-writeback.c
>> >> >+++ b/mm/page-writeback.c
>> >> >@@ -1346,14 +1346,7 @@ static void __wb_update_bandwidth(struct dirty_throttle_control *gdtc,
>> >> > 	unsigned long dirtied;
>> >> > 	unsigned long written;
>> >> >
>> >> >-	lockdep_assert_held(&wb->list_lock);
>> >> >-
>> >> >-	/*
>> >> >-	 * rate-limit, only update once every 200ms.
>> >> >-	 */
>> >> >-	if (elapsed < BANDWIDTH_INTERVAL)
>> >> >-		return;
>> >> 
>> >> Please leave it as it is if you are not dumping the 200ms rule.
>> >
>> >Well, that could break the delayed update scheduled after the end of
>> >writeback, and for no good reason. The problematic ordering is like:
>> 
>> After another look at 2/5, you are cutting the rule, which is worth a
>> separate patch.
>
>The only updates that can break the 200ms rule are the ones added in this
>patch. I don't think separating the removal of the 200ms check for that one
>case really brings much clarity. It would rather bring "what if" questions
>to this patch...
>
>> >end writeback on inode1
>> >  queue_delayed_work() - queues delayed work after BANDWIDTH_INTERVAL
>> >
>> >__wb_update_bandwidth() called e.g. from balance_dirty_pages()
>> >  wb->bw_time_stamp = now;
>> >
>> >end writeback on inode2
>> >  queue_delayed_work() - does nothing since work is already queued
>> >
>> >delayed work calls __wb_update_bandwidth() - nothing is done since elapsed
>> >< BANDWIDTH_INTERVAL and we may thus miss reflecting writeback of inode2 in
>> >our estimates.
>> 
>> Your example says the estimate based on inode2 is torpedoed by a random
>> update, and you are looking to make that estimate meaningful at the cost
>> of breaking the rule - how different is it from the current one if the
>> estimate is derived from a 20ms-elapsed interval at inode2? Is it likely
>> to see another palpably different result at inode3 from a 50ms-elapsed
>> interval?
>
>I'm not sure I understand your question correctly but updates after an
>interval shorter than 200ms should not disturb the estimates much.
>wb_update_write_bandwidth() effectively uses the formula:
>
>	bandwidth = (written + bandwidth * (period - elapsed)) / period
>
>where 'period' is 3 seconds. So we compute the average bandwidth over the
>last 3 seconds, where the amount written during the 'elapsed' interval is
>'written' pages. If 'elapsed' is small, the influence of the current sample
>on reducing the estimated bandwidth is going to be small as well.

Correct. Without the 3s period, which in combination with the filters there
helps prevent any single sample, whether over a 20ms or a 200ms interval,
from overturning the estimated bandwidth, we would find that what is
estimated over an hour is a mile from what disk vendors claim. And over a
one-day interval I am inclined to ignore the difference between the
estimated bandwidth and the bare-metal bandwidth, on the assumption that no
disk will be found idle while dirty pages are above the background threshold.
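
For illustration, a minimal user-space sketch of the formula quoted above
(not the kernel code: PERIOD, the HZ=100 scaling and the pages-per-jiffy
units are assumptions made for the example) shows why a short 'elapsed'
sample can barely move the estimate:

	#include <stdio.h>

	#define PERIOD	300UL	/* ~3s averaging window at HZ=100 (assumed) */

	/*
	 * Time-weighted average from the mail above:
	 *	bandwidth = (written + bandwidth * (period - elapsed)) / period
	 * 'bw' is in pages per jiffy, 'written' in pages, 'elapsed' in jiffies.
	 */
	static unsigned long update_bw(unsigned long bw, unsigned long written,
				       unsigned long elapsed)
	{
		if (elapsed >= PERIOD)	/* sample spans the whole window */
			return written / elapsed;
		return (written + bw * (PERIOD - elapsed)) / PERIOD;
	}

	int main(void)
	{
		unsigned long bw = 100;	/* steady state: 100 pages/jiffy */

		/*
		 * A 20ms sample (2 jiffies) that saw only 10 pages completed
		 * keeps 298/300 of the old estimate: the result is 99, not a
		 * collapse towards the sample's 5 pages/jiffy.
		 */
		bw = update_bw(bw, 10, 2);
		printf("after 20ms sample: %lu pages/jiffy\n", bw);
		return 0;
	}

So a 20ms-elapsed update at inode2, or a 50ms one at inode3, can only nudge
the estimate; what hurts the spiky workload is samples being dropped
entirely, not samples taken over short intervals.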

Hillf



