Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Improving large folio writeback performance

Hello!

On Tue 14-01-25 16:50:53, Joanne Koong wrote:
> I would like to propose a discussion topic about improving large folio
> writeback performance. As more filesystems adopt large folios, it
> becomes increasingly important that writeback is made to be as
> performant as possible. There are two areas I'd like to discuss:
> 
> == Granularity of dirty pages writeback ==
> Currently, the granularity of writeback is at the folio level. If one
> byte in a folio is dirty, the entire folio will be written back. This
> becomes unscalable for larger folios and significantly degrades
> performance, especially for workloads that employ random writes.
> 
> One idea is to track dirty pages at a smaller granularity using a
> 64-bit bitmap stored inside the folio struct, where each bit tracks a
> smaller chunk of the folio (e.g. for 2MB folios, each bit would track
> a 32KB chunk), and only write back dirty chunks rather than the entire
> folio.

Yes, this is a known problem and, as Dave pointed out, it is currently up to
the lower layer to handle finer-grained dirtiness tracking. You can take
inspiration from the iomap layer, which already does this, or you can convert
your filesystem to use iomap (the preferred way).
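
To make this a bit more concrete, below is a toy userspace sketch of the kind
of per-chunk dirty bitmap being proposed (conceptually close to what iomap
keeps in its per-folio state, but all names and the layout here are made up
for illustration; this is not kernel code): a 64-bit bitmap over a 2MB folio
where each bit covers one 32KB chunk, so writeback can walk only the dirty
chunks instead of writing the whole folio.

/*
 * Toy model of per-chunk dirty tracking for a 2MB folio: a 64-bit
 * bitmap where each bit covers one 32KB chunk.  All names are
 * invented for illustration; this is not kernel code.
 */
#include <stdint.h>
#include <stdio.h>

#define FOLIO_SIZE	(2UL << 20)		/* 2MB folio */
#define CHUNK_SIZE	(FOLIO_SIZE / 64)	/* 32KB per bitmap bit */

struct toy_folio_state {
	uint64_t dirty_bitmap;			/* bit N set == chunk N dirty */
};

/* Mark the chunks covering [off, off + len) dirty. */
static void toy_mark_dirty(struct toy_folio_state *fs, size_t off, size_t len)
{
	size_t first = off / CHUNK_SIZE;
	size_t last = (off + len - 1) / CHUNK_SIZE;

	for (size_t c = first; c <= last; c++)
		fs->dirty_bitmap |= 1ULL << c;
}

/* Writeback walks only the dirty chunks instead of the whole folio. */
static void toy_writeback(struct toy_folio_state *fs)
{
	for (size_t c = 0; c < 64; c++) {
		if (fs->dirty_bitmap & (1ULL << c))
			printf("write back chunk %zu: bytes [%zu, %zu)\n",
			       c, (size_t)(c * CHUNK_SIZE),
			       (size_t)((c + 1) * CHUNK_SIZE));
	}
	fs->dirty_bitmap = 0;
}

int main(void)
{
	struct toy_folio_state fs = { 0 };

	/* A 1-byte random write dirties a single 32KB chunk, not 2MB. */
	toy_mark_dirty(&fs, 1234567, 1);
	toy_writeback(&fs);
	return 0;
}

iomap keeps something analogous in its per-folio state, which is why
converting a filesystem to iomap gets this behaviour essentially for free.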

> == Balancing dirty pages ==
> It was observed that the dirty page balancing logic used in
> balance_dirty_pages() fails to scale for large folios [1]. For
> example, fuse saw around a 125% drop in throughput for writes when
> using large folios vs small folios on 1MB block sizes, which was
> attributed to scheduled io waits in the dirty page balancing logic. In
> generic_perform_write(), dirty pages are balanced after every write to
> the page cache by the filesystem. With large folios, each write
> dirties a larger number of pages which can grossly exceed the
> ratelimit, whereas with small folios each write is one page, so
> pages are balanced more incrementally and adhere more closely to the
> ratelimit. In order to accommodate large folios, the dirty page
> balancing logic likely needs to be reworked.

I think there are several separate issues here. One is that
folio_account_dirtied() will consider the whole folio as needing writeback,
which is not necessarily the case (e.g. iomap will write back only the dirty
blocks in it). This was OK-ish when pages were 4k and you were using 1k
blocks (which was an uncommon configuration anyway; usually you had 4k block
size), but it starts to hurt a lot with 2M folios, so we might need to find a
way to propagate the information about the really dirty bits into the
writeback accounting.
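
Just to put rough numbers on that mismatch, here is a toy sketch (the helpers
are invented; this is not the real accounting code) comparing the bytes
effectively charged when we account the whole folio versus only the blocks
that actually became dirty:

/*
 * Toy illustration of the accounting gap: charging the whole folio as
 * dirty vs. charging only the blocks that are actually dirty.  The
 * helpers are made up; this is not the kernel's accounting code.
 */
#include <stdio.h>

/* Bytes accounted when the whole folio is treated as dirty. */
static unsigned long charged_whole_folio(unsigned long folio_size)
{
	return folio_size;
}

/* Bytes accounted when only the dirty blocks are charged. */
static unsigned long charged_dirty_blocks(unsigned long nr_dirty_blocks,
					  unsigned long block_size)
{
	return nr_dirty_blocks * block_size;
}

int main(void)
{
	/* Old case: 4KB page, 1KB blocks, one dirty block: at most 4x off. */
	printf("4KB page:  charged %lu bytes, actually dirty %lu bytes\n",
	       charged_whole_folio(4096), charged_dirty_blocks(1, 1024));
	/* Large folio: 2MB folio, 4KB blocks, one dirty block: 512x off. */
	printf("2MB folio: charged %lu bytes, actually dirty %lu bytes\n",
	       charged_whole_folio(2UL << 20), charged_dirty_blocks(1, 4096));
	return 0;
}

With 4k pages and 1k blocks the overstatement was at most 4x; with a 2M folio
and 4k blocks it can be 512x, which is why it starts to matter now.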

Another problem *may* be that fast increments to the number of dirtied pages
(as we dirty 512 pages at once instead of the 16 we did in the past) cause an
over-reaction in the dirtiness balancing logic and we throttle the task too
much. The heuristics there try to find the right amount of time to block a
task so that the dirtying speed matches the writeback speed, and it's
plausible that the large increments make this logic oscillate between two
extremes, leading to suboptimal throughput. Also, since this was observed
with FUSE, I believe a significant factor is that FUSE enables the
"strictlimit" feature of the BDI, which makes dirty throttling more
aggressive (generally the amount of allowed dirty pages is lower). Anyway,
these are mostly speculations on my end. This needs more data to decide what
exactly (if anything) needs tweaking in the dirty throttling logic.
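
As a toy model of the overshoot (the threshold and the pause behaviour below
are invented and bear no relation to the real balance_dirty_pages()
heuristics; it only illustrates how much chunkier the increments get): a task
dirtying one page per write hits the ratelimit check in small, even steps,
while a task dirtying 512 pages per write blows far past the limit before it
is ever throttled, so the estimator sees much coarser and noisier input.

/*
 * Toy model of per-task dirty ratelimiting: after each write the task's
 * "pages dirtied since the last pause" counter is compared against a
 * ratelimit.  The threshold and the pause behaviour are invented for
 * illustration only.
 */
#include <stdio.h>

#define RATELIMIT	32UL	/* pretend: pause after ~32 dirtied pages */

static void simulate(const char *label, unsigned long pages_per_write,
		     unsigned long total_pages)
{
	unsigned long dirtied = 0, pauses = 0, worst_overshoot = 0;

	for (unsigned long done = 0; done < total_pages; done += pages_per_write) {
		dirtied += pages_per_write;
		if (dirtied >= RATELIMIT) {	/* the balancing point */
			if (dirtied - RATELIMIT > worst_overshoot)
				worst_overshoot = dirtied - RATELIMIT;
			pauses++;
			dirtied = 0;
		}
	}
	printf("%s: %lu pauses, worst overshoot %lu pages past the limit\n",
	       label, pauses, worst_overshoot);
}

int main(void)
{
	unsigned long total = 1UL << 18;	/* 1GB worth of 4KB pages */

	simulate("4KB folios (1 page/write)   ", 1, total);
	simulate("2MB folios (512 pages/write)", 512, total);
	return 0;
}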

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR



