On 1/24/25 23:48, Matthew Wilcox (Oracle) wrote:
Postgres sees significant contention on the hashed folio waitqueue lock
when performing direct I/O to 1GB hugetlb pages. This is because we
mark the destination pages as dirty, and the locks end up 512x more
contended with 1GB pages than with 2MB pages.
We can skip the locking if the folio is already marked as dirty.
The writeback path clears the dirty flag before commencing writeback,
so if we see the dirty flag set, the data written to the folio will
still be written back.
In one test, throughput increased from 18GB/s to 20GB/s, which moved
the bottleneck elsewhere.
Reported-by: Andres Freund <andres@xxxxxxxxxxx>
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
block/bio.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/bio.c b/block/bio.c
index f0c416e5931d..e8d18a0fecb5 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1404,6 +1404,8 @@ void bio_set_pages_dirty(struct bio *bio)
 	struct folio_iter fi;
 
 	bio_for_each_folio_all(fi, bio) {
+		if (folio_test_dirty(fi.folio))
+			continue;
 		folio_lock(fi.folio);
 		folio_mark_dirty(fi.folio);
 		folio_unlock(fi.folio);
The same reasoning can probably be applied to __bio_release_pages().
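Roughly something like the following sketch (assuming __bio_release_pages()
still wraps folio_mark_dirty() in folio_lock()/folio_unlock() when
mark_dirty is set; the page-release part of the loop is elided here):

	bio_for_each_folio_all(fi, bio) {
		/* Skip the folio lock if the folio is already dirty. */
		if (mark_dirty && !folio_test_dirty(fi.folio)) {
			folio_lock(fi.folio);
			folio_mark_dirty(fi.folio);
			folio_unlock(fi.folio);
		}
		/* ... release the pages backing this folio as before ... */
	}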
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@xxxxxxx +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich