Postgres sees significant contention on the hashed folio waitqueue lock
when performing direct I/O to 1GB hugetlb pages.  This is because we
mark the destination pages as dirty, and the locks end up 512x more
contended with 1GB pages than with 2MB pages.

We can skip the locking if the folio is already marked as dirty.  The
writeback path clears the dirty flag before commencing writeback; if we
see the dirty flag set, the data written to the folio will still be
written back.  In one test, throughput increased from 18GB/s to 20GB/s
and moved the bottleneck elsewhere.

Reported-by: Andres Freund <andres@xxxxxxxxxxx>
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 block/bio.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index f0c416e5931d..e8d18a0fecb5 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1404,6 +1404,8 @@ void bio_set_pages_dirty(struct bio *bio)
 	struct folio_iter fi;
 
 	bio_for_each_folio_all(fi, bio) {
+		if (folio_test_dirty(fi.folio))
+			continue;
 		folio_lock(fi.folio);
 		folio_mark_dirty(fi.folio);
 		folio_unlock(fi.folio);
-- 
2.45.2