On 26.01.25 01:46, Matthew Wilcox wrote:
> Postgres is experimenting with doing direct I/O to 1GB hugetlb pages. Andres has gathered performance data showing significantly worse performance with 1GB pages than with 2MB pages. I recently sent a patch which improves matters [1], but problems remain.
>
> The primary problem we've identified is contention on folio->_refcount, with strong secondary contention on folio->_pincount. This is coming from the call chain:
>
>   iov_iter_extract_pages -> gup_fast_fallback -> try_grab_folio_fast
>
> Obviously we can fix this by sharding the counts. We could do that by address, since there's no observed performance problem with 2MB pages. But I think we'd do better to shard by CPU. We have percpu-refcount.h already, and I think it'll work.
>
> The key to percpu refcounts is knowing at what point you need to start caring about whether the refcount has hit zero (we don't care if the refcount oscillates between 1 and 2, but we very much care about when we hit 0). I think the point at which we call percpu_ref_kill() is when we remove a folio from the page cache. Before that point, the refcount is guaranteed to always be positive. After that point, once the refcount hits zero, we must free the folio.
>
> It's pretty rare to remove a hugetlb page from the page cache while it's still mapped, so we don't need to worry about scalability at that point.
>
> Any volunteers to prototype this? Andres is a delight to work with, but I just don't have time to take on this project right now.
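[For readers unfamiliar with the percpu-refcount idea being proposed: below is a userspace toy model, not the kernel's percpu-refcount.h and not the proposed patch. The names (pcpu_ref, ref_kill, etc.) are illustrative. While the folio is in the page cache ("percpu mode"), each CPU bumps a private counter, so gets and puts never touch a shared cacheline; on removal from the page cache, kill() folds the per-CPU deltas into one counter, after which hitting zero is detectable and triggers release.]

```c
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

static int released;   /* set when the "folio" would be freed */

struct pcpu_ref {
	long percpu[NR_CPUS];  /* per-CPU deltas, used while alive */
	long atomic_count;     /* single counter, used after kill() */
	bool dead;
};

static void ref_init(struct pcpu_ref *r)
{
	for (int i = 0; i < NR_CPUS; i++)
		r->percpu[i] = 0;
	r->atomic_count = 1;   /* initial reference, like a new folio */
	r->dead = false;
}

static void ref_get(struct pcpu_ref *r, int cpu)
{
	if (!r->dead)
		r->percpu[cpu]++;  /* fast path: no cross-CPU cacheline bouncing */
	else
		r->atomic_count++; /* slow path after kill() */
}

static void release(struct pcpu_ref *r)
{
	(void)r;
	released = 1;          /* stand-in for freeing the folio */
}

static void ref_put(struct pcpu_ref *r, int cpu)
{
	if (!r->dead)
		r->percpu[cpu]--;  /* may go negative; only the sum matters */
	else if (--r->atomic_count == 0)
		release(r);
}

/* Called when the folio leaves the page cache: collapse the per-CPU
 * deltas into the single counter and drop the initial reference.
 * From this point on, a zero count is observable. */
static void ref_kill(struct pcpu_ref *r)
{
	r->dead = true;
	for (int i = 0; i < NR_CPUS; i++) {
		r->atomic_count += r->percpu[i];
		r->percpu[i] = 0;
	}
	ref_put(r, 0);
}
```

Usage would mirror the folio lifecycle: ref_get()/ref_put() from any CPU while the folio is in the page cache, ref_kill() at removal, and the last ref_put() after that frees it. The real kernel API additionally handles concurrency and mode switching with RCU, which this sketch omits.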
Hmmm ... do we really want to make refcounting more complicated and, more importantly, hugetlb refcounting more special?! :)
If the workload is doing a lot of single-page try_grab_folio_fast() calls, could it operate on a larger area instead (multiple pages at once -> a single refcount update)?
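[To make the batching suggestion concrete: a hedged userspace sketch, with illustrative names rather than the kernel's GUP internals. Pinning a 1GB folio one 4KB page at a time means 262144 contended read-modify-writes on the same counter; covering the whole extracted range with one atomic add is a single RMW.]

```c
#include <stdatomic.h>

static _Atomic long folio_refcount = 1;

/* One refcount bump per 4KB page: 262144 contended RMWs for 1GB. */
static void grab_pages_one_by_one(long nr_pages)
{
	for (long i = 0; i < nr_pages; i++)
		atomic_fetch_add(&folio_refcount, 1);
}

/* Batched: a single RMW takes references for the whole range. */
static void grab_pages_batched(long nr_pages)
{
	atomic_fetch_add(&folio_refcount, nr_pages);
}
```

Both leave the counter in the same state; the difference is purely how many times the hot cacheline is written.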
Maybe there is a link to the report you could share, thanks.

-- 
Cheers,

David / dhildenb