[Adding Andres to the cc. Sorry for leaving you off in the initial mail]

On Mon, Jan 27, 2025 at 03:09:23PM +0100, David Hildenbrand wrote:
> On 26.01.25 01:46, Matthew Wilcox wrote:
> > Postgres are experimenting with doing direct I/O to 1GB hugetlb pages.
> > Andres has gathered some performance data showing significantly worse
> > performance with 1GB pages compared to 2MB pages. I sent a patch
> > recently which improves matters [1], but problems remain.
> >
> > The primary problem we've identified is contention of folio->_refcount
> > with a strong secondary contention on folio->_pincount. This is coming
> > from the call chain:
> >
> > iov_iter_extract_pages ->
> > gup_fast_fallback ->
> > try_grab_folio_fast
> >
> > Obviously we can fix this by sharding the counts. We could do that by
> > address, since there's no observed performance problem with 2MB pages.
> > But I think we'd do better to shard by CPU. We have percpu-refcount.h
> > already, and I think it'll work.
> >
> > The key to percpu refcounts is knowing at what point you need to start
> > caring about whether the refcount has hit zero (we don't care if the
> > refcount oscillates between 1 and 2, but we very much care about when
> > we hit 0).
> >
> > I think the point at which we call percpu_ref_kill() is when we remove
> > a folio from the page cache. Before that point, the refcount is
> > guaranteed to always be positive. After that point, once the refcount
> > hits zero, we must free the folio.
> >
> > It's pretty rare to remove a hugetlb page from the page cache while
> > it's still mapped. So we don't need to worry about scalability at
> > that point.
> >
> > Any volunteers to prototype this? Andres is a delight to work with,
> > but I just don't have time to take on this project right now.
>
> Hmmm ... do we really want to make refcounting more complicated, and
> more importantly, hugetlb-refcounting more special ?! :)

No, I really don't. But I've always been mildly concerned about extra
contention on folio locks, folio refcounts, etc. I don't know if 2MB
page performance might be improved by a scheme like this, and we might
even want to cut over for sizes larger than, say, 64kB. That would be
something interesting to investigate.

> If the workload is doing a lot of single-page try_grab_folio_fast(),
> could it do so on a larger area (multiple pages at once -> single
> refcount update)?

Not really. This is memory that's being used as the buffer cache, so
every thread in your database is hammering on it and pulling in exactly
the data that it needs for the SQL query that it's processing.

> Maybe there is a link to the report you could share, thanks.

Andres shared some gists, but I don't want to send those to a mailing
list without permission. Here's the kernel part of the perf report:

    14.04%  postgres  [kernel.kallsyms]  [k] try_grab_folio_fast
            |
            --14.04%--try_grab_folio_fast
                      gup_fast_fallback
                      |
                      --13.85%--iov_iter_extract_pages
                                bio_iov_iter_get_pages
                                iomap_dio_bio_iter
                                __iomap_dio_rw
                                iomap_dio_rw
                                xfs_file_dio_read
                                xfs_file_read_iter
                                __io_read
                                io_read
                                io_issue_sqe
                                io_submit_sqes
                                __do_sys_io_uring_enter
                                do_syscall_64

Now, since postgres is using io_uring, perhaps there could be a path
which registers the memory with io_uring (doing the refcount/pincount
dance once) and then uses that pinned memory for each I/O. Maybe that
already exists; I'm not keeping up with io_uring development and I
can't seem to find any documentation on what things like
io_provide_buffers() actually do.
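
For concreteness, the shape I have in mind is liburing's registered
("fixed") buffer path: io_uring_register_buffers() plus the *_fixed
ops. A minimal userspace sketch is below; "datafile" and the sizes are
made up, error handling is omitted, and whether the fixed-buffer path
really avoids the per-I/O iov_iter_extract_pages() cost for a 1GB
hugetlb mapping is exactly the part I haven't verified:

    /*
     * Sketch only: pin the 1GB hugetlb buffer once at registration
     * time, then do O_DIRECT reads that refer to it by index.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <liburing.h>
    #include <sys/mman.h>
    #include <sys/uio.h>

    #ifndef MAP_HUGE_1GB
    #define MAP_HUGE_1GB    (30 << 26)      /* 30 = log2(1GB), 26 = MAP_HUGE_SHIFT */
    #endif

    #define BUF_SIZE        (1UL << 30)

    int main(void)
    {
            struct io_uring ring;
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            struct iovec iov;
            int fd;

            io_uring_queue_init(64, &ring, 0);

            iov.iov_base = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
                            MAP_HUGE_1GB, -1, 0);
            iov.iov_len = BUF_SIZE;

            /* The refcount/pincount dance happens once, here. */
            io_uring_register_buffers(&ring, &iov, 1);

            fd = open("datafile", O_RDONLY | O_DIRECT);

            /* Each I/O names the registered buffer by index (last arg). */
            sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read_fixed(sqe, fd, iov.iov_base, 1UL << 20, 0, 0);
            io_uring_submit(&ring);

            io_uring_wait_cqe(&ring, &cqe);
            io_uring_cqe_seen(&ring, cqe);

            io_uring_queue_exit(&ring);
            return 0;
    }

If registration does pin the pages up front, the common read path would
sidestep the refcount contention without touching folio refcounting at
all, which would make the percpu idea below less urgent.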
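
And to make the percpu refcount idea above a little more concrete,
here's roughly the lifecycle I'm describing, using the existing
percpu-refcount.h API. folio_pcref_release() and the standalone example
function are made up; wiring a percpu_ref into struct folio and the
folio_get()/folio_put() paths is the actual project. This only shows
where init, get/put and kill would sit:

    #include <linux/gfp.h>
    #include <linux/percpu-refcount.h>

    /* Hypothetical: called once the count drops to zero after kill. */
    static void folio_pcref_release(struct percpu_ref *ref)
    {
            /* free the folio here */
    }

    static int folio_pcref_example(struct percpu_ref *ref)
    {
            int err;

            /* Folio allocation: start in fast percpu mode, count == 1. */
            err = percpu_ref_init(ref, folio_pcref_release, 0, GFP_KERNEL);
            if (err)
                    return err;

            /* GUP / try_grab_folio_fast() become cheap percpu ops. */
            percpu_ref_get(ref);
            percpu_ref_put(ref);

            /*
             * Removal from the page cache: switch to atomic mode and
             * start caring about zero. The final put then calls
             * folio_pcref_release().
             */
            percpu_ref_kill(ref);
            percpu_ref_put(ref);    /* drop the page cache's reference */

            return 0;
    }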