Re: Direct I/O performance problems with 1GB pages

Hi,

On 2025-01-27 20:20:41 +0100, David Hildenbrand wrote:
> On 27.01.25 18:25, Andres Freund wrote:
> > On 2025-01-27 15:09:23 +0100, David Hildenbrand wrote:
> > Unfortunately for the VMs with those disks I don't have access to hardware
> > performance counters :(.
> > >
> > > Maybe there is a link to the report you could share, thanks.
> > 
> > A profile of the "original" case where I hit this, without the patch that
> > Willy linked to:
> > 
> > Note this is a profile *not* using hardware perf counters, thus likely to be
> > rather skewed:
> > https://gist.github.com/anarazel/304aa6b81d05feb3f4990b467d02dabc
> > (this was on Debian Sid's 6.12.6)
> > 
> > Without the patch I achieved ~18GB/s with 1GB pages and ~35GB/s with 2MB
> > pages.
> 
> Out of interest, did you ever compare it to 4k?

I didn't. Postgres will always do at least 8kB I/Os (unless compiled with
non-default settings). But I also don't think I tested doing just 8kB on that
VM. I doubt I'd have gotten close to the maximum, even with 2MB huge pages; at
least not without block-layer-level merging of IOs.

If it's particularly interesting, I can bring a similar VM up and run that
comparison.
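
In case someone wants to reproduce independently, the comparison would be
along the lines of this standalone sketch (not Postgres code; the block size
handling, the 4k alignment and the MAP_HUGETLB path are illustrative
assumptions): sequential O_DIRECT preads at a configurable block size,
optionally from a huge-page backed buffer.

/* Build: cc -O2 -o diorate diorate.c
 * Usage: diorate FILE BLOCKSIZE [huge]
 * With "huge", the buffer comes from MAP_HUGETLB, so BLOCKSIZE has to be
 * a multiple of the default huge page size. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 3) {
		fprintf(stderr, "usage: %s FILE BLOCKSIZE [huge]\n", argv[0]);
		return 1;
	}

	size_t bs = strtoull(argv[2], NULL, 0);
	int fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) { perror("open"); return 1; }

	/* O_DIRECT needs an aligned buffer; a MAP_HUGETLB mapping makes GUP
	 * pin one large folio per I/O instead of many 4k ones. */
	void *buf;
	if (argc > 3)
		buf = mmap(NULL, bs, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	else
		buf = aligned_alloc(4096, bs);
	if (buf == MAP_FAILED || buf == NULL) { perror("alloc"); return 1; }

	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);

	ssize_t n;
	size_t total = 0;
	off_t off = 0;
	while ((n = pread(fd, buf, bs, off)) > 0) {
		total += (size_t) n;
		off += n;
	}
	if (n < 0) { perror("pread"); return 1; }

	clock_gettime(CLOCK_MONOTONIC, &t1);
	double secs = (t1.tv_sec - t0.tv_sec) +
		(t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%zu bytes in %.2fs = %.2f GB/s\n",
	       total, secs, total / secs / 1e9);
	return 0;
}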



> > This time it's actual hardware perf counters...
> > 
> > Relevant details about the c2c report, excerpted from IRC:
> > 
> > andres | willy: Looking at a bit more detail into the c2c report, it looks
> >           like the dirtying is due to folio->_pincount and folio->_refcount in
> >           about equal measure and folio->flags being modified in
> >           gup_fast_fallback(). The modifications then, unsurprisingly, cause a
> >           lot of cache misses for reads (like in bio_set_pages_dirty() and
> >           bio_check_pages_dirty()).
> > 
> >   willy | andres: that makes perfect sense, thanks
> >   willy | really, the only way to fix that is to split it up
> >   willy | and either we can split it per-cpu or per-physical-address-range
> 
> As discussed, even better is "not repeatedly pinning/unpinning" at all :)

Indeed ;)
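
FWIW, io_uring's registered buffers already provide that "pin once, reuse
many times" model: io_uring_register_buffers() pins the pages up front, and
IORING_OP_READ_FIXED reuses them without per-I/O GUP. A minimal liburing
sketch of the mechanism (the file name, sizes and the omitted error handling
are placeholders):

#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdlib.h>

int main(void)
{
	struct io_uring ring;
	io_uring_queue_init(64, &ring, 0);

	/* One registration pins the buffer's folios up front... */
	struct iovec iov = {
		.iov_base = aligned_alloc(4096, 1 << 20),
		.iov_len  = 1 << 20,
	};
	io_uring_register_buffers(&ring, &iov, 1);

	int fd = open("datafile", O_RDONLY | O_DIRECT);

	/* ...and every fixed read reuses the pinned pages, with no per-I/O
	 * _refcount/_pincount traffic on the folio. */
	for (off_t off = 0; off < 100 << 20; off += 1 << 20) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
		io_uring_prep_read_fixed(sqe, fd, iov.iov_base, iov.iov_len,
					 off, 0 /* registered buffer index */);
		io_uring_submit(&ring);

		struct io_uring_cqe *cqe;
		io_uring_wait_cqe(&ring, &cqe);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_unregister_buffers(&ring);
	io_uring_queue_exit(&ring);
	return 0;
}

The downside is that registered buffers stay pinned for the lifetime of the
registration (and, depending on kernel version, count against
RLIMIT_MEMLOCK) and are tied to a ring, so they have to be sized and managed
up front.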


> I'm curious, are multiple processes involved, or is this all within a single
> process?

In the test case here multiple processes are involved: I was testing a
parallel sequential scan with a high limit on the parallelism.

There are cases in which a fair bit of read IO is done from a single process
(e.g. pre-warming the buffer pool after a restart, which is currently done by
a single process), but it's more common for high throughput to happen across
multiple processes. With modern drives a single task won't be able to execute
non-trivial queries at full disk speed.

Greetings,

Andres Freund



