Exciting! Since I still have the same performance issues on compressed
btrfs, I'm looking forward to testing the patches, probably when a 17
Beta is out and I can find binaries for my platform (openSUSE). It looks
like it will make a huge difference. Thank you for persisting and
getting this through.

Dimitris

On 12 April 2024 07:45:52 CEST, Thomas Munro <thomas.munro@xxxxxxxxx> wrote:
>On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou <jimis@xxxxxxx> wrote:
>> So would it make sense for postgres to perform reads in bigger blocks? Is it
>> easy-ish to implement (where would one look for that)? Or must the I/O unit be
>> tied to postgres' page size?
>
>FYI as of last week we can do a little bit of that on the master branch:
>
>postgres=# select count(*) from t;
>
>preadv(46, ..., 8, 256237568) = 131072
>preadv(46, ..., 5, 256368640) = 131072
>preadv(46, ..., 8, 256499712) = 131072
>preadv(46, ..., 5, 256630784) = 131072
>
>postgres=# set io_combine_limit = '256k';
>postgres=# select count(*) from t;
>
>preadv(47, ..., 5, 613728256) = 262144
>preadv(47, ..., 5, 613990400) = 262144
>preadv(47, ..., 5, 614252544) = 262144
>preadv(47, ..., 5, 614514688) = 262144
>
>Here's hoping the commits implementing this stick, for the PostgreSQL
>17 release. It's just the beginning though, we can only do this for
>full table scans so far (plus a couple of other obscure places).
>Hopefully in the coming year we'll get the "streaming I/O" mechanism
>that powers this hooked up to lots more places... index scans and
>other stuff. And writing. Then eventually pushing the I/O into the
>background. Your questions actually triggered us to talk about why we
>couldn't switch a few things around in our project and get the I/O
>combining piece done sooner. Thanks!