On Fri, 2 Oct 2009, Gerhard Wiesinger wrote:
> Larger block sizes also reduce IOPS (I/Os per second), which might be a critical threshold on storage systems (e.g. Fibre Channel systems).
True to some extent, but don't forget that IOPS is always relative to a block size in the first place. If you're getting 200 IOPS with 8K blocks, increasing your block size to 128K will not get you 200 IOPS at the larger size; the IOPS number at the larger block size is going to drop too. And you'll pay for that lower IOPS ceiling every time you access something that would have been a single 8K read before.
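Here's a rough back-of-the-envelope model of why the IOPS figure can't hold steady as the block size grows. The 5ms access time and 100MB/s transfer rate are just made-up round numbers for illustration, not measurements from any particular device:

# Rough model: each random I/O costs a fixed seek/rotation time plus
# transfer time proportional to the block size.  Both constants below
# are illustrative assumptions.
SEEK_TIME_S = 0.005          # ~5 ms per random access (assumed)
BANDWIDTH_BPS = 100e6        # ~100 MB/s sustained transfer (assumed)

def iops(block_size_bytes):
    """Achievable random IOPS for a given block size under this model."""
    per_io_time = SEEK_TIME_S + block_size_bytes / BANDWIDTH_BPS
    return 1.0 / per_io_time

for bs in (8 * 1024, 32 * 1024, 128 * 1024):
    print(f"{bs // 1024:>4} KB blocks: ~{iops(bs):6.0f} IOPS, "
          f"~{iops(bs) * bs / 1e6:5.1f} MB/s")

The larger blocks win on raw MB/s, but the IOPS number drops, and that drop is exactly what a small-read workload ends up paying for.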
The trade-off is very application dependent. The position you're advocating, preferring larger blocks, only makes sense if your workload consists mainly of larger scans. Someone who is pulling scattered records from throughout a larger table will suffer with that same change, because they'll be reading a minimum of 128K even if all they really needed was a few bytes. That penalty ripples all the way from the disk I/O upwards through the buffer cache.
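To put a rough number on that penalty, here's a back-of-the-envelope read amplification calculation. The row size and fetch count are made-up figures, and it assumes every lookup lands on a different, uncached block:

# Read amplification for scattered single-row fetches.
# Row size and fetch count are illustrative assumptions, and each
# fetch is assumed to hit a distinct block with no cache reuse.
ROW_BYTES = 100              # size of the record actually needed (assumed)
FETCHES = 10_000             # number of scattered single-row lookups (assumed)

for block_bytes in (8 * 1024, 128 * 1024):
    read_total = FETCHES * block_bytes          # bytes pulled off disk
    useful = FETCHES * ROW_BYTES                # bytes the queries needed
    print(f"{block_bytes // 1024:>4} KB blocks: "
          f"read {read_total / 1e6:7.1f} MB for {useful / 1e6:4.1f} MB of rows "
          f"(amplification ~{read_total / useful:.0f}x)")

All of that extra data has to move through the disk, the OS cache, and the database buffer cache, crowding out blocks that might actually get reused.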
It's easy to generate a synthetic benchmark workload that models some real-world applications and see performance plunge with a larger block size. There certainly are others where a larger block would work better. Testing either way is complicated by the way RAID devices usually have their own stripe sizes to consider on top of the database block size.
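As a starting point for that sort of testing, here's a minimal sketch of a synthetic random-read benchmark. The file path, file size, and read count are placeholders, and note that the OS page cache will mask the real disk behavior unless the test file is much larger than RAM or the cache is dropped between runs:

import os
import random
import time

TEST_FILE = "/tmp/blocksize_test.dat"   # placeholder path (assumption)
FILE_SIZE = 1 * 1024 * 1024 * 1024      # 1 GB test file (assumption)
READS = 2_000                            # random reads per block size

def run(block_size):
    """Time READS random reads of block_size bytes from the test file."""
    fd = os.open(TEST_FILE, os.O_RDONLY)
    try:
        start = time.time()
        for _ in range(READS):
            offset = random.randrange(0, FILE_SIZE - block_size)
            offset -= offset % block_size   # align reads, as a database would
            os.pread(fd, block_size, offset)
        elapsed = time.time() - start
    finally:
        os.close(fd)
    print(f"{block_size // 1024:>4} KB: {READS / elapsed:7.0f} reads/s, "
          f"{READS * block_size / elapsed / 1e6:6.1f} MB/s")

if __name__ == "__main__":
    if not os.path.exists(TEST_FILE):
        # Fill the test file with real data; a sparse file would never
        # touch the disk and make the numbers meaningless.
        with open(TEST_FILE, "wb") as f:
            chunk = os.urandom(1024 * 1024)
            for _ in range(FILE_SIZE // len(chunk)):
                f.write(chunk)
    for bs in (8 * 1024, 128 * 1024):
        run(bs)

Whatever tool you use, run it against the actual array, because the RAID stripe size and controller cache will shift where the crossover point between the two block sizes lands.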
--
* Greg Smith  gsmith@xxxxxxxxxxxxx  http://www.gregsmith.com  Baltimore, MD