Just some ideas that went through my mind when reading your post.

On Wed, Nov 3, 2010 at 17:52, Nick Matheson <Nick.D.Matheson@xxxxxxxx> wrote:
> than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is
> due to the storage overhead we have observed in Postgres. In the
> example below, it takes 1 GB to store 350 MB of nominal data.

PostgreSQL 8.3 and later have 22 bytes of overhead per row, plus page-level overhead and internal fragmentation. You can't do anything about the per-row overhead, but you can recompile the server with larger pages to reduce the page-level overhead.

> Is there any way using stored procedures (maybe C code that calls
> SPI directly) or some other approach to get close to the expected 35
> MB/s doing these bulk reads?

Perhaps a simpler alternative would be writing your own aggregate function taking four arguments. If you write this aggregate function in C, it should have performance similar to the sum() query.

Regards,
Marti
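The four-argument aggregate could be declared along these lines; `sum4` and `sum4_transfn` are hypothetical names (the transition function itself would be the part written in C), and multi-argument aggregates require PostgreSQL 8.2 or later:

```sql
-- Sketch: a four-column sum aggregate backed by a C transition function.
-- sum4_transfn (hypothetical) adds the four arguments into the array state.
CREATE AGGREGATE sum4(float8, float8, float8, float8) (
    SFUNC    = sum4_transfn,
    STYPE    = float8[],
    INITCOND = '{0,0,0,0}'
);
```

A query like `SELECT sum4(a, b, c, d) FROM t` then scans the table once and keeps all four running sums in a single per-row function call, which is where the C implementation gets its speed.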
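To see roughly where the 1 GB vs. 350 MB gap can come from, here is a back-of-the-envelope sketch. The 350 MB figure and the 22-byte per-row overhead are from the thread; the four float8 columns per row are an assumption for illustration, and the 8 KB page size, 24-byte page header, and 4-byte line pointer are PostgreSQL defaults:

```python
# Back-of-the-envelope estimate of PostgreSQL heap storage overhead.
# ASSUMPTION (not from the thread): each row holds four float8 columns.
NOMINAL_ROW = 4 * 8      # 32 bytes of user data per row
ROW_OVERHEAD = 22        # per-row overhead cited in the thread
LINE_POINTER = 4         # per-tuple item pointer in the page header
PAGE_SIZE = 8192         # default PostgreSQL block size
PAGE_HEADER = 24         # fixed page header

stored_row = NOMINAL_ROW + ROW_OVERHEAD + LINE_POINTER   # 58 bytes per tuple
rows_per_page = (PAGE_SIZE - PAGE_HEADER) // stored_row  # tuples that fit per page

n_rows = 350 * 1024 * 1024 // NOMINAL_ROW                # rows in 350 MB of nominal data
pages = -(-n_rows // rows_per_page)                      # ceiling division
heap_mb = pages * PAGE_SIZE / (1024 * 1024)
print(f"{heap_mb:.0f} MB of heap for 350 MB of nominal data")  # prints "640 MB ..."
```

Even under these optimistic assumptions the heap alone needs roughly 640 MB; the rest of the gap up to the observed 1 GB would plausibly come from alignment padding, free space within pages, and any indexes on the table.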