Is there any way, using stored procedures (maybe C code that calls SPI directly) or some other approach, to get close to the expected 35 MB/s doing these bulk reads? Or is this the price we have to pay for using SQL instead of some NoSQL solution? (We actually tried Tokyo Cabinet and found it to perform quite well. However, it does not measure up to Postgres in terms of replication, data interrogation, community support, acceptance, etc.)
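For concreteness, a server-side SPI scan we were imagining would look roughly like this. This is only a sketch; the table ("samples") and its bigint column ("val") are made-up names, and the function would be registered with CREATE FUNCTION sum_val_spi() RETURNS bigint AS 'MODULE_PATHNAME' LANGUAGE C STRICT;

    #include "postgres.h"
    #include "executor/spi.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(sum_val_spi);

    Datum
    sum_val_spi(PG_FUNCTION_ARGS)
    {
        int64   total = 0;
        uint64  i;

        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        /* read_only = true; hypothetical table/column names */
        if (SPI_execute("SELECT val FROM samples", true, 0) != SPI_OK_SELECT)
            elog(ERROR, "SPI_execute failed");

        /* Walk the materialized result set entirely inside the backend */
        for (i = 0; i < SPI_processed; i++)
        {
            bool  isnull;
            Datum d = SPI_getbinval(SPI_tuptable->vals[i],
                                    SPI_tuptable->tupdesc, 1, &isnull);

            if (!isnull)
                total += DatumGetInt64(d);
        }

        SPI_finish();
        PG_RETURN_INT64(total);
    }

One caveat with this sketch: SPI_execute materializes the whole result set into SPI_tuptable, so for really large scans you would presumably want SPI_prepare plus SPI_cursor_open/SPI_cursor_fetch to process rows in batches instead.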
Reading from the tables is very fast; what bites you is that Postgres has to convert the data to wire format and send it to the client, and the client then has to decode it and convert it to a format usable by your application. Writing a custom aggregate in C should be a lot faster, since it has direct access to the data itself. The code path from actual table data to an aggregate is much shorter than from table data to the client... Something along the lines of the sketch below.
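A minimal sketch of such an aggregate, assuming a bigint column; the names (mysum, mysum_trans) are made up for illustration:

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(mysum_trans);

    /*
     * Transition function: called once per input row, entirely inside
     * the backend, so no row is ever converted to wire format.
     */
    Datum
    mysum_trans(PG_FUNCTION_ARGS)
    {
        int64 state = PG_GETARG_INT64(0);
        int64 value = PG_GETARG_INT64(1);

        PG_RETURN_INT64(state + value);
    }

Registered with:

    CREATE FUNCTION mysum_trans(bigint, bigint) RETURNS bigint
      AS 'MODULE_PATHNAME' LANGUAGE C STRICT;

    CREATE AGGREGATE mysum(bigint) (
      SFUNC    = mysum_trans,
      STYPE    = bigint,
      INITCOND = '0'
    );

Then SELECT mysum(val) FROM samples sends only the final eight-byte result over the wire instead of every row (STRICT makes the executor skip null inputs for you).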