On 11/2/15 2:19 AM, Andrey Osenenko wrote:
> It also looks like if there were a way to create a table with just the primary key, and to add an index to it that indexes data from another table, it would work much, much faster, since there would be very little to read from disk after the index lookup. But it looks like there isn't.
That probably wouldn't help as much as you'd hope, because heap tuples in Postgres carry a minimum 24-byte header. Add in 8 bytes for the bigint and that's 32 extra bytes per row.
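To make that concrete, here's a rough sketch that measures the actual on-disk cost per row of such a bigint-only table (the table name "ids" and row count are made up for illustration):

    CREATE TABLE ids (id bigint PRIMARY KEY);
    INSERT INTO ids SELECT generate_series(1, 100000);
    -- Relation size divided by row count approximates bytes per row;
    -- expect roughly 24 (tuple header) + 8 (bigint), plus the 4-byte
    -- line pointer and page-level overhead on top of that.
    SELECT pg_relation_size('ids') / count(*) AS bytes_per_row FROM ids;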
I think what might gain you more is if you moved to 9.2 and got index-only scans. Though if you're getting lossy results, I don't think that'll help [1].
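For illustration, here's roughly what that looks like on 9.2+, reusing the hypothetical "ids" table from above (the primary key index already covers the query):

    -- The visibility map has to be current for index-only scans to kick
    -- in, hence the VACUUM. The plan should then show "Index Only Scan".
    VACUUM ids;
    EXPLAIN SELECT id FROM ids WHERE id BETWEEN 1000 AND 2000;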
> So am I correct in assuming that as the number of rows grows, query times for rows that are not in memory (and considering how many of them there are, most won't be) will grow linearly?
Maybe, maybe not. Query times for data that has to come from disk can vary wildly depending on what other activity is happening on the I/O system. Ultimately, your I/O system can only do so many I/Os per second.
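If you want to see how much of a query's runtime is actually going to disk, one approach (assuming 9.2+, where track_io_timing is available; the query below is again hypothetical):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM ids WHERE id = 123456;
    -- "Buffers: shared read=N" counts blocks that came from disk rather
    -- than cache; "I/O Timings" shows how long those reads took.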
[1] https://wiki.postgresql.org/wiki/Index-only_scans#Index-only_scans_and_index-access_methods
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com