On 7/19/16 9:56 AM, trafdev wrote:
Will extending page to say 128K improve performance?
Well, you can't go to more than 32K, but yes, it might.
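For context, the page (block) size is fixed when the server is compiled, so changing it means a source build; a hypothetical rebuild might look like this (32 is in kB and is the maximum):

```shell
# PostgreSQL's block size is a compile-time option; 32 (kB) is the largest
# supported value. This is a sketch of a from-source rebuild:
./configure --with-blocksize=32
make
make install
# Caveat: a data directory initialized under one block size can't be read
# by a server built with another, so you'd need to initdb and dump/reload.
```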
Even then, I think your biggest problem is that the data locality is too
low. You're only grabbing ~3 rows every time you read a buffer that
probably contains ~20 rows. So that's an area for improvement. The other
thing that would help a lot is to trim the table down so it's not as wide.
Actually, something else that could potentially help a lot is to store
arrays of many data points in each row, either by turning each column
into an array or storing an array of a composite type. [1] is exploring
those ideas right now.
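As a rough sketch of that idea (table and column names here are hypothetical, not from your schema), packing many points per row amortizes the ~24-byte per-tuple header across the whole array:

```sql
-- Option 1: parallel arrays, one element per data point
CREATE TABLE ticks_wide (
    symbol  text,
    day     date,
    ts      timestamptz[],
    price   numeric[],
    volume  int[]
);

-- Option 2: an array of a composite type, one whole point per element
CREATE TYPE tick AS (ts timestamptz, price numeric, volume int);
CREATE TABLE ticks_packed (
    symbol  text,
    day     date,
    points  tick[]
);
```

The trade-off is that updating or reading a single point means rewriting or detoasting the whole array, so this fits append-mostly, scan-heavy data best.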
You could also try cstore_fdw. It's not a magic bullet, but its storage
will be much more efficient than what you're doing right now.
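Basic setup follows the cstore_fdw README; the table definition below is illustrative only:

```sql
-- After installing the cstore_fdw extension on the server:
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

-- Hypothetical column layout for illustration
CREATE FOREIGN TABLE ticks_cstore (
    symbol  text,
    ts      timestamptz,
    price   numeric,
    volume  int
) SERVER cstore_server
  OPTIONS (compression 'pglz');
```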
[1] https://github.com/ElephantStack/ElephantStack
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)