On 7/9/16 12:26 PM, trafdev wrote:
So does that mean Postgres is not capable of scanning/aggregating fewer than 10
million rows and delivering the result in less than 2 seconds?
That's going to depend entirely on your hardware, and how big the rows
are. At some point you're simply going to run out of memory bandwidth,
especially since your access pattern is very scattered.
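A rough sanity check (a sketch only, using a hypothetical table name
stats_daily) is to compare how much data a full scan has to move against the
memory bandwidth you actually have:

    -- Estimate on-disk size, average row width and row count
    -- (hypothetical table name "stats_daily"; adjust to your schema).
    SELECT pg_size_pretty(pg_relation_size('stats_daily'))  AS on_disk_size,
           (SELECT sum(avg_width) FROM pg_stats
             WHERE tablename = 'stats_daily')                AS avg_row_width_bytes,
           reltuples::bigint                                 AS estimated_rows
    FROM pg_class
    WHERE relname = 'stats_daily';

If that works out to several gigabytes, a 2-second target implies multiple
GB/s of effective throughput, which a scattered access pattern usually won't
sustain.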
On 07/06/16 09:46, trafdev wrote:
Well, our CPU/RAM configs are almost the same...
The difference is that you're fetching/grouping 8 times fewer rows than I am:
Huh? The explain output certainly doesn't show that.
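If the two queries really differed by 8x in rows fetched, the actual row
counts on the plan nodes would show it. Something along these lines
(hypothetical query and column names) makes the comparison concrete, and
BUFFERS shows how much data each plan actually touched:

    -- Compare rows scanned/grouped and buffers hit/read on both systems.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT some_key, count(*), sum(some_value)
    FROM stats_daily
    WHERE event_date >= '2016-06-01'
    GROUP BY some_key;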
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461