Scott Marlowe wrote:
> ... involves tiny bits of data scattered throughout the database. Our
> current database is about 20-25 GB, which means it's quickly reaching
> the point where it will no longer fit in our 32 GB of RAM, and it's
> likely to grow too big for 64 GB within a year or two.
> ...
> I wonder how many hard drives it would take to be CPU bound on random
> access patterns? About 40 to 60? And probably 15k SAS drives to ...
Well, it's not a very big database and you're seek bound, so what's
wrong with the latest generation of flash drives? They seem perfect for
what you want to do, and you can probably get what you need using the
new L2ARC (flash-backed ARC cache) support in ZFS.
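
For reference, adding an SSD as a second-level ARC (L2ARC) device is a
one-liner with zpool; a minimal sketch, with placeholder pool and
device names:

  # add an SSD as an L2ARC cache device to an existing pool
  zpool add tank cache c2t0d0

  # confirm the cache vdev shows up
  zpool status tank

One caveat: L2ARC warms up gradually, since blocks are only copied to
the SSD as they're about to be evicted from the in-memory ARC, so
random reads stay seek bound until the working set has cycled through
the cache at least once.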