On Tue, Dec 9, 2008 at 10:35 AM, Matthew Wakeling <matthew@xxxxxxxxxxx> wrote:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>>
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few years where
>> I work.
>
> There's a problem with that thinking. That is, in order to exercise many
> spindles, you will need to have just as many (if not more) concurrent
> requests. And if you have many concurrent requests, then you can spread
> them over multiple CPUs. So it's more a case of "How many hard drives
> PER CPU". It also becomes a matter of whether Postgres can scale that
> well.

For us, all of that is true. We typically have a dozen or more concurrent
requests running at once, and we expect that to increase roughly linearly
with our user growth over the next year or so. We bought machines with dual
quad-core Opterons knowing that 6-, 8-, and 12-core Opterons were due out
on the same socket design in the next year or so, so we could upgrade those
too if needed.

PostgreSQL seems to scale well in most tests I've seen to at least 16
cores; after that it's anyone's guess. The SPARC Niagara seems capable of
scaling quite well to 32 threads on 8 cores with pgsql 8.2. I worry about
the Linux kernel scaling that well, and we might have to look at
OpenSolaris, or something like the Solaris kernel under an Ubuntu distro,
to get better scaling.
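
For anyone who wants to probe this on their own hardware, here's a rough
sketch using pgbench (the database name "bench", the scale factor, and the
client counts are just placeholder assumptions, not anything we've settled
on; -S runs the built-in select-only script, so once the data is cached the
test is CPU-bound rather than I/O-bound):

    # Create and populate a throwaway test database.
    createdb bench
    pgbench -i -s 100 bench    # scale 100 is roughly 1.5 GB of data

    # Run the select-only test at increasing client counts and watch
    # where transactions per second stops climbing.
    for c in 1 2 4 8 16 32; do
        echo "=== $c clients ==="
        pgbench -S -c $c -t 10000 bench
    done

Roughly speaking, the point where the TPS curve flattens out (or starts to
drop) as clients increase is where the kernel and postgres scaling start to
matter more than the spindles do.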