Re: Performance question 83 GB Table 150 million rows, distinct select

On Thu, Nov 17, 2011 at 11:17 AM, Aidan Van Dyk <aidan@xxxxxxxxxxx> wrote:
> But remember, you're doing all that in a single query.  So your disk
> subsystem might be able to deliver even more *throughput* if it
> were given many more concurrent requests.  A big raid10 is really good
> at handling multiple concurrent requests.  But it's pretty much
> impossible to saturate a big raid array with only a single read
> stream.

The query uses a bitmap heap scan, which means it would benefit from a
high effective_io_concurrency.

What's your effective_io_concurrency setting?
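
From psql, something along these lines will show it (SHOW and EXPLAIN are
standard commands; the table and column names are just placeholders, not
from your schema):

    SHOW effective_io_concurrency;   -- the default is 1
    EXPLAIN SELECT DISTINCT some_col FROM your_big_table;  -- should show a Bitmap Heap Scan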

A good place to start is the number of spindles in your array, though I
usually use 1.5x that number since it gives me a little more throughput.

You can also set it on a per-session basis, so you don't have to touch the
configuration file at all. If you do change postgresql.conf, a reload (no
restart) is enough for PG to pick it up, so it's an easy thing to try.
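
Roughly, for an (assumed) 8-spindle array it would look something like this;
the numbers are illustrative, not a recommendation for your hardware:

    -- per-session, just for this query:
    SET effective_io_concurrency = 12;   -- ~1.5 x 8 spindles (assumed array size)
    SELECT DISTINCT some_col FROM your_big_table;
    RESET effective_io_concurrency;

    -- or set "effective_io_concurrency = 12" in postgresql.conf, then reload:
    SELECT pg_reload_conf();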

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


