On 05/03/2013 01:11, Mike McCann wrote:
> Hello,
> We are in the fortunate situation of having more money than
> time to help solve our PostgreSQL 9.1 performance problem.
> Our server hosts databases that are about 1 GB in size, with
> the largest tables having on the order of 10 million 20-byte
> indexed records. The data are loaded once and then read from
> a web app and other client programs. Some of the queries
> execute ORDER BY on the results. There are typically fewer
> than a dozen read-only concurrent connections to any one
> database.
I would first check the offending queries; 10 million rows isn't
that huge. Perhaps you could paste your queries and an EXPLAIN
ANALYZE of them? You could also log slow queries and use the
auto_explain module.
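For example, to see where the time goes in one of the slow
SELECTs (the table and column names below are made up;
substitute your own):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM measurement
    WHERE platform_id = 42
    ORDER BY measured_at
    LIMIT 1000;

And to have the server log slow statements and their plans
automatically, something along these lines in postgresql.conf
(auto_explain ships with 9.1; changing shared_preload_libraries
requires a restart):

    shared_preload_libraries = 'auto_explain'
    log_min_duration_statement = 1000      # log statements slower than 1s (in ms)
    auto_explain.log_min_duration = '1s'   # log plans of queries slower than 1s
    auto_explain.log_analyze = on          # include actual times (adds some overhead)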
> SELECTs for data are taking tens of seconds. We'd like to
> reduce this to web-app-acceptable response times (less than
> 1 second). If this is successful, the size of the database
> will grow by a factor of ten - and we will still want
> sub-second response times. We are in the process of going
> through the excellent suggestions in the "PostgreSQL 9.0
> High Performance" book to identify the bottleneck (we have
> reasonable suspicions that we are I/O bound), but would also
> like to place an order soon for the dedicated server which
> will host the production databases. Here are the specs of a
> server we are considering, with a budget of $13k US:
> Dual 4-core Intel Xeon E5-2609 2.4 GHz CPUs
> 2 x 146 GB 15K RPM SAS hard drives
> + the usual accessories (optical drive, rail kit, dual
> power supplies)
> Opinions?
> Thanks in advance for any suggestions you have.
> -Mike
> --
> Mike McCann
> Software Engineer
> Monterey Bay Aquarium Research Institute
> 7700 Sandholdt Road
> Moss Landing, CA 95039-9644
> Voice: 831.775.1769  Fax: 831.775.1736  http://www.mbari.org
> --
> No trees were killed in the creation of this message.
> However, many electrons were terribly inconvenienced.