On 12.9.2011 00:44, Anthony Presley wrote:
> We've currently got PG 8.4.4 running on a whitebox hardware setup,
> with (2) 5410 Xeons and 16GB of RAM. It's also got (4) 7200RPM
> SATA drives, using the onboard IDE controller and ext3.
>
> A few weeks back, we purchased two refurb'd HP DL360 G5's, and
> were hoping to set them up with PG 9.0.2, running replicated. These
> machines have (2) 5410 Xeons, 36GB of RAM, (6) 10k SAS drives, and
> are using the HP SA P400i with 512MB of BBWC. PG is running on an
> ext4 (noatime) partition, and the drives are configured as RAID 1+0
> (it seems that with this controller, I cannot do JBOD). I've spent
> a few hours going back and forth benchmarking the new systems, and
> have set up the DWC and the accelerator cache using hpacucli. I've
> tried accelerator caches of 25/75, 50/50, and 75/25.

What is an 'accelerator cache'? Is that the cache on the controller?
Then give 100% to the write cache - the read cache does not need to be
protected by the battery, the page cache at the OS level can do the
same service.

Provide more details about the ext3/ext4 setup - there are various data
modes (writeback, ordered, journal) and various other settings
(barriers, stripe size, ...) that matter.

According to a benchmark I did a few days back, the performance
difference between ext3 and ext4 is rather small when comparing equally
configured file systems (i.e. data=journal vs. data=journal). With a
read-only workload (e.g. just SELECT statements), the config does not
matter (journal is just as fast as writeback).

See for example these comparisons:

  read-only workload:  http://bit.ly/q04Tpg
  read-write workload: http://bit.ly/qKgWgn

ext4 is usually a bit faster than an equally configured ext3, but the
difference should not be 100%.

> To start with, I've set the "relevant" parameters in postgresql.conf
> the same on the new config as the old:
>
>   max_connections = 150
>   shared_buffers = 6400MB (have tried as high as 20GB)
>   work_mem = 20MB (have tried as high as 100MB)
>   effective_io_concurrency = 6
>   fsync = on
>   synchronous_commit = off
>   wal_buffers = 16MB
>   checkpoint_segments = 30 (have tried 200 when I was loading the db)
>   random_page_cost = 2.5
>   effective_cache_size = 10240MB (have tried as high as 16GB)
>
> First thing I noticed is that it takes the same amount of time to
> load the db (about 40 minutes) on the new hardware as on the old
> hardware. I was really hoping that with the faster, additional drives
> and a hardware RAID controller, this would be faster. The database
> is only about 9GB with pg_dump (about 28GB with indexes).
>
> Using pgfouine I've identified about 10 "problematic" SELECT queries
> that take anywhere from .1 seconds to 30 seconds on the old hardware.
> Running these same queries on the new hardware is giving me results
> in the .2 to 66 second range, i.e. it's twice as slow.
>
> I've tried increasing shared_buffers and some other parameters
> (work_mem), but haven't yet seen the new hardware perform even at
> the same speed as the old hardware.

In that case one of the assumptions is wrong - for example the new RAID
is slow for some reason (bad stripe size, slow controller, ...).

Do the basic hw benchmarking first, e.g. use bonnie++ to benchmark the
disks. Only if that gives the expected results (i.e. the new hw
performs better) does it make sense to mess with the database.

Tomas
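
PS: A sketch of how I'd set the controller cache from the command line,
in case it helps. The slot number and the exact accepted ratios are
assumptions on my part - check "hpacucli help" and the "show detail"
output for your P400i:

  # show the current array accelerator (controller cache) settings
  hpacucli ctrl slot=0 show detail

  # give the whole BBWC to writes; if the controller refuses 0/100,
  # use the smallest read share it accepts (e.g. 10/90)
  hpacucli ctrl slot=0 modify cacheratio=0/100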
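
For the file system, this is roughly what I mean by "equally
configured" - an example mkfs/fstab setup, assuming a 256KB chunk size
on the 6-disk RAID 1+0 (3 data-bearing disks) and the usual cciss
device name; adjust both to what the controller actually reports:

  # stride = chunk size / 4KB block, stripe-width = stride * data disks
  mkfs.ext4 -E stride=64,stripe-width=192 /dev/cciss/c0d1p1

  # /etc/fstab - the data= mode matters far more than ext3 vs. ext4;
  # barrier=0 only with a working battery on the write cache
  /dev/cciss/c0d1p1  /var/lib/pgsql  ext4  noatime,data=ordered,barrier=0  0 0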
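
And for the basic hw benchmarking, something along these lines (the
paths and sizes are just examples - use roughly 2x RAM so the page
cache cannot hide the disks):

  # sequential read/write throughput on the new array
  bonnie++ -d /var/lib/pgsql/bench -s 72g -n 0 -f -u postgres

  # quick sequential write test, flushed to disk at the end
  dd if=/dev/zero of=/var/lib/pgsql/bench/test.img bs=1M count=32768 conv=fdatasync

  # drop the page cache, then read the file back
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/var/lib/pgsql/bench/test.img of=/dev/null bs=1M

Run the same commands on the old box and compare - if the new array
does not win clearly here, no postgresql.conf change will fix it.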