On Aug 17, 2011, at 9:44 AM, Tomas Vondra wrote:

> On 17 August 2011, 3:35, Ogden wrote:
>> Hope all is well. I have received tremendous help from this list before
>> and therefore wanted some more advice.
>>
>> I bought some new servers and instead of RAID 5 (which I think greatly
>> hindered our write performance), I configured 6 SCSI 15K drives with
>> RAID 10. This is dedicated to /var/lib/pgsql. The main OS has 2 SCSI 15K
>> drives on a different virtual disk, also RAID 10, 146GB in total. I was
>> thinking of putting Postgres' xlog directory on the OS virtual drive.
>> Does this even make sense to do?
>
> Yes, but it greatly depends on the amount of WAL and your workload. If you
> need to write a lot of WAL data (e.g. during bulk loading), this may
> significantly improve performance. It may also help when you have a
> write-heavy workload (a lot of clients updating records, background writer
> etc.), as that usually means a lot of seeking (while WAL is written
> sequentially).

The database is about 200GB, so using /usr/local/pgsql/pg_xlog on a virtual
disk with 100GB should not be a problem as far as disk space goes, should it?

>> The system memory is 64GB and the CPUs are dual Intel E5645 chips (they
>> are 6-core each).
>>
>> It is a dedicated PostgreSQL box and needs to support heavy reads and
>> moderately heavy writes.
>
> What is the size of the database? So those are the new servers? What's the
> difference compared to the old ones? What is the RAID controller, how much
> write cache is there?

I am sorry I overlooked specifying this. The database is about 200GB, and
yes, these are new servers that bring more power (RAM, CPU) than the last
one. The RAID controller is a PERC H700 with 512MB of write cache. The
servers are Dells.

>> Currently, I have this for the current system, which has 16GB RAM:
>>
>> max_connections = 350
>>
>> work_mem = 32MB
>> maintenance_work_mem = 512MB
>> wal_buffers = 640kB
>
> Are you really using 350 connections? Something like "#cpus + #drives" is
> usually recommended as a sane number, unless the connections are idle most
> of the time. And even in that case pooling is usually recommended.
>
> Anyway, if this worked fine for your workload, I don't think you need to
> change those settings. I'd probably bump up the wal_buffers to 16MB - it
> might help a bit, definitely won't hurt, and it's so little memory it's
> not worth the effort I guess.

So just increasing the wal_buffers is okay? I thought there would be more to
change, as the memory in the system is now four times as much. Perhaps
shared_buffers too (see below).

>> # This is what I was helped with before and made reporting queries blaze by
>> seq_page_cost = 1.0
>> random_page_cost = 3.0
>> cpu_tuple_cost = 0.5
>> effective_cache_size = 8192MB
>
> Are you sure the cpu_tuple_cost = 0.5 is correct? That seems a bit crazy
> to me, as it says reading a page sequentially is just twice as expensive
> as processing it. This value should be about 100x lower or something like
> that.

These settings are for the old server, keep in mind. It's a 16GB machine
(the new one is 64GB). So the value for cpu_tuple_cost should be 0.005? What
about the other ones? (I've put a P.S. at the end of this mail with the full
set of values I have in mind, plus the xlog move and pooling bits.)

> What are the checkpoint settings (segments, completion target)? What about
> shared buffers?
#checkpoint_segments = 3                # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min              # range 30s-1h
checkpoint_completion_target = 0.9      # checkpoint target duration, 0.0 - 1.0 - was 0.5
#checkpoint_warning = 30s               # 0 disables

And shared_buffers = 4096MB

Thank you very much

Ogden
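
P.S. To make sure I'm reading the advice correctly, this is roughly the
postgresql.conf I have in mind for the new 64GB box. The exact numbers are
only my first guesses - nothing here has been confirmed on the list, and the
shared_buffers / effective_cache_size values in particular are just rough
fractions of the new RAM:

max_connections = 350                 # unchanged for now; pooling discussed below
shared_buffers = 8GB                  # was 4096MB; guess at a larger value for 64GB RAM
work_mem = 32MB
maintenance_work_mem = 512MB
wal_buffers = 16MB                    # was 640kB, per the suggestion above
checkpoint_completion_target = 0.9    # unchanged
seq_page_cost = 1.0
random_page_cost = 3.0
cpu_tuple_cost = 0.005                # was 0.5; roughly 100x lower as suggested
effective_cache_size = 48GB           # was 8192MB; guess at what the OS will cache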
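
The pg_xlog move itself I was planning to do with a symlink while the server
is stopped, along these lines (assuming the data directory is
/var/lib/pgsql/data and the stock init scripts - I'd adjust for our actual
layout):

service postgresql stop
mv /var/lib/pgsql/data/pg_xlog /usr/local/pgsql/pg_xlog
ln -s /usr/local/pgsql/pg_xlog /var/lib/pgsql/data/pg_xlog
service postgresql start

Is the symlink approach still the recommended way to do this?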
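
And on the pooling point - if we do put a pooler in front of the 350
application connections, I was thinking of something like pgbouncer with a
minimal pgbouncer.ini along these lines (names, paths and pool sizes are
placeholders, not tuned values):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
; clients can still open their 350 connections...
max_client_conn = 350
; ...but only about 20 server connections actually hit Postgres at once
default_pool_size = 20

The application would then connect to port 6432 instead of 5432. Does that
sound like a sane starting point?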