Ben Carbery wrote:
> I have a postgres server I need to move onto a new OS (RHEL6) on a new VM, and am looking for advice on
> how to partition the disks to gain some performance improvement.
>
> In the current environment I am given a single VHDD which I have not partitioned at all. The SAN
> performance is pretty good, but we have noticed slowdowns at various times. The database does a lot
> of logging - constant small writes, with some probably insignificant reads of smaller tables. Delays
> in logging can affect the service which is doing the logging and cause problems upstream. Typically
> this does not happen, but there are hourly jobs which generate stats from the logs. Due to their
> complexity the query planner always chooses to do sequential scans on the main log table. This table
> is truncated monthly when the data is archived to another table, but peaks in size at around 10GB at
> the end of the month. Generally, any time the stats jobs are running there are delays which I would
> like to reduce or eliminate. There is also a fair bit of iowait on the CPU.
>
> The new server has a great deal more memory, which I am hoping will help (shared_buffers = 8GB, total
> RAM 20GB), but I am looking at what might be optimal for the storage configuration. From looking at
> previous conversations here I am thinking of something like this:
>
> 100GB OS (ext3)
> 50GB pg_xlog (ext2)
> 400GB pg_data (ext3, data=writeback, noatime?)
>
> Hopefully this would mean the small writes can continue while a large read is going on. Currently there
> is no streaming replication, so only a gig or so is actually needed for xlogs. We do however use slony
> to sync some smaller tables to a secondary, which may or may not affect anything.
>
> This is the first time I have needed to delve into the storage configuration of a database,
> so any advice or comments are welcome.

Since you are on RHEL 6, I would use ext4 throughout.

You say you have I/O problems when the "stats jobs" run.
Can you describe those jobs and what they are doing? If you have a lot of sequential scans on a 10GB table, that will suck majorly no matter how you tune it.

Two more things that you can try:
- Change the I/O scheduler to "deadline" by booting with "elevator=deadline".
- Split the 400GB LUN into several smaller LUNs and use tablespaces.

I don't say that this is guaranteed to help, but I have had good experiences with it.

Yours,
Laurenz Albe

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
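[Editor's note: the suggestions in this thread (separate filesystems for WAL and data, the "deadline" scheduler, and tablespaces on separate LUNs) could be sketched as the configuration fragment below. The device names, mount points, and tablespace/table names are illustrative assumptions, not from the thread; this is a sketch, not a tested setup.]

```shell
# --- /etc/fstab sketch: separate ext4 filesystems for WAL and data ---
# (device names /dev/sdb1 and /dev/sdc1 are assumptions)
#
#   /dev/sdb1  /pg_xlog  ext4  noatime  0 0
#   /dev/sdc1  /pg_data  ext4  noatime  0 0

# --- I/O scheduler ---
# Persistently, append elevator=deadline to the kernel line in
# /boot/grub/grub.conf (legacy GRUB on RHEL 6):
#
#   kernel /vmlinuz-2.6.32-... ro root=... elevator=deadline
#
# Or switch one device at runtime (as root), e.g.:
#
#   echo deadline > /sys/block/sdb/queue/scheduler

# --- Tablespace on a separate LUN (hypothetical names) ---
# The directory must exist and be owned by the postgres OS user:
#
#   psql -c "CREATE TABLESPACE log_space LOCATION '/pg_data2/log_space'"
#   psql -c "ALTER TABLE log_table SET TABLESPACE log_space"
```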