Hi,

On 12 August 2011 14:57, Greg Smith <greg@xxxxxxxxxxxxxxx> wrote:
> ioDrive hardware is fast at executing all sorts of I/O, but it particularly
> excels compared to normal drives with really random workloads.

That's what I hope for :). It looks like the ioDrive is 3 to 5 times faster
for sequential I/O compared to our SAN.

> Your tables are pretty big; not much of them will fit in memory. If your
> aggregated queries end up executing a lot of sequential scans of the data
> set in order to compute, or for them to be utilized, you will probably
> discover this is barely faster on FusionIO.

About 99% of the disk activity is random I/O (reads). Sequential I/O comes
from the aggregates cron job reading the last X minutes of data. The majority
of the queries are primary-key scans with some extra WHERE-condition
filtering. The default random_page_cost and seq_page_cost weren't the way to
go because the planner chose a seq scan and query execution took far too long.

> Is there a component to your workload that does a lot of random read or
> write requests? If so, is that chunk of the data set bigger than RAM, but
> small enough to fit on the FusionIO drive? Only when all those conditions
> are true does that hardware really make sense.

Yes, almost all queries do random I/O. The final result set is only the top
1000 rows, but several weeks of data must be read and sorted before applying
the limit. I have two cards available (= 2.4TB), so I can keep the entire
dataset there.

> But at the same time, tests on database sizes that fit into RAM were slower
> on FusionIO than the regular disk array. When there's no random I/O to
> worry about, the slower read/write speed of the SSD meant it lost the small
> database tests.

Makes sense to me.

>> checkpoint_segments  | 48
>> maintenance_work_mem | 256MB
>> shared_buffers       | 9GB
>> wal_buffers          | 50MB
>> work_mem             | 256MB
>
> checkpoint_segments should be higher, at least 64 and probably 128 to 256.
> shared_buffers should be lower (at most 8GB, maybe even less).
> maintenance_work_mem should be 1 to 2GB on a server with 80GB of RAM.
> There's no proven benefit to increasing wal_buffers over 16MB.

Yes, shared_buffers is high; based on your input and other sources I would
like to try lower values. I really do not understand this part of the
"magic", i.e. checkpoint_segments and the WAL-related settings.

> This setting for work_mem can easily allow your server to allocate over
> 250GB of RAM for query working memory, if all 100 connections do something.
> Either reduce that a lot, or decrease max_connections, if you want this
> server to run safely.

Yes, I'm aware of this issue. There is nothing like pgpool between Apache
(PHP) and Postgres, and there are never more than 15 simultaneous
connections.

Thanks,
--
Ondrej Ivanic
(ondrej.ivanic@xxxxxxxxx)

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
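P.S. A back-of-the-envelope check of the "over 250GB" work_mem figure, as a
sketch only: the per-query node count below is an assumed illustration, since
work_mem can be allocated once per sort/hash operation inside a query, not
just once per connection.

```python
# Worst-case work_mem exposure, sketched in plain arithmetic.
# NOTE: nodes_per_query is an assumed figure for illustration -- work_mem is
# allocated per sort/hash operation within a query, not once per connection.

MB = 1024 * 1024
GB = 1024 * MB

work_mem = 256 * MB          # current setting
max_connections = 100
nodes_per_query = 10         # assumed: complex queries with ~10 sorts/hashes

worst_case = work_mem * max_connections * nodes_per_query
print(worst_case / GB)       # -> 250.0 (GB), matching the warning

# With ~15 real connections and a smaller work_mem the exposure shrinks:
safer = 64 * MB * 15 * nodes_per_query
print(round(safer / GB, 1))  # -> 9.4 (GB)
```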