Good question. I'm in the process of completing more exhaustive tests with the various disk i/o schedulers.
Basic findings so far: it depends on the type of concurrency involved. With concurrent sequential reads on xfs, deadline has the best performance across a range of readahead values, compared to cfq or anticipatory. Mixing random and sequential reads puts cfq ahead at low readahead values and deadline ahead at large ones (I have not tried anticipatory here yet). How strongly you prefer streaming over random access will significantly affect which scheduler you want and at what readahead value: cfq does a better job of balancing the two consistently, while deadline swings strongly toward streaming as readahead grows and toward random when it is small. Deadline and cfq are similar with concurrent random reads. I have not gotten to any write tests or concurrent read/write tests yet.
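For anyone who wants to repeat these runs, here is a sketch of how the two knobs are typically adjusted on Linux. The device name and values are examples only, not recommendations derived from the numbers above -- substitute your own device and verify the sysfs paths on your kernel.

```shell
#!/bin/sh
# Example only: replace sda with your device.
DEV=sda

# Show the schedulers the kernel offers; the bracketed one is active,
# e.g. "noop anticipatory [deadline] cfq".
cat /sys/block/$DEV/queue/scheduler

# Switch the active scheduler (takes effect immediately, no reboot).
echo deadline > /sys/block/$DEV/queue/scheduler

# Read and set readahead, in 512-byte sectors (16384 sectors = 8 MB).
blockdev --getra /dev/$DEV
blockdev --setra 16384 /dev/$DEV
```

Note that `blockdev --setra` does not persist across reboots, so for production you would put it in an init script.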
I expect the anticipatory scheduler to perform worse under mixed loads -- asking a RAID array that can do 1000 IOPS to sit idle for 7 ms just in case another read in the same area arrives is a bad idea for aggregate concurrent throughput. It is a scheduler that assumes the underlying hardware is essentially a single spindle, which is why it works so well in a standard PC or laptop. But I could be wrong.
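If anyone wants to test that theory directly, the wait in question is tunable. To the best of my knowledge the 2.6-era anticipatory scheduler exposes it as antic_expire under the device's iosched directory; treat the path and the default below as assumptions to verify on your own kernel.

```shell
#!/bin/sh
# Example only: replace sda with your device.
DEV=sda

# The iosched tunables only appear while anticipatory is active.
echo anticipatory > /sys/block/$DEV/queue/scheduler

# antic_expire is the anticipation window in milliseconds -- the pause
# the array pays on each wait (a few ms by default).
cat /sys/block/$DEV/queue/iosched/antic_expire

# Setting it to 0 effectively disables anticipation, which makes for a
# clean A/B comparison against the default on a multi-spindle array.
echo 0 > /sys/block/$DEV/queue/iosched/antic_expire
```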
On Mon, Sep 15, 2008 at 9:18 AM, Matthew Wakeling <matthew@xxxxxxxxxxx> wrote:
On Thu, 11 Sep 2008, Scott Carey wrote:
Preliminary summary:

readahead | 8 conc read rate | 1 conc read rate
----------+------------------+-----------------
    49152 |              311 |              314
    16384 |              312 |              312
    12288 |              304 |              309
     8192 |              292 |
     4096 |              264 |
     2048 |              211 |
     1024 |              162 |              302
      512 |              108 |
      256 |               81 |              300
        8 |               38 |
What io scheduler are you using? The anticipatory scheduler is meant to prevent this slowdown with multiple concurrent reads.
Matthew
--
And the lexer will say "Oh look, there's a null string. Oooh, there's another. And another.", and will fall over spectacularly when it realises
there are actually rather a lot.
- Computer Science Lecturer (edited)
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance