Re: Reliability recommendations

Luke Lonergan wrote:

> 
> OK, how about some proof?
> 
> In a synthetic test that writes 32GB of sequential 8k pages on a machine
> with 16GB of RAM:
> ========================= Write test results ==============================
> time bash -c "dd if=/dev/zero of=/dbfast1/llonergan/bigfile bs=8k
> count=2000000 && sync" &
> time bash -c "dd if=/dev/zero of=/dbfast3/llonergan/bigfile bs=8k
> count=2000000 && sync" &
> 
> 2000000 records in
> 2000000 records out
> 2000000 records in
> 2000000 records out
> 
> real    1m0.046s
> user    0m0.270s
> sys     0m30.008s
> 
> real    1m0.047s
> user    0m0.287s
> sys     0m30.675s
> 
> So that's 32,000 MB written in 60.05 seconds, which is 533MB/s sustained
> with two threads.
> 

Well, since this is always fun (2G memory, 3Ware 7506, 4xPATA), writing:

$ dd if=/dev/zero of=/data0/dump/bigfile bs=8k count=500000
500000+0 records in
500000+0 records out
4096000000 bytes transferred in 32.619208 secs (125570185 bytes/sec)
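
(For a closer comparison with Luke's numbers - his runs time the dd plus the
final sync - the same test could be rerun as:

$ time bash -c "dd if=/dev/zero of=/data0/dump/bigfile bs=8k count=500000 && sync"

though with only 2G of RAM and a 4GB file, most of the writes have to hit
disk before dd exits anyway, so I wouldn't expect the figure to move much.)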

> Now to read the same files in parallel:
> ========================= Read test results ==============================
> sync
> time dd of=/dev/null if=/dbfast1/llonergan/bigfile bs=8k &
> time dd of=/dev/null if=/dbfast3/llonergan/bigfile bs=8k &
> 
> 2000000 records in
> 2000000 records out
> 
> real    0m39.849s
> user    0m0.282s
> sys     0m22.294s
> 2000000 records in
> 2000000 records out
> 
> real    0m40.410s
> user    0m0.251s
> sys     0m22.515s
> 
> And that's 32,000MB in 40.4 seconds, or 792MB/s sustained from disk (not
> memory).
> 

Reading:

$ dd of=/dev/null if=/data0/dump/bigfile bs=8k count=500000
500000+0 records in
500000+0 records out
4096000000 bytes transferred in 24.067298 secs (170189442 bytes/sec)

OK - I didn't quite get my quoted 175MB/s (FWIW, with bs=32k I get exactly
175MB/s).
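
Block size clearly matters here (per-call overhead, presumably), so here's a
quick sketch to sweep a few sizes against the same file in one go:

$ for bs in 8k 32k 128k 1M; do time dd of=/dev/null if=/data0/dump/bigfile bs=$bs; done

(Repeated reads could be partially cached, but a 4GB file on a 2G box can't
stay fully cached between passes, so back-to-back runs are roughly fair.)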

Hmmm - a bit humbled by Luke's machinery :-). However, mine is probably
competitive on (MB/s)/$....


It would be interesting to see what Dan's system would do on a purely
sequential workload - as 40-50MB/s of purely random I/O is high.
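
A Luke-style purely sequential pass would be something like the following
(the /data/bigfile path is just a placeholder - point it at wherever Dan's
data lives, and pick a count so the file is at least a couple of times RAM,
so the read pass isn't served from cache):

$ time bash -c "dd if=/dev/zero of=/data/bigfile bs=8k count=2000000 && sync"
$ time dd of=/dev/null if=/data/bigfile bs=8k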

Cheers

Mark

