Re: Testing Sandforce SSD

Scott Marlowe wrote:
> On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith <greg@xxxxxxxxxxxxxxx> wrote:
>> Josh Berkus wrote:
>>> That doesn't make much sense unless there's some special advantage to a
>>> 4K blocksize with the hardware itself.
>> Given that pgbench is always doing tiny updates to blocks, I wouldn't be
>> surprised if switching to smaller blocks helps it in a lot of situations if
>> one went looking for them.  Also, as you point out, pgbench runtime varies
>> around wildly enough that 10% would need more investigation to really prove
>> that means something.  But I think Yeb has done plenty of investigation into
>> the most interesting part here, the durability claims.
Please note that the 10% was on a slower CPU; on a more recent CPU the difference was 47%, based on tests that ran for an hour. That's why I absolutely agree with Merlin Moncure that more testing in this department is welcome, preferably by others, since after all I could be on the payroll of OCZ :-)

I looked a bit into Bonnie++ but failed to see how I could do a test that matches the PostgreSQL setup during the pgbench tests (a database that fits in memory, so the test really measures how fast the SSD can capture sequential WAL writes and fsyncs without barriers, mixed with an occasional checkpoint doing random write IO on another partition). Since the WAL writing is the same for both block_size setups, I decided to compare random writes to a 5GB file with Oracle's Orion tool (4K and 8K summaries below):
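
(As an aside, for anyone without access to Orion: a roughly comparable random-write test could be set up with fio instead. I have not run this; the command below is only a sketch of the same workload shape, with the queue depth borrowed from the 4K peak at 8 outstanding IOs, so treat the parameters as illustrative rather than as what produced the numbers below.)

# Illustrative sketch only, not what generated the Orion results below.
fio --name=randwrite-4k --filename=/mnt/data/5gb --size=5g \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=8 --runtime=60 --time_based --group_reporting
# Repeat with --bs=8k for the 8K comparison.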

=== 4K test summary ===
ORION VERSION 11.1.0.7.0

Commandline:
-testname test -run oltp -size_small 4 -size_large 1024 -write 100

This maps to this test:
Test: test
Small IO size: 4 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 100%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
Large Columns:,      0
Total Data Points: 21

Name: /mnt/data/5gb     Size: 5242880000
1 FILEs found.

Maximum Small IOPS=86883 @ Small=8 and Large=0
Minimum Small Latency=0.01 @ Small=1 and Large=0

=== 8K test summary ===

ORION VERSION 11.1.0.7.0

Commandline:
-testname test -run oltp -size_small 8 -size_large 1024 -write 100

This maps to this test:
Test: test
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: CONCAT
Write: 100%
Cache Size: Not Entered
Duration for each Data Point: 60 seconds
Small Columns:, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
Large Columns:,      0
Total Data Points: 21

Name: /mnt/data/5gb     Size: 5242880000
1 FILEs found.

Maximum Small IOPS=48798 @ Small=11 and Large=0
Minimum Small Latency=0.02 @ Small=1 and Large=0
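
So the 4K run peaks at roughly 78% more IOPS than the 8K run, although the implied raw write bandwidth (IOPS times block size) is somewhat higher at 8K. Rough arithmetic:

# Implied write bandwidth at the two peaks (approximate, 1 MB = 1024 KB here):
echo "4K: $((86883 * 4 / 1024)) MB/s   8K: $((48798 * 8 / 1024)) MB/s"
# prints: 4K: 339 MB/s   8K: 381 MB/s
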
> Running the tests for longer helps a lot on reducing the noisy
> results.  Also letting them run longer means that the background
> writer and autovacuum start getting involved, so the test becomes
> somewhat more realistic.
Yes, that's why I did a lot of the TPC-B tests with -T 3600, so they'd run for an hour (the same goes for the 4K vs 8K blocksize comparison in Postgres).
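
For anyone who wants to repeat the setup: the Postgres block size is a compile-time option, and an hour-long TPC-B-like run looks roughly like the sketch below. The scale factor, client count, and database name are placeholders, not the exact values from my runs.

# Build a 4 KB-block PostgreSQL (8 KB is the default); assumes a source tree
# recent enough to have the --with-blocksize configure switch.
./configure --with-blocksize=4 && make && make install

# Hour-long TPC-B-like run; -s, -c, and the database name are placeholders.
pgbench -i -s 100 bench
pgbench -c 16 -T 3600 bench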

regards,
Yeb Havinga


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

