Re: Testing Sandforce SSD

On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga <yebhavinga@xxxxxxxxx> wrote:
Yeb Havinga wrote:
Due to the LBA remapping of the SSD, I'm not sure if putting sequentially written files in a different partition (instead of together with e.g. the tables) would make a difference: in the end the SSD will have a set of new blocks in its buffer and will somehow arrange them into sets of 128KB or 256KB writes for the flash chips. See also http://www.anandtech.com/show/2899/2

But I ran out of ideas to test, so I'm going to test it anyway.
Same machine config as mentioned before, with data and xlog on separate partitions, ext3 with barriers off (safe on this SSD).
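
For reference, the mounts look roughly like this in /etc/fstab (device names and mount points below are made up for illustration, not copied from the actual box):

  /dev/sdb1  /data  ext3  noatime,barrier=0  0 0
  /dev/sdb2  /xlog  ext3  noatime,barrier=0  0 0

with the cluster's pg_xlog directory symlinked onto the second mount.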

pgbench -c 10 -M prepared -T 3600 -l test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 300
query mode: prepared
number of clients: 10
duration: 3600 s
number of transactions actually processed: 10856359
tps = 3015.560252 (including connections establishing)
tps = 3015.575739 (excluding connections establishing)

This is about 25% faster than data and xlog combined on the same filesystem.
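
(If anyone wants latency numbers out of the per-transaction log files as well: assuming the pgbench log format of this era, where the third field is the transaction time in microseconds, a rough, untested one-liner like

  awk '{ sum += $3; n++ } END { printf "%.2f ms avg latency\n", sum / n / 1000 }' pgbench_log.*

should give the average.)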


The trick may be in kjournald, of which there is one for each journalled ext3 file system.  Back in the Red Hat 4 pre-U4 kernels I learned there was a problem with kjournald that would either cause 30-second hangs or lock up my server completely when pg_xlog and data were on the same file system, plus a few other "right" things going on.

Given the multicore world we have today, I think it makes sense that multiple ext3 file systems, and the kjournald threads that service them, are faster than a single combined file system.
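
An easy way to confirm you really get one journal thread per ext3 mount is something like

  ps ax | grep '\[kjournald\]'

which on a box with data and xlog on separate ext3 partitions should show at least two kjournald kernel threads.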


Greg

