Re: Raid 10 chunksize

On 4/1/09 1:44 PM, "Stef Telford" <stef@xxxxxxxxx> wrote:

> Stef Telford wrote:
>> Stef Telford wrote:
>> FYI, I got my Intel X25-M in the mail, and I have been benching it
>> for the past hour or so. Here are some rough and ready figures.
>> Note that I don't get anywhere near the Vertex benchmark. I did
>> hotplug it and made the filesystem following the directions on
>> Theodore Ts'o's blog (
>> http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-
>> block-size/
>> ); the only thing is, ext3/4 seems fixated on a block size of 4k,
>> and I am wondering if this could be part of the 'problem'. Any
>> ideas/thoughts on tuning gratefully received.
>> 
>> Anyway, benchmarks (same system as previously, etc)
>> 
>> (ext4dev, 4k block size, pg_xlog on 2x7.2krpm raid-0, rest on SSD)
>> 
>> root@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 100
>> number of clients: 24
>> number of transactions per client: 12000
>> number of transactions actually processed: 288000/288000
>> tps = 1407.254118 (including connections establishing)
>> tps = 1407.645996 (excluding connections establishing)
>> 
>> (ext4dev, 4k block size, everything on SSD)
>> 
>> root@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 100
>> number of clients: 24
>> number of transactions per client: 12000
>> number of transactions actually processed: 288000/288000
>> tps = 2130.734705 (including connections establishing)
>> tps = 2131.545519 (excluding connections establishing)
>> 
>> (I wanted to see whether dropping random_page_cost to 2.0, with
>> seq_page_cost = 2.0, would make a difference; i.e., making the
>> planner aware that a random read costs the same as a sequential
>> one.)
>> 
>> root@debian:/var/lib/postgresql/8.3/main# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 100
>> number of clients: 24
>> number of transactions per client: 12000
>> number of transactions actually processed: 288000/288000
>> tps = 1982.481185 (including connections establishing)
>> tps = 1983.223281 (excluding connections establishing)
>> 
>> 
>> Regards,
>> Stef
> 
> Here is the single X25-M SSD, write cache -disabled-, on XFS,
> mounted noatime, using the no-op scheduler:
> 
> stef@debian:~$ sudo /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000
> test_db
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 24
> number of transactions per client: 12000
> number of transactions actually processed: 288000/288000
> tps = 1427.781843 (including connections establishing)
> tps = 1428.137858 (excluding connections establishing)
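
As for the 4k block size question upthread: the filesystem block size
itself isn't really the issue; what Ts'o's post addresses is aligning
the partition and the filesystem's allocation to the drive's erase
block. His recipe boils down to roughly the following (a sketch only;
/dev/sdb and the 128KB erase-block size are assumptions, check your
drive's spec):

    # 224 heads * 56 sectors/track * 512 bytes = 6,422,528 bytes per
    # cylinder, a multiple of 128KB, so partitions created on cylinder
    # boundaries stay aligned to the erase block.
    fdisk -H 224 -S 56 /dev/sdb

    # stride/stripe-width are counted in 4k filesystem blocks:
    # 128KB / 4KB = 32.
    mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/sdb1

    mount -o noatime /dev/sdb1 /var/lib/postgresql

The 4k block size is as large as ext3/4 will go on x86 (the block
size can't exceed the page size), so alignment plus the
stride/stripe-width hints is about all the filesystem-level tuning
available there.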


OK, in my experience the next step to better performance on this sort
of setup, in situations not involving pgbench, is to turn
dirty_background_ratio down to a very small number (1 or 2). However,
pgbench's quirky access pattern relies quite a bit on the OS
postponing writes, so depending on the ratio of scaling factor to
memory and on how big shared_buffers is, results may vary.
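
For anyone wanting to try it, the knob is just a sysctl; a sketch,
using the 1% floor mentioned above:

    # Start background writeback almost immediately instead of letting
    # dirty pages pile up; the kernel default is around 10%.
    sysctl -w vm.dirty_background_ratio=1

    # To persist across reboots, add to /etc/sysctl.conf:
    #   vm.dirty_background_ratio = 1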

So I won't predict that it will help in this particular case, but in
general I have gotten the best throughput and lowest latency from the
Intel SSDs with a low dirty_background_ratio and the noop scheduler.
I've tried all the other scheduler and queue tunables without much
result. Increasing max_sectors_kb helped a bit in some cases, but
inconsistently.
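
For reference, those tunables live in sysfs; a sketch, with /dev/sdb
standing in for the SSD:

    # Select the no-op elevator for this device only:
    echo noop > /sys/block/sdb/queue/scheduler

    # Check the hardware ceiling, then raise the maximum request size:
    cat /sys/block/sdb/queue/max_hw_sectors_kb
    echo 512 > /sys/block/sdb/queue/max_sectors_kb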

The Vertex does some things differently that might be very good for
Postgres (but bad for some other apps): from what I've seen, it
prioritizes writes more.

Furthermore, from what I've read, the Vertex has and uses a write
cache, while the Intel drives don't use one at all (their RAM holds
the LBA-to-physical-block map and management metadata). If the Vertex
is way faster, I would suspect that its write cache is not properly
honoring cache-flush commands.
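
One way to sanity-check that on either drive is to toggle the
drive-level cache and re-run the benchmark (a sketch; the device name
is an assumption):

    # Report whether the drive's write cache is currently enabled:
    hdparm -W /dev/sdb

    # Disable it, as in the XFS run quoted above; -W1 re-enables it.
    hdparm -W0 /dev/sdb

If the Vertex's numbers collapse with its cache off, or a plug-pull
test with something like Brad Fitzpatrick's diskchecker.pl shows lost
writes, that points at the cache-flush handling.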

I have an app where I want to keep read latency as low as possible
while a large batch write runs at ~90% disk utilization, and so far
the Intel drives destroy everything else at that task.

And in all honesty, for now I trust Intel's data integrity a lot more
than OCZ's.

> 
> Regards
> Stef


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

