Re: Benchmark: Dell/Perc 6, 8 disk RAID 10

On Fri, 14 Mar 2008, Justin wrote:

I played with shared_buffers and never saw much of an improvement from
100 all the way up to 800 megs; I moved the checkpoints from 3 to 30 and
still never saw any movement in the numbers.

Increasing shared_buffers normally improves performance as the size of the database goes up, but since the pgbench workload is so simple the operating system will cache it pretty well even if you don't give the memory directly to PostgreSQL. Also, on Windows large settings for shared_buffers don't work very well, so you might as well keep it in the 100MB range.
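If you want to revisit that later on a larger database, a minimal pair of postgresql.conf lines to experiment with might look like this (the values are only illustrative, not a recommendation for your hardware):

  shared_buffers = 100MB         # modest value; Windows gains little from going higher
  #shared_buffers = 800MB        # larger value to compare against on other platforms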

wal_sync_method=fsync

You might get a decent boost in results for tests that write data (not the SELECT-only ones) by changing

wal_sync_method = open_datasync

which is the default on Windows. The way you've got your RAID controller set up, this is no more or less safe than using fsync.
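If you try that, the change is just one line in postgresql.conf followed by a reload, and you can confirm which method is active from psql afterwards (a quick sanity check, nothing specific to your setup):

  wal_sync_method = open_datasync    # in postgresql.conf, then reload the server

  -- from psql:
  SHOW wal_sync_method;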

I agree with you, those numbers are terrible. I realized after posting that I had the -C option turned on; if I read it correctly, -C disconnects and reconnects between transactions. The way I read it, the -C option creates the worst case.

In addition to -C being an odd testing mode, there's an outstanding bug in how its results are computed; someone submitted a fix, but it hasn't been applied yet. I would suggest forgetting you ever ran that test.
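For reference, the only difference is whether -C appears on the command line; something like the following (the trailing database name is just an assumption about how you invoked it) runs the same workload with persistent connections instead of reconnecting for every transaction:

  pgbench -c 10 -t 10000 pgbench       # one connection per client, reused for all transactions
  pgbench -C -c 10 -t 10000 pgbench    # reconnects before every transaction; worst case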

number of clients: 10
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
tps = 1768.940935 (including connections establishing)

number of clients: 40
number of transactions per client: 10000
number of transactions actually processed: 400000/400000
tps = 567.149831 (including connections establishing)
tps = 568.648692 (excluding connections establishing)

Note how the total number of transactions goes up here, because it's actually doing clients x requested transactions in total. The 40 client case is actually doing 4X as many total operations. That also means you can expect 4X as many checkpoints during that run. It's in a longer run like this second one that you might see some impact from increasing checkpoint_segments.
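If you do experiment with that on the longer runs, it's the same single setting you already touched, e.g. (the value here is just an example):

  checkpoint_segments = 30    # default is 3; spaces checkpoints further apart on write-heavy runs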

To keep comparisons like this fairer, I like to keep the total transactions constant and just divide that number by the number of clients to figure out what to set the -t parameter to. 400000 is a good medium-length test, so for that case you'd get

-c 10 -t 40000
-c 40 -t 10000

as the two to compare.
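Spelled out as full pgbench invocations (the database name at the end is assumed; substitute whatever you initialized with pgbench -i):

  pgbench -c 10 -t 40000 pgbench
  pgbench -c 40 -t 10000 pgbench

Both run 400000 total transactions, so the tps numbers are directly comparable.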

--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
