Re: driving postgres to achieve benchmark results similar to bonnie++

Yes, pgbench is running locally. The CPU typically sits around 60-70% idle, and the bulk of the remaining 30-40% is I/O wait. This is with 4 clients running 10k transactions each.

I've run some permutations of pgbench that pushed I/O wait higher, to something like 80%, with 4 threads running 16 clients and running for a fixed number of seconds instead of a fixed number of transactions, but with no better MB/sec than previously described.
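
For concreteness, the two run shapes were roughly like this (database name and run duration are illustrative, not the exact values used):

pgbench -c 4 -t 10000 pgbench        # 4 clients, 10k transactions per client
pgbench -j 4 -c 16 -T 300 pgbench    # 4 threads, 16 clients, fixed duration instead of transaction count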

On Tue, May 10, 2016 at 2:38 PM, Wei Shan <weishan.ang@xxxxxxxxx> wrote:
Are you running pgbench on the same server as the Postgres database? Can you check whether the CPU is saturated?

On 11 May 2016 at 03:02, Mike Broers <mbroers@xxxxxxxxx> wrote:
We have a 10Gb storage network.

bonnie++ with the fsync option shows ~700MB/sec writes and ~300MB/sec reads, writing a file twice the size of RAM.

I am unable to push Postgres past 115MB/sec writes or reads on those same mounts, which I've verified are performing well with bonnie++. This is probably because either my tests aren't intense enough or I have misconfigured Postgres. If someone has an idea for how to set up pgbench to really push the I/O and prove out the SSD potential, or a different script/approach, that would be awesome.
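
One shape I've seen suggested for forcing reads past the cache is to initialize well beyond RAM and run a select-only test (the scale, client counts, and database name below are illustrative):

pgbench -i -s 2000 pgbench            # ~30GB at roughly 15MB per scale unit, well past 12GB RAM
pgbench -S -c 32 -j 8 -T 600 pgbench  # select-only workload; random reads once the data exceeds cache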



On Tue, May 10, 2016 at 1:44 PM, Scott Whitney <scott@xxxxxxxxxxx> wrote:
What is your connection to your SAN?

-------- Original message --------
From: Mike Broers <mbroers@xxxxxxxxx>
Date: 05/10/2016 1:29 PM (GMT-06:00)
To: John Scalia <jayknowsunix@xxxxxxxxx>
Cc: pgsql-admin@xxxxxxxxxxxxxx
Subject: Re: driving postgres to achieve benchmark results similar to bonnie++

I've made those changes and seen negligible improvements (reads went from 99MB/sec to 112MB/sec), so they didn't hurt, but they didn't exactly unleash huge gains.

Any ideas for how I can really push the Postgres server to see numbers closer to the bonnie++ results, or an explanation of the ceiling?

On Tue, May 10, 2016 at 12:08 PM, Mike Broers <mbroers@xxxxxxxxx> wrote:
Thanks for the feedback, I'll update those configs, run some more tests, and follow up.



On Tue, May 10, 2016 at 12:01 PM, John Scalia <jayknowsunix@xxxxxxxxx> wrote:

On May 10, 2016, at 10:48 AM, Mike Broers <mbroers@xxxxxxxxx> wrote:

I'm having trouble getting Postgres to drive enough disk activity to get even close to the disk benchmark numbers I'm getting with bonnie++. We have an SSD SAN, the xlog is on its own SSD volume, and we're running Postgres 9.5 on CentOS 6.

bonnie++ -n 0 -f -b is the command I'm running, pointed at either the primary data or the xlog location. I'm consistently seeing numbers like this:

Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
              Size  K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
            23808M            786274  92 157465  29          316097  17  5751  16

So during bonnie++ tests I've confirmed in our monitoring that writes peak at 700-800 MB/sec and reads peak around 280-300 MB/sec.

We have 12GB of RAM on the server. When I run pgbench at a scale that puts the pgbench database in the realm of 18GB-25GB, I barely break 110MB/sec writes and 80MB/sec reads. I've run with different options such as unlogged vs. logged tables, prepared transactions or not, and transaction counts between 1000 and 40000.
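
Roughly speaking, the variations were along these lines (scales, client counts, and database name are illustrative; at ~15MB per scale unit, -s 1500 lands near 22GB):

pgbench -i -s 1500 --unlogged-tables pgbench    # unlogged-table variant of the dataset
pgbench -c 8 -j 4 -M prepared -t 40000 pgbench  # prepared statements, 40k transactions per client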

I thought a parallel pg_dump/restore might also drive the disks, but that doesn't push disk throughput either, topping out around 75MB/sec reads. Nightly vacuums also seem to peak below 110MB/sec reads.
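
The dump/restore was along these lines (paths, job counts, and database name are illustrative; the directory format is what allows parallel jobs):

pg_dump -Fd -j 4 -f /backup/mydb.dump mydb    # parallel dump, 4 jobs, directory format
pg_restore -j 4 -d mydb /backup/mydb.dump     # parallel restore from the same directory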

Here are the non-default Postgres settings:

max_connections = 1024
shared_buffers = 1024MB
wal_buffers = 16MB
checkpoint_completion_target = '.9'
archive_mode = on
random_page_cost = '1.5'
maintenance_work_mem = 512MB
work_mem = 64MB
max_wal_senders = 5
checkpoint_timeout = 10min
effective_io_concurrency = 4
effective_cache_size = 8GB
wal_keep_segments = 512
wal_level = hot_standby
synchronous_commit = off

Any idea if/why Postgres might be bottlenecking disk throughput? Or is there a testing method I'm missing that would get something closer to the bonnie++ levels from within Postgres? I'm guessing I'm just not driving enough activity to push it to the limit, but I'm not sure of a straightforward way to verify that.

Thanks,

Mike

Well, I'm no expert with bonnie++, but several of your PostgreSQL settings look incorrect for the system config you provided. With 12GB of RAM, shared_buffers should probably be at least 3072MB, if you follow the general suggestion of setting it to 1/4 of available RAM. Also, your max_connections setting looks awfully high. Do you really need 1024 connections? If so, I suggest you look into connection pooling software such as pgpool-II.

With an SSD, I also doubt your random page cost is really that high. With that setting, you may be forcing a lot of queries to use sequential access instead of random access.

Finally, consider increasing the two work-mem settings (work_mem and maintenance_work_mem), though I can't recommend specific values without knowing what kind of queries you're running.
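
For example, something along these lines (illustrative values, not tuned for your workload):

shared_buffers = 3072MB      # ~1/4 of 12GB RAM
max_connections = 100        # assumes a pooler such as pgpool-II in front
random_page_cost = 1.1       # illustrative; typical guidance for SSD storage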
--
Jay

--
Regards,
Ang Wei Shan