Hi, I can see two issues that are giving you variable results:
1/ Number of clients > scale factor
Using -c 16 with -s 6 means you are largely benchmarking lock contention
on rows in the pgbench_branches table (it has only 6 rows in your case),
so randomness in *which* row each client tries to lock will produce
unwanted variation.
2/ Short run times
Your 1st run is only 5 s long, so it will be massively influenced by the
randomness in branch-row locking described above.
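To see the effect, a quick (hypothetical) experiment is to repeat the same
short run from your post a few times and watch the tps line move between
iterations:

```shell
# Repeat the 5-second run from the original post several times;
# with -s 6 the reported tps can swing widely from one run to the next.
for i in 1 2 3 4 5; do
    pgbench -c 16 -j 4 -T 5 -U postgres pgbench | grep '^tps'
done
```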
I'd recommend:
- always run at least -T600
- use -s of at least 1.5x your largest -c setting (I usually use -s 100
for testing 1-32 clients).
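Putting those two recommendations together, an invocation along these lines
should give repeatable numbers (the port and user are carried over from the
commands in your post; -s 100 and -T 600 are the illustrative values
suggested above, not hard requirements):

```shell
# Re-initialize with a scale factor well above the client count:
# -s 100 creates 100 pgbench_branches rows, so 16 clients rarely
# queue on the same branch row.
pgbench -i -s 100 -p 5432 -U postgres pgbench

# Run for at least 10 minutes so checkpoints, autovacuum and cache
# warm-up average out, then repeat and compare runs.
pgbench -c 16 -j 4 -T 600 -p 5432 -U postgres pgbench
```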
regards
Mark
On 17/12/18 12:58 AM, Mariel Cherkassky wrote:
As Greg suggested, updating you all: each VM has its own dedicated
ESX host, and every ESX host has its own local disks.
I ran it once on two different servers that have the same hardware
and the same PostgreSQL db (version and conf). The results:
pgbench -i -s 6 pgbench -p 5432 -U postgres
pgbench -c 16 -j 4 -T 5 -U postgres pgbench
MACHINE 1
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 6
query mode: simple
number of clients: 16
number of threads: 4
duration: 5 s
number of transactions actually processed: 669
latency average = 122.633 ms
tps = 130.470828 (including connections establishing)
tps = 130.620286 (excluding connections establishing)
MACHINE 2
pgbench -c 16 -j 4 -T 600 -U postgres -p 5433 pgbench
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 6
query mode: simple
number of clients: 16
number of threads: 4
duration: 600 s
number of transactions actually processed: 2393723
latency average = 4.011 ms
tps = 3989.437514 (including connections establishing)
tps = 3989.473036 (excluding connections establishing)
Any idea what can cause such a difference? Both machines have
20 cores and 65 GB of RAM.
On Thu, Dec 13, 2018 at 15:54 Mariel Cherkassky
<mariel.cherkassky@xxxxxxxxx <mailto:mariel.cherkassky@xxxxxxxxx>> wrote:
Ok, I'll do that. Thanks.
On Thu, Dec 13, 2018 at 15:54 Greg Clough
<Greg.Clough@xxxxxxxxxxxxx <mailto:Greg.Clough@xxxxxxxxxxxxx>> wrote:
Hmmm... sounds like you’ve got most of it covered. It may be
a good idea to send that last message back to the list, as
maybe others will have better ideas.
Greg.
*From:* Mariel Cherkassky <mariel.cherkassky@xxxxxxxxx
<mailto:mariel.cherkassky@xxxxxxxxx>>
*Sent:* Thursday, December 13, 2018 1:45 PM
*To:* Greg Clough <Greg.Clough@xxxxxxxxxxxxx
<mailto:Greg.Clough@xxxxxxxxxxxxx>>
*Subject:* Re: pgbench results arent accurate
Both of the machines are the only vms in a dedicated esx for
each one. Each esx has local disks.
On Thu, Dec 13, 2018, 3:05 PM Greg Clough
<Greg.Clough@xxxxxxxxxxxxx <mailto:Greg.Clough@xxxxxxxxxxxxx>
wrote:
> I installed a new postgres 9.6 on both of my machines.
Where is your storage? Is it local, or on a SAN? A SAN
will definitely have a cache, so possibly there is another
layer of cache that you’re not accounting for.
Greg Clough.
------------------------------------------------------------------------
This e-mail, including accompanying communications and
attachments, is strictly confidential and only for the
intended recipient. Any retention, use or disclosure not
expressly authorised by IHSMarkit is prohibited. This
email is subject to all waivers and other terms at the
following link:
https://ihsmarkit.com/Legal/EmailDisclaimer.html
Please visit www.ihsmarkit.com/about/contact-us.html
<http://www.ihsmarkit.com/about/contact-us.html> for
contact information on our offices worldwide.
------------------------------------------------------------------------