Re: Any better plan for this query?..

Hi Stefan,

sorry, I did not have time to put all the details into the toolkit docs -
but at least I published it rather than just telling a "nice story" about it :-)

The client process is a binary compiled with libpq. The client
interprets a scenario script and publishes, via SHM, the time spent on
each SQL request. I have not published the sources yet as that would also
require explaining how to compile them :-)) So for the moment it's shipped
as freeware, but in time everything will be available (BTW, you're
the first to ask for the sources (well, except the IBM guys who asked to
get it on POWER boxes, but that's another story :-))
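
Since the sources aren't out yet, here is a minimal sketch of that pattern
(not the actual db_STRESS code): time each PQexec() with gettimeofday() and
expose the result through a POSIX shared-memory segment that a monitor
process can read live. The segment name, struct layout and connection
string below are assumptions.

    #include <libpq-fe.h>
    #include <sys/mman.h>
    #include <sys/time.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    /* Per-client stats record; the real db_STRESS layout is unknown. */
    typedef struct {
        double last_query_ms;
        long   queries_done;
    } client_stats;

    /* Run one SQL statement and publish its elapsed time to shared memory. */
    static void run_timed(PGconn *conn, const char *sql, client_stats *stats)
    {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        PGresult *res = PQexec(conn, sql);
        gettimeofday(&t1, NULL);
        if (PQresultStatus(res) != PGRES_TUPLES_OK &&
            PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        stats->last_query_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
                             + (t1.tv_usec - t0.tv_usec) / 1000.0;
        stats->queries_done++;          /* a monitor process sees this live */
    }

    int main(void)
    {
        /* "/db_stress_stats" is an assumed segment name, not the real one. */
        int fd = shm_open("/db_stress_stats", O_CREAT | O_RDWR, 0644);
        ftruncate(fd, sizeof(client_stats));
        client_stats *stats = mmap(NULL, sizeof(client_stats),
                                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        PGconn *conn = PQconnectdb("dbname=stress");  /* connect once at start */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        run_timed(conn, "SELECT 1", stats);           /* scenario loop goes here */

        PQfinish(conn);                               /* disconnect at the end */
        return 0;
    }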

What is good is that each client publishes its internal stats *live*, so
we're able to get live data and follow any kind of "waves" in
performance. Each session is a single process, so there is no
contention between clients as you may see in some other tools. The
current scenario script contains 2 SELECTs (representing a Read
transaction) and a DELETE/INSERT/UPDATE (representing a Write
transaction). According to the start parameters, each client executes a
given number of Reads per Write. It connects at the beginning and
disconnects at the end of the test.
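
The per-client loop for that mix is presumably something like the sketch
below; read_txn()/write_txn() and the parameter names are illustrative
stand-ins rather than db_STRESS's actual code, and the SQL text is left
elided.

    #include <libpq-fe.h>

    static void exec_sql(PGconn *conn, const char *sql)
    {
        PQclear(PQexec(conn, sql));      /* timing / error handling omitted here */
    }

    static void read_txn(PGconn *conn)   /* Read transaction: 2 SELECTs */
    {
        exec_sql(conn, "SELECT ...");
        exec_sql(conn, "SELECT ...");
    }

    static void write_txn(PGconn *conn)  /* Write transaction: DELETE/INSERT/UPDATE */
    {
        exec_sql(conn, "DELETE ...");
        exec_sql(conn, "INSERT ...");
        exec_sql(conn, "UPDATE ...");
    }

    /* reads_per_write comes from a start parameter (name is illustrative). */
    void run_scenario(PGconn *conn, int reads_per_write, long iterations)
    {
        for (long i = 0; i < iterations; i++) {
            for (int r = 0; r < reads_per_write; r++)
                read_txn(conn);
            write_txn(conn);
        }
    }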

It's also possible to extend it to run other queries, or simply give
each client a different scenario script - what's important is to be able
to collect its stats live and understand what's going wrong (if
anything)..

I'm planning to extend it to give an easy way to run it against any
database schema; it's only a question of time..

Rgds,
-Dimitri

On 5/12/09, Stefan Kaltenbrunner <stefan@xxxxxxxxxxxxxxxx> wrote:
> Dimitri wrote:
>> Folks, before you start to think "what a dumb guy doing a dumb thing" :-))
>> I'll explain a few details:
>>
>> for more than 10 years I've been using the db_STRESS kit
>> (http://dimitrik.free.fr/db_STRESS.html) to check database
>> performance and scalability. Until now I was very happy with the results
>> it gave me, as it stresses each database engine's internals very well and
>> brings to light some things I would probably miss on other workloads.
>> What do you want, with time the "fast" query that used to execute in
>> 500ms now runs within 1-2ms - not only has hardware improved, but
>> database engines have increased their performance a lot! :-))
>
> I was attempting to look into that "benchmark" kit a bit, but I find the
> information on that page a bit lacking :( A few notes:
>
> * is the source code for the benchmark actually available? The "kit"
> seems to contain a few precompiled binaries and some source/header files,
> but there are no build instructions, no Makefile, or even a README,
> which makes it really hard to verify exactly what the benchmark is doing
> or whether the benchmark client might actually be the problem here.
>
> * there is very little information on how the toolkit talks to the
> database - some of the binaries seem to contain a static copy of libpq
> or something like that?
>
> * how many queries per session is the toolkit actually using - some
> earlier comments seem to imply you are doing a connect/disconnect cycle
> for every query; is that actually true?
>
>
> Stefan
>

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
