Folks, first of all:

- I used a fixed reference value just to simplify the analysis and
  isolate the case as much as possible; of course, during my tests all
  values are random :-)
- the final goal of the test is to analyze scalability, so yes,
  concurrent sessions with random keys grow from 1 to 256 (I run it on
  a 32-core server, no think time, just stressing), and the result is
  still not better compared to InnoDB
- I'm analyzing this query running fully in memory to understand what's
  blocking once all the main bottlenecks are avoided (no more I/O, no
  network, etc.)
- the initial EXPLAIN ANALYZE output and table details were posted in
  the first message

Now, let's go further:

- the query "as is" took 1.50ms to execute
- after removing the ORDER BY it took 1.19ms
- SELECT count(*) instead of the columns, still without the ORDER BY,
  took 0.98ms
- EXECUTE of the same prepared "select count(*) ..." took 0.68ms

So, where is the time going?...

Rgds,
-Dimitri

On 5/6/09, Ries van Twisk <pg@xxxxxxxxxx> wrote:
>
> On May 6, 2009, at 7:53 AM, Richard Huxton wrote:
>
>> Dimitri wrote:
>>> I'll try to answer all mails at once :-))
>>> - the query runs fully in RAM: no I/O, no network, only CPU time
>>> - looping the same query 100 times gives 132ms total time (~1.32ms
>>> per query), while it's 44ms on InnoDB (~0.44ms per query)
>>
>> Well, assuming you're happy that PG is tuned reasonably for your
>> machine and that MySQL's query cache isn't returning the results
>> here, it looks like MySQL is faster for this particular query.
>>
>> The only obvious place there could be a big gain is with the hashing
>> algorithm. If you remove the ORDER BY and the query time doesn't
>> fall by much, then it's the hash phase.
>>
>> The other thing to try is to alter the query to be a SELECT count(*)
>> rather than returning rows - that will let you measure the time to
>> transfer the result rows.
>>
>> --
>>   Richard Huxton
>>   Archonet Ltd
>
>
> Do you expect to run this query 100 times per second in your
> application, or is this just a test to see how fast the query is, for
> optimisation?
>
> I always get a bit scared myself by such a test; the reason is that it
> usually doesn't reflect what really happens in a production
> environment.
>
> Try to make a little test case where you give the query random
> parameters, so different result sets are returned. This will give you
> a better idea of how fast the query really is and might give you
> better comparison results.
>
> Instead of count(*), I usually do EXPLAIN ANALYZE to see how fast
> PostgreSQL handles the query.
>
> Ries
>

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
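[Editorial sketch] The "strip the query down step by step" timing method discussed above (full query, then without ORDER BY, then count(*) only) can be illustrated with a small self-contained script. This is a hypothetical stand-in, not the thread's actual test: SQLite substitutes for PostgreSQL, and the table `t` with its columns `ref` and `payload` is invented, so only the methodology carries over, not the numbers.

```python
# Illustrates the timing methodology from the thread: time the full query,
# then the same query without ORDER BY (isolating the sort cost), then a
# count(*) variant (isolating the cost of returning rows).
# SQLite stands in for PostgreSQL; the table and data are made up.
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # in-memory, matching the in-RAM test
cur = conn.cursor()
cur.execute("CREATE TABLE t (ref INTEGER, payload TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [(i % 100, "x" * 50) for i in range(10000)])
conn.commit()

def time_query(sql, loops=100):
    """Run the query `loops` times and return the average ms per run."""
    start = time.perf_counter()
    for _ in range(loops):
        cur.execute(sql).fetchall()
    return (time.perf_counter() - start) * 1000 / loops

full = time_query("SELECT payload FROM t WHERE ref = 42 ORDER BY payload")
no_order = time_query("SELECT payload FROM t WHERE ref = 42")
count_only = time_query("SELECT count(*) FROM t WHERE ref = 42")

# The difference full - no_order approximates the sort cost, and
# no_order - count_only approximates the cost of materializing and
# fetching the result rows.
print(f"full query      : {full:.3f} ms")
print(f"without ORDER BY: {no_order:.3f} ms")
print(f"count(*) only   : {count_only:.3f} ms")
```

In the real thread, the same stepping-down would be done in psql with `\timing` enabled (and PREPARE/EXECUTE for the final step), but the logic of attributing time by subtracting successive measurements is identical.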