Re: More grist for the PostgreSQL vs MySQL mill

On Mon, 2007-01-22 at 10:30 +0800, Shashank Tripathi wrote:
> The problem is when the number of rows exceeds 30 million, MySQL
> performance degrades substantially. For most people, this is not an
> issue. PG is solid with huge databases, but in my experience, even the
> most optimized subselect on PG will not return a value in 0.008
> seconds  on 10 million rows -- I'd appreciate other experiences.
> 

A quick test of an IN query against two 10M-row tables returns a result in
less than 1/10th of the 8ms you're asking for.

Regards,
	Jeff Davis

Output from psql (with some editing):

db02=> CREATE TABLE a(x int unique, y int);
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "a_x_key" for
table "a"
CREATE TABLE
Time: 26.679 ms
db02=> CREATE TABLE b(z int unique, x int);
NOTICE:  CREATE TABLE / UNIQUE will create implicit index "b_z_key" for
table "b"
CREATE TABLE
Time: 6.419 ms
db02=> insert into a select generate_series, generate_series+1 from
generate_series(1,10000000);
INSERT 0 10000000
Time: 74483.276 ms
db02=> insert into b select generate_series, generate_series+1 from
generate_series(1,10000000);
INSERT 0 10000000
Time: 75848.143 ms
db02=> analyze a;
ANALYZE
Time: 119.965 ms
db02=> analyze b;
ANALYZE
Time: 902.873 ms
db02=> select y from a where x in (select x from b where z in
(1000000,5000000));
    y
---------
 1000002
 5000002
(2 rows)

Time: 0.384 ms
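
For reference, the sub-millisecond result depends on the planner using the
implicit unique indexes (a_x_key and b_z_key) rather than scanning either
table. A quick way to confirm that on your own setup is below; the exact
plan output varies by version and settings, so it's not shown here:

-- Show the actual plan and timings. With the unique indexes in place,
-- this should use index scans on b_z_key and a_x_key rather than
-- sequential scans over the 10M-row tables.
EXPLAIN ANALYZE
SELECT y FROM a WHERE x IN (SELECT x FROM b WHERE z IN (1000000, 5000000));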
