
Re: more anti-postgresql FUD

alexei.vladishev@xxxxxxxxx wrote on 11.10.2006 16:54:
Do a simple test to see my point:

1. create table test (id int4, aaa int4, primary key (id));
2. insert into test values (0,1);
3. Execute "update test set aaa=1 where id=0;" in an endless loop

As others have pointed out, committing the data is a vital step when testing the performance of a relational/transactional database.

What's the point of updating an infinite number of records and never committing them? Or were you running in autocommit mode? Of course MySQL will be faster if you don't have transactions. Just as a plain text file will be faster than MySQL.

You are claiming that this test simulates the load that your application puts on the database server. Does this mean that you never commit data when running on MySQL?

This test also proves (in my opinion) that any multi-db application that targets the lowest common denominator simply won't perform equally well on all platforms. I'm pretty sure the same test would also show very bad performance on an Oracle server. It simply ignores the basic optimizations that one should do in a transactional system (like batching updates, committing transactions, etc.).
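The cost of committing after every single statement is easy to demonstrate even outside PostgreSQL. Here's a minimal sketch (my own illustration, not from the thread) using Python's built-in sqlite3 module against a file-backed database, contrasting one-commit-per-update with one commit for the whole batch:

```python
import os
import sqlite3
import tempfile
import time

# File-backed database so each commit actually does journal work.
path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, aaa INTEGER)")
conn.execute("INSERT INTO test VALUES (0, 1)")
conn.commit()

N = 1000

# Variant 1: commit after every update, like autocommit mode.
start = time.perf_counter()
for _ in range(N):
    conn.execute("UPDATE test SET aaa = 1 WHERE id = 0")
    conn.commit()
per_statement = time.perf_counter() - start

# Variant 2: run the same updates, commit once at the end.
start = time.perf_counter()
for _ in range(N):
    conn.execute("UPDATE test SET aaa = 1 WHERE id = 0")
conn.commit()
batched = time.perf_counter() - start

print(f"per-statement commits: {per_statement:.4f}s, batched: {batched:.4f}s")
```

On any engine that guarantees durability at commit time, the per-statement variant pays the price N times; a benchmark that never commits (or commits trivially) measures something no real transactional application does.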

Just my 0.02€
Thomas
