On Fri, 20 Mar 2009, Will Rutherdale (rutherw) wrote:
> However, keeping the KISS principle in mind, you can create a benchmark that simply sets up a sample database and forks off a bunch of processes to do random updates for an hour, say. Dead simple.
There's a benchmark tool that does something like this that comes with PostgreSQL, named pgbench. A MySQL-oriented tool named sysbench can also do that, and it supports running against PostgreSQL as well--badly, though, which makes it hard to use for a fair comparison.
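If you haven't used it before, a minimal pgbench session looks something like this (the database name, scale factor, client count, and transaction count here are just arbitrary examples):

    createdb bench
    # initialize with scale factor 10: roughly 160MB of generated data
    pgbench -i -s 10 bench
    # 8 concurrent clients, 10000 transactions per client, default TPC-B-like mix
    pgbench -c 8 -t 10000 bench

The report it prints at the end includes the transactions per second figure discussed below.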
Simple benchmarks tend to measure only one thing though, and it's often not what you think you're measuring. For example, pgbench produces a transactions per second number. That's useful for comparing the relative performance of two PostgreSQL instances, and people assume it gives you an idea of transactional performance. In many cases, what the magnitude of the result actually measures is how well the generated data set fits in cache.
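A quick sanity check there (just a sketch, reusing the 'bench' database from the example above) is to compare the size of the generated database against shared_buffers and the RAM available for OS caching:

    psql -d bench -c "SELECT pg_size_pretty(pg_database_size('bench'));"
    psql -d bench -c "SHOW shared_buffers;"

If the database is much smaller than available memory, you're largely benchmarking RAM, not the disk subsystem.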
If you're doing something update heavy, a lot of the time what you actually measure is how fast your disk can seek, how fast it can process a commit done using fsync, or some combination of the two. If you're not careful to make sure you're using the same level of disk commit guarantee on both installations, it's really easy to get bad benchmark results here. The intro to that subject from the PostgreSQL perspective is at http://www.postgresql.org/docs/8.3/static/wal-reliability.html
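On the PostgreSQL side, the relevant postgresql.conf settings look like this (the values shown are the safe defaults; turning either of the first two off trades durability for speed, which is exactly the sort of difference that skews a comparison):

    # postgresql.conf
    fsync = on                   # force WAL writes to disk; off risks corruption after a crash
    synchronous_commit = on      # wait for WAL flush before reporting commit (new in 8.3)
    wal_sync_method = fdatasync  # how the flush is done; the default is platform-dependent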
On MySQL, the parameter that controls this behavior is described starting at http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
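The rough InnoDB equivalent, in my.cnf (the sync_binlog line only matters if you have binary logging turned on):

    # my.cnf
    [mysqld]
    innodb_flush_log_at_trx_commit = 1  # 1 = write and flush the log at every commit (durable)
                                        # 2 = write at commit, flush about once per second
                                        # 0 = write and flush about once per second (fastest)
    sync_binlog = 1                     # sync the binary log at every commit as well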
For something with lots of disk commits, it's critical that you have both systems configured identically here.
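As a rough guide to lining the two up (a simplification--the failure modes aren't exactly equivalent):

    Full durability:    fsync=on, synchronous_commit=on  <->  innodb_flush_log_at_trx_commit=1
    Relaxed commits:    synchronous_commit=off           <->  innodb_flush_log_at_trx_commit=2
    Minimal guarantee:  fsync=off                        <->  innodb_flush_log_at_trx_commit=0

Note that fsync=off is actually more dangerous than the InnoDB setting it's paired with there, because it risks database corruption after a crash rather than just losing the last few transactions.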
--
* Greg Smith  gsmith@xxxxxxxxxxxxx  http://www.gregsmith.com  Baltimore, MD