Hi Olivier,

First question I'd like to ask: will this benchmark and its results be accessible on the net when you have finished? I'm interested in your benchmark and your results.

> I'm running a benchmark with these 3 databases, and the first results
> are not very good for PostgreSQL.

I hope I can give you some hints to improve PostgreSQL's performance in your benchmark.

> PostgreSQL is 20% less performant than MySQL (InnoDB tables)

Is MySQL's tuning comparable to PostgreSQL's?

> My benchmark uses the same server for these 3 databases:
> Dell PowerEdge - Xeon 2.8 GHz - 2 GB RAM - 3 SCSI disks - Debian
> Sarge - Linux 2.6

OK. 3 disks is really not many for a database server, IMHO (more disks means better I/O, *if* you spread the database files across them).

> The transactions are a random mix of read (select) and
> write (insert, delete, update) requests on many tables of about 100 000 to
> 15 000 000 rows.

OK. But what's the size of your database? [You can see it in psql with:
select pg_size_pretty(pg_database_size('myDatabase'));]

> Transactions are executed from 500 connections.

Do you mean it is a progressive test (1, 10, 100, 400, 500...?) or 500 connections from the very beginning?

> For the tuning of PostgreSQL I used the official documentation and these
> web sites:
>
> http://www.revsys.com/writings/postgresql-performance.html
> http://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html

Those pages are great if you want to arrive at a good postgresql.conf.

> Some important points of my postgresql.conf file:
>
> max_connections = 510
> shared_buffers = 16384
> max_prepared_transactions = 510

Why? What's the point of putting 510 here?

> work_mem = 1024

I find that value really low, but you'll have to check whether you need more. That comes down to watching for temporary file creation under $PGDATA.

> maintenance_work_mem = 1024

This has to be increased dramatically. I really recommend you read this page too:
http://www.powerpostgresql.com/PerfList/

> fsync = off

That's pretty unsafe for a production database. I don't think it is a good idea to test PostgreSQL with fsync off, since this won't reflect the final configuration of a production server.

> wal_buffers = 32

A better value would be 64; some tests have already concluded that 64 is a good value for large databases. You'll *have to* move $PGDATA/pg_xlog/ too (see the end of this mail).

> commit_delay = 500
> checkpoint_segments = 10

Put something larger than that. I often use something like 64 for large databases.

> checkpoint_timeout = 300
> checkpoint_warning = 0
> effective_cache_size = 165000

Try 174762 (2/3 of the RAM installed). It won't be a big improvement, for sure, but let's use the recommended values.

> autovacuum = on

That's a critical point. Personally I don't use autovacuum, because I just don't want a vacuum to be started... when the server is loaded :) I prefer to control the vacuum process when possible (if it's not possible, autovacuum is the best choice!), for example with a nightly vacuum...

A question for you: after setting up your test database, did you run a VACUUM FULL ANALYZE on it?

> default_transaction_isolation = 'read_committed'

> What do you think of my tuning?

IMHO it is fairly good, since you have already set mostly sensible values.

Also try to set "max_fsm_pages" according to what PostgreSQL tells you in the logfile (see again http://www.powerpostgresql.com/PerfList/).

With a Xeon, you should lower "random_page_cost" to 3 too.
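To make the configuration side concrete, here is the kind of postgresql.conf fragment I have in mind. The numbers marked "example only" are just guesses of mine to start from, not measured values -- you will have to validate them against your workload:

    wal_buffers = 64                # as suggested above
    checkpoint_segments = 64        # as suggested above
    effective_cache_size = 174762   # 2/3 of your 2 GB of RAM, in 8 kB pages
    random_page_cost = 3            # for the Xeon
    fsync = on                      # to reflect a real production setup
    work_mem = 4096                 # example only; 1024 looks low, watch temp file creation
    maintenance_work_mem = 65536    # example only; has to be much larger than 1024

Note that some of these (wal_buffers, for instance) need a full server restart, not just a reload.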
You don't mention the file organisation ($PGDATA, the PG "cluster") of your server.

I mean, it is now well known that you *have to* move the pg_xlog/ directory to another (array of) disk(s)! Otherwise it's the same disk head that writes into the WAL _and_ into the data files... OTOH you are using "fsync = off", which no DBA would recommend... well, OK, it's for testing purposes.

Same remark: if you can create tablespaces to spread the database files across (arrays of) disks, even better. But with 3 disks it's somewhat limited: move pg_xlog before anything else.

Now, about the "client side": I recommend you install and use pgpool, see:
http://pgpool.projects.postgresql.org/
Because "pgpool caches the connection to PostgreSQL server to reduce the overhead to establish the connection to it". Always good :)

Hope those little hints will help you get the best from your PostgreSQL server.

Keep us in touch,

-- 
Jean-Paul Argudo
www.PostgreSQLFr.org
www.dalibo.com
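PS: in case it helps, the usual way of moving pg_xlog to another disk is via a symlink. Something like this (the target path is only an example, and the server *must* be stopped first):

    # with PostgreSQL stopped:
    mv $PGDATA/pg_xlog /disk2/pg_xlog
    ln -s /disk2/pg_xlog $PGDATA/pg_xlog
    # then start PostgreSQL again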