On Fri, Jan 30, 2009 at 08:37:02PM +0100, durumdara wrote:
> This website will support up to 200 corporations.
> The functions are similar, but because of the many differences between
> these corps (data to keep, ways of working with it), we want to make
> many databases (one DB per corp).

Databases or schemas? They are different things in PG and they have very
different behaviors which suit different use cases.

> The site uses one main database that handles all public and shared
> services, and stores the links to the sub-databases.

Sounds like you want to use schemas (there's a rough sketch of the
schema-per-corporation approach at the end of this mail).

> The main viewpoints:
> - quick (re)connect - because mod_python basically does not store
>   database connections persistently

?? mod_python can be used to do persistent connections as well as
connection pooling (see the pooling sketch at the end of this mail).

> - fast queries

They're as fast as you write them! PG generally does better under higher
concurrent loads than MySQL (I've never used DBISAM or FireBird).

> - the db server can handle many databases (webdb_nnn where nnn is an
>   integer)

There's a summary of limits on the about page:
http://www.postgresql.org/about/

> - I can add/modify a table, or a field of a table, without a "full lock"
>   on the table (like a DBISAM restructure). Like in FireBird, where "add
>   field" changes only the table description. I don't know whether PG
>   supports this way of modifying the DB.

Yes; adding a column (without a default) only updates the table's catalog
entry rather than rewriting the existing rows. Have a read about MVCC as
well: http://www.postgresql.org/docs/current/static/mvcc.html

> - Quick and easy backup/restore system

pg_dump works well; how quick it is obviously depends on the amount of
data you have.

> Another important thing I don't understand is that (from what I saw)
> PostgreSQL stores the databases in one inseparable file set, in a
> directory named data.
> In other databases, like DBISAM, FireBird and MySQL, the databases are
> separated into different directories/files.

PG's scheme works well; one file per table doesn't work very well in
practice as a recovery mechanism. There will always be links between
tables (foreign keys being the obvious one), and hence selectively
restoring individual tables on a file-by-file basis seems like an
*amazing* way of killing your database.

> This "one datadir" seems not too good for us. We used DBISAM at our
> clients, and many times when we got a filesystem error we could simply
> recover the tables from the files.

Use a proper backup system that takes consistent snapshots of your data;
anything else will come back and bite you when you really don't want it
to.

> I am very afraid of keeping all the databases in one place, because if
> they get corrupted, possibly more of them are damaged (if they are not
> separated).
> I cannot make a filesystem-based (hard) copy of one DB (only an SQL dump
> is possible).

There are much better tools available for this than treating tables as
files! As far as backups go, there are several options of varying
complexity and resilience: pg_dump is simple, you can do filesystem-level
backups with a bit of fiddling, or you can keep another database fed with
data from the main one with varying levels of lag:

http://www.postgresql.org/docs/current/static/backup.html

--
  Sam  http://samason.me.uk/
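A minimal sketch of the schema-per-corporation idea, from Python (the
layer mod_python code would typically go through); psycopg2 is assumed,
and the names webdb, corp_042 and invoice are made up for illustration:

  import psycopg2

  # Illustration only: the connection parameters and the schema/table
  # names are invented.
  conn = psycopg2.connect("dbname=webdb user=web host=localhost")
  cur = conn.cursor()

  # One namespace per corporation inside the same database; shared
  # objects stay in the "public" schema.
  cur.execute("CREATE SCHEMA corp_042")
  cur.execute("""
      CREATE TABLE corp_042.invoice (
          id     serial PRIMARY KEY,
          amount numeric NOT NULL
      )""")

  # Unqualified table names now resolve to this corporation's schema
  # first, falling back to the shared public schema.
  cur.execute("SET search_path TO corp_042, public")
  cur.execute("SELECT count(*) FROM invoice")
  print(cur.fetchone()[0])

  conn.commit()
  cur.close()
  conn.close()

Per-corporation differences can then live as extra tables or columns in
that corporation's schema while the shared services stay in public.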
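And a sketch of connection pooling with psycopg2's built-in pool; under
mod_python a module-level object like this lives as long as the Apache
child process, so requests reuse connections instead of reconnecting
every time (connection parameters are again made up):

  from psycopg2.pool import ThreadedConnectionPool

  # Created once per process; requests borrow from it rather than
  # opening a new connection each time.
  pool = ThreadedConnectionPool(1, 10, "dbname=webdb user=web host=localhost")

  def handle_request():
      conn = pool.getconn()        # borrow a connection for this request
      try:
          cur = conn.cursor()
          cur.execute("SELECT 42")
          return cur.fetchone()[0]
      finally:
          conn.rollback()          # discard any open transaction state
          pool.putconn(conn)       # hand the connection back, still open

  print(handle_request())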