to all: maybe I can be of some help here. I think schemas are definitely the way to go. I've used them quite a bit, previously with an accounting application which used schemas to separate companies. Right now I am developing a major project which will use them even more extensively, following an evaluation of other methods of isolating data for scalability reasons.

The mammoth table approach (company id in every table) is very clean and the best approach from an academic perspective, but unfortunately you can run into scalability problems if your tables are very large. An ideal SQL database would automagically do everything for you without you having to worry about it (no such database exists AFAIK).

The major downsides of using schemas in an application can be mitigated... for example, clever use of dynamic SQL functions and scripts can go a long way in helping with updates that have to run across multiple schemas (there is a rough sketch of what I mean at the end of this post). The benefits of schemas are pretty nice: you get the performance advantages of partitioning, and you can manipulate the search path for easy switching... just make sure to force a reconnect so all those stored procedures relearn the tables they use, if you keep your functions in a shared area (usually a good idea).

Table partitioning and inheritance look good on the surface, but there are a number of 'gotchas' that can hold you back from using them effectively. Foreign keys are a mess, and complex queries can be problematic... the new constraint exclusion feature helps, but there is a way to go IMO.

By the way, the \d query in psql maps to a fairly complex SQL statement. You can beat it by going right to the pg catalogs if performance is an issue, or materialize your catalogs to an indexed table. If you have a lot of tables (1000s) you will definitely want to prepare everything, meaning use views, procedures, and prepared statements liberally.

I would also take careful note of Jim's FSM comments and keep a careful eye on that.

Merlin
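
P.S. a few rough sketches for anyone who wants to try this out. First, the basic schema-per-company layout and search path switching; the schema and table names here are just made up for illustration:

    -- one schema per company, identical table layout in each
    CREATE SCHEMA company_a;
    CREATE SCHEMA company_b;

    CREATE TABLE company_a.invoice (invoice_id serial PRIMARY KEY, amount numeric);
    CREATE TABLE company_b.invoice (invoice_id serial PRIMARY KEY, amount numeric);

    -- switch the active company for this session; unqualified names now
    -- resolve to company_a first, then the shared area
    SET search_path = company_a, public;
    SELECT count(*) FROM invoice;   -- hits company_a.invoice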
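
This is roughly what I meant by dynamic SQL helping with updates that have to run across every schema. The function name and the 'company_' naming convention are just assumptions for the example; a real version would also save and restore the search path:

    -- hypothetical helper: run one statement once per company schema
    CREATE OR REPLACE FUNCTION run_everywhere(cmd text) RETURNS void AS $$
    DECLARE
        s text;
    BEGIN
        FOR s IN SELECT nspname FROM pg_namespace WHERE nspname ~ '^company_' LOOP
            -- point unqualified names at this company's schema, then run the command
            EXECUTE 'SET search_path = ' || quote_ident(s) || ', public';
            EXECUTE cmd;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

    -- example: add a column to invoice in every company schema
    SELECT run_everywhere('ALTER TABLE invoice ADD COLUMN memo text');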
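
On the partitioning/constraint exclusion side, this is the basic shape of it (table names invented for the example): child tables carry CHECK constraints, and with constraint_exclusion enabled the planner can skip children that can't match the WHERE clause.

    CREATE TABLE measurement (logdate date NOT NULL, value int);

    CREATE TABLE measurement_2006 (
        CHECK (logdate >= DATE '2006-01-01' AND logdate < DATE '2007-01-01')
    ) INHERITS (measurement);

    CREATE TABLE measurement_2007 (
        CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2008-01-01')
    ) INHERITS (measurement);

    SET constraint_exclusion = on;
    -- the plan should show measurement_2006 excluded entirely
    EXPLAIN SELECT * FROM measurement WHERE logdate >= DATE '2007-06-01';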
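
And for going right to the catalogs instead of \d, something along these lines covers the table-listing part (this is only a rough stand-in for what psql actually runs):

    -- list ordinary tables by schema, straight from the system catalogs
    SELECT n.nspname AS schema, c.relname AS table
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY 1, 2;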