I don't really know what you are trying to accomplish here, but dropping and creating thousands of tables is never a good idea with any database system. You can certainly do it, just don't expect your queries to run at their best performance, and you would at least need to run a VACUUM before starting to query those tables. Can't you just leave the tables alone and populate them with records? Having to drop and create tables as part of regular operations looks like a bad design to me (rough sketch of an alternative at the end of this mail).

On Monday 16 January 2006 09:10, Orlando Giovanny Solarte Delgado wrote:
> I am designing a system that pulls information from several distributed
> Interbase (RDBMS) databases. It is a web system, and each user can run
> about 50 queries per session. I can have around 100 simultaneous users,
> so I can have up to 5000 queries active at the same time. Each query is
> joined to a spatial component in PostGIS, so I need to store each query's
> result in PostgreSQL in order to use the full capabilities of PostGIS.
> The question is whether I should, for each query, build a table in
> PostgreSQL at execution time, use it, and then drop it. Is a system built
> this way efficient? Is it possible to have 5000 tables in PostgreSQL?
> How well does it perform?
>
> Thanks for your help!
>
> Orlando Giovanny Solarte Delgado
> Ingeniero en Electrónica y Telecomunicaciones
> Universidad del Cauca, Popayan. Colombia.
> E-mail Aux: orlandos@xxxxxxxxx

--
UC
--
Open Source Solutions 4U, LLC
1618 Kelly St                     Phone: +1 707 568 3056
Santa Rosa, CA 95401              Cell:  +1 650 302 2405
United States                     Fax:   +1 707 568 6416
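
For what it is worth, here is a rough, untested sketch of what I mean by just populating tables: one shared results table keyed by session and query instead of one table per query. The table and column names, the SRID and the geometry type below are made up purely for illustration:

    CREATE TABLE query_results (
        session_id  integer NOT NULL,
        query_id    integer NOT NULL,
        attributes  text,
        PRIMARY KEY (session_id, query_id)
    );
    -- register the PostGIS geometry column (SRID 4326, 2-D points, just as an example)
    SELECT AddGeometryColumn('query_results', 'geom', 4326, 'POINT', 2);

    -- store one query's result
    INSERT INTO query_results (session_id, query_id, attributes, geom)
    VALUES (42, 7, 'example row', GeomFromText('POINT(-76.60 2.45)', 4326));

    -- use it through PostGIS as usual
    SELECT attributes FROM query_results
    WHERE session_id = 42 AND query_id = 7;

    -- throw a whole session's results away when the user logs out
    DELETE FROM query_results WHERE session_id = 42;

That way there is only one table to vacuum, and cleaning up after a session is a single DELETE instead of 50 DROP TABLEs.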