On Sun, Oct 02, 2005 at 11:27:29AM -0400, Tom Lane wrote:
> Steve Manes <smanes@xxxxxxxxxx> writes:
> > Questions: is there a hard limit to the number of schemas you could
> > have in a database?
>
> No.
>
> > Are there any caveats/pitfalls/pitbulls to having a
> > large number of duplicate schemas in a database?
>
> If that also implies a large number of tables, you might start to run
> into filesystem-level bottlenecks due to having a large number of files
> in the same directory. If you aren't using a filesystem that copes
> gracefully with huge directories, you probably want to avoid having more
> than a few thousand files per directory. (As of PG 8.0 you can work
> around this to some extent by segregating tables into different
> tablespaces.)

Some of the "\ commands" in psql will also get slow; I've seen this with a
database of about 4000 tables.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@xxxxxxxxxxxxx
Pervasive Software      http://pervasive.com      work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
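
For reference, a minimal sketch of the 8.0 tablespace work-around Tom
mentions, plus a catalog query to gauge how many relations each schema
contributes. The tablespace name, directory path, schema, and table are
hypothetical; the directory must already exist and be owned by the
postgres OS user, and each table may still map to more than one on-disk
file (indexes, TOAST, >1GB segments).

    -- Hypothetical names/paths; requires PostgreSQL 8.0 or later.
    CREATE TABLESPACE client_a_space LOCATION '/var/lib/pgsql/tblspc/client_a';

    -- New tables can then be placed in that tablespace explicitly:
    CREATE TABLE client_a.orders (
        order_id  serial PRIMARY KEY,
        placed_at timestamptz NOT NULL DEFAULT now()
    ) TABLESPACE client_a_space;

    -- Rough count of relations per schema from the system catalogs:
    SELECT n.nspname AS schema, count(*) AS relations
    FROM   pg_class c
    JOIN   pg_namespace n ON n.oid = c.relnamespace
    GROUP  BY n.nspname
    ORDER  BY count(*) DESC;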