Re: max_files_per_process limit


 



On Tue, Nov 11, 2008 at 5:10 AM, Dilek Küçük <dilekkucuk@xxxxxxxxx> wrote:
> On Mon, Nov 10, 2008 at 4:51 PM, Achilleas Mantzios
> <achill@xxxxxxxxxxxxxxxxxxxxx> wrote:
>>
>> On Monday 10 November 2008 16:18:37, Dilek Küçük wrote:
>> > Hi,
>> >
>> > We have a database of about 62000 tables (about 2000 tablespaces) with
>> > an index on each table. PostgreSQL version is 8.1.
>>
>> So you have about 62000 distinct schemata in your db?
>> Imagine that the average enterprise has about 200 tables max,
>> and an average-sized country has about 300 such companies,
>> including the public sector; with 62000 tables you could blindly model
>> .... the whole activity of a whole country.
>>
>> Is this some kind of replicated data?
>> What's the story?
>
> Actually we had 31 distinct tables, but this amounted to tens of billions of
> records (streaming data from 2000 sites) per table a year, so we
> horizontally partitioned each table into 2000 tables. This allowed us to
> discard one of the indexes we had created and freed us from periodic
> CLUSTER operations, which had turned out to be infeasible for a system
> with tight querying constraints in terms of time.
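For readers following along: on PostgreSQL 8.1 this kind of per-site horizontal partitioning would typically be done with table inheritance plus CHECK constraints, so the planner can skip irrelevant children when constraint_exclusion is on. A minimal sketch (table and column names here are hypothetical, not from the poster's schema):

```sql
-- Hypothetical parent table for the streaming data, partitioned by site.
CREATE TABLE readings (
    site_id   integer   NOT NULL,
    recorded  timestamp NOT NULL,
    value     numeric
);

-- One child table per site (the poster has ~2000 of these per logical
-- table). The CHECK constraint is what enables constraint exclusion.
CREATE TABLE readings_site_0001 (
    CHECK (site_id = 1)
) INHERITS (readings);

-- A single index per child, as described in the original post.
CREATE INDEX readings_site_0001_recorded_idx
    ON readings_site_0001 (recorded);

-- With constraint_exclusion enabled, a query filtering on site_id
-- only touches the matching child table.
SET constraint_exclusion = on;
SELECT *
  FROM readings
 WHERE site_id = 1
   AND recorded >= '2008-01-01';
```

Note that each backend may keep a file descriptor open per table and index it touches, which is why a layout like this runs into the max_files_per_process limit from the thread subject.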
Any chance of combining less-used tables back together to reduce the number of them?  I'd also look at using more schemas and fewer tablespaces.  Just a thought.
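The schemas-instead-of-tablespaces suggestion would look something like this (again with hypothetical names; `ALTER TABLE ... SET SCHEMA` is available from 8.1):

```sql
-- Group each site's tables under a schema rather than giving the site
-- its own tablespace; schemas are pure namespaces and cost nothing
-- on disk, while each tablespace is a separate filesystem location.
CREATE SCHEMA site_0001;
ALTER TABLE readings_site_0001 SET SCHEMA site_0001;
```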
--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
