> tfinneid@xxxxxxxxxxxxxxxxxxxxx wrote:
>> > tfinneid@xxxxxxxxxxxxxxxxxxxxx wrote:
>> >
>> >> > are a dump of Postgres's current memory allocations and could be
>> >> > useful in showing if there's a memory leak causing this.
>> >>
>> >> The file is 20M; these are the last lines (the first line continues
>> >> until ff_26000):
>> >>
>> >> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free
>> >> (0 chunks); 632 used
>> >
>> > You have 26000 partitions???
>>
>> At the moment the db has 55000 partitions, and that's only a fifth of
>> the complete volume the system will have in production. The reason I
>> chose this solution is that a partition will be loaded with new data
>> every 3-30 seconds, and all of it will be read by up to 15 readers
>> every time new data is available. The data will be approx 2-4 TB in
>> production in total, so it will be too slow if I put it in a single
>> table with permanent indexes.
>>
>> I did a test previously where I created 1 million partitions (without
>> data) and checked the limits of pg, so I think it should be ok.
>
> Clearly it's not.

That does not mean my problem has anything to do with the number of
partitions. It might, or it might not, and that's the problem: the cause
has not been located yet. According to the documented limits of pg, ...

> The difference could be the memory usage and wastage for all those
> relcache entries and other stuff. I would reduce the number of
> partitions to a more reasonable value (within the tens, most likely).

The db worked fine until it reached perhaps 30-40 thousand partitions.
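For readers following the thread: a minimal sketch of the partition-per-load
scheme being described, using the inheritance-based partitioning available in
pg at the time. The table and column names here are guesses reconstructed
from the index name in the memory-context dump above
(idx_attributes_g1_seq_1_ff_4_value7); the actual schema was not posted.

    -- parent table; children are added as new loads arrive
    create table attributes (
        seq    integer not null,
        ff     integer not null,
        value7 text
    );

    -- hypothetical child table for one load, created every 3-30 seconds
    -- by the writer; the check constraint lets constraint exclusion
    -- skip irrelevant partitions at query time
    create table attributes_g1_seq_1_ff_4 (
        check (seq = 1 and ff = 4)
    ) inherits (attributes);

    -- each child gets its own index, since indexes are not inherited
    create index idx_attributes_g1_seq_1_ff_4_value7
        on attributes_g1_seq_1_ff_4 (value7);

Note that every such child table and index gets its own relcache entry and
memory context in each backend that touches it, which is what the lines in
the 20M dump above correspond to; with tens of thousands of partitions that
per-relation overhead is the suspected source of the memory growth.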