Re: Big table and partition

On 10/14/23 15:46, Rajesh Kumar wrote:
Hi

In my env, I have a 30GB table

That's not necessarily huge.

where data is inserted directly whenever a new block is created on the blockchain...

How many records for that 30GB?

How many days (or months)?

Will the rate of insertion increase?

What retention is required?

Sometimes a query runs to check whether the most recent block has been created.

I have been asked to partition it. I told my manager that since we are not using the past data at all, partitioning is not required. Instead we can stop the sync (downtime not required), rename the table, create a new table with the same structure under the original name, and keep only this month's data in it. Then we resume the blockchain sync so that new data goes to the newly created table, while the old data simply sits there as storage, since we never query it...
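The rename-and-start-fresh approach described above can be sketched as follows. This is a minimal illustration using Python's sqlite3 module so it runs standalone; the table and column names are invented, and on PostgreSQL the equivalent DDL (ALTER TABLE ... RENAME TO, CREATE TABLE) would ideally run in a single transaction while the sync is paused.

```python
# Sketch of the "park the old table, recreate it empty" swap.
# Assumed schema (blocks / block_no / created_at) is illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Existing table full of historical blocks.
cur.execute("CREATE TABLE blocks (block_no INTEGER PRIMARY KEY, created_at TEXT)")
cur.executemany("INSERT INTO blocks VALUES (?, ?)",
                [(1, "2023-09-01"), (2, "2023-10-01")])

# 1. Park the old data under a new name.
cur.execute("ALTER TABLE blocks RENAME TO blocks_archive")
# 2. Recreate an empty table with the original name and structure.
cur.execute("CREATE TABLE blocks (block_no INTEGER PRIMARY KEY, created_at TEXT)")
conn.commit()

# New inserts from the resumed sync land in the fresh table;
# the archived rows are untouched.
cur.execute("INSERT INTO blocks VALUES (3, '2023-10-14')")
print(cur.execute("SELECT COUNT(*) FROM blocks").fetchone()[0])
print(cur.execute("SELECT COUNT(*) FROM blocks_archive").fetchone()[0])
```

The swap itself is metadata-only, which is why no real downtime is needed beyond pausing the sync.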

I've been burned by inefficient queries when having to add a date column to the primary key.  We "departitioned" every table except the two that had large bytea columns and stored up to 200GB per month.


Note: in the longer run, data will keep coming in whenever a new block is created.

Is that right? Any suggestions? Better options?

In my experience, PostgreSQL deletes based on an indexed date field are pretty efficient.  Sometimes I delete a month of data at once, and sometimes one day of data at a time.
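The date-based delete pattern mentioned above looks like this. Again a self-contained sketch using sqlite3 with invented names; on PostgreSQL the same shape (an index on the date column, then DELETE ... WHERE created_at < cutoff) applies, whether the cutoff trims a day or a month at a time.

```python
# Minimal sketch of retention deletes keyed on an indexed date column.
# Table name, column names, and dates are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
cur.execute("CREATE INDEX idx_events_created_at ON events (created_at)")
cur.executemany("INSERT INTO events (created_at) VALUES (?)",
                [("2023-08-15",), ("2023-09-10",), ("2023-10-01",)])

# Drop everything older than the retained window in one statement;
# the index lets the planner locate the doomed rows without a full scan.
cur.execute("DELETE FROM events WHERE created_at < ?", ("2023-09-01",))
conn.commit()
print(cur.execute("SELECT COUNT(*) FROM events").fetchone()[0])
```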


--
Born in Arizona, moved to Babylonia.




