On Wed, Jan 15, 2014 at 6:07 PM, Tirthankar Barari <tbarari@xxxxxxxxxxx> wrote:
> My tables are:
>
> table test_metric (
>     id varchar(255) not null, // just auto generated uuid from app
>     timestamp timestamp not null,
>     version int4,
>     entity_id varchar(255) not null,
>     primary key (id, timestamp)
> );
> Indexes:
>     "test_metric_pkey1" PRIMARY KEY, btree (id)
>     "test_metric_entity_id_timestamp_key" UNIQUE CONSTRAINT, btree
>     (entity_id, "timestamp")
>
> AND
>
> table test_metric_metrics (
>     metric_id varchar(255) not null,
>     metrics float8,
>     metrics_key varchar(255) not null,
>     primary key (metric_id, metrics_key)
> );
> Indexes:
>     "test_metric_metrics_pkey" PRIMARY KEY, btree (metric_id, metrics_key)
> Foreign-key constraints:
>     "fk3b8e13abb63406d5" FOREIGN KEY (metric_id) REFERENCES test_metric(id)
>
> Basically, test_metric holds the timestamp and some metadata, and
> test_metric_metrics holds the set of key/value pairs for the given entity
> and timestamp in the parent table.
>
> Is it possible to partition the second table by the timestamp field from
> the first table?

Mmmm... I think not. You could copy the timestamp field into the second table, but then things begin to get hairy: you would need to maintain redirection rules for insertions on both tables, and I do not remember off-hand what the rules are when mixing foreign keys and inheritance; I would have to test them anyway to convince myself it works.

Designing a solution for this requires knowledge of the application and its access patterns. I just wanted to point out that, for maintaining rolling logs of data, inheritance plus some logic can greatly improve performance, since you no longer need deletes, just table drops, which are normally much faster.
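For concreteness, here is a rough sketch of what one timestamp partition of test_metric could look like in the inheritance style (the partition and trigger names, test_metric_y2014m01 etc., and the monthly ranges are made up for illustration). The second table would need the copied timestamp column and its own matching children, and note that a foreign key referencing the parent does not see rows stored in its children, which is part of the hairiness I mentioned:

```sql
-- Hypothetical monthly child table; the CHECK constraint lets the
-- planner skip it when queries constrain "timestamp" (constraint exclusion).
CREATE TABLE test_metric_y2014m01 (
    CHECK ("timestamp" >= '2014-01-01' AND "timestamp" < '2014-02-01')
) INHERITS (test_metric);

-- Redirection rule: route inserts on the parent into the right child.
CREATE OR REPLACE FUNCTION test_metric_insert() RETURNS trigger AS $$
BEGIN
    IF NEW."timestamp" >= '2014-01-01' AND NEW."timestamp" < '2014-02-01' THEN
        INSERT INTO test_metric_y2014m01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for timestamp %', NEW."timestamp";
    END IF;
    RETURN NULL;  -- the row has already been stored in the child
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_metric_insert_trg
    BEFORE INSERT ON test_metric
    FOR EACH ROW EXECUTE PROCEDURE test_metric_insert();
```

This is only the easy half; doing the same for test_metric_metrics while keeping the FOREIGN KEY honest is where you would have to experiment.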
I do it to maintain call records, but I have logic tailored to my particular access pattern: it creates partitions on demand and just uses COPY against the relevant partition, exploiting timestamp locality and invariance (CDRs are normally inserted for recent calls, and timestamps do not normally change). It has made maintenance much easier, as I just coalesce old partitions into non-inheriting archive tables and drop them.

You may be able to design something like this, but if you start to have complex relationships, the way to go until you have explored them is probably just to crank up the delete / vacuum frequency (many small deletes / vacuums normally take more total time but have less impact on db performance, as has been pointed out previously). And if the tables follow the classical pattern for log tables (insert at one end, delete at the other, extremely rare updates), they will probably perform better if excluded from autovacuum and vacuumed explicitly after each deletion batch, or just periodically (inserting does not leave many opportunities to recover space, but you may still want to ANALYZE if vacuum runs infrequently).

Francisco Olarte.

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
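The maintenance pattern above, for a log-style table, could be sketched roughly like this (the retention interval and partition name are made-up examples; adjust to your own schedule):

```sql
-- Exclude the log table from autovacuum; we will vacuum it ourselves
-- right after each deletion batch.
ALTER TABLE test_metric SET (autovacuum_enabled = false);

-- Batch delete of old rows, then an explicit vacuum + analyze.
DELETE FROM test_metric WHERE "timestamp" < now() - interval '90 days';
VACUUM ANALYZE test_metric;

-- With inheritance partitions, retiring old data avoids the delete
-- entirely: detach the child into a plain archive table, or drop it.
ALTER TABLE test_metric_y2014m01 NO INHERIT test_metric;
DROP TABLE test_metric_y2014m01;
```

The DROP TABLE route is what makes the rolling-log setup fast: it reclaims the space immediately instead of leaving dead tuples for vacuum to find.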