We have a large database system designed around partitioning. Our application is characterized by:

- terabytes of data
- billions of rows in dozens of base tables (and hundreds of partitions)
- a 24x7 insert load of new data that cannot be stopped; the data is time sensitive
- periodic reports that can include long running queries, with query times measured in hours

We have two classes of "maintenance" activity that are causing us problems:

- periodically we need to change an insert rule on a view to point to a different partition
- periodically we need to delete data that is no longer needed, performed via truncate

Under both of these circumstances (truncate and create or replace rule), the locking behaviour of the commands can cause problems for us. The scenario is best illustrated as a series of steps:

1- a long running report is running against the view
2- continuous inserts are arriving through the view, routed into a table by a rule
3- a truncate or rule change occurs, taking an exclusive lock; it must wait for #1 to finish
4- new reports and inserts must now wait behind #3
5- now everyone is waiting on the single query from #1, which results in a loss of insert data granularity (important for our application)

We would like to understand the implications of changing postgres' code/locking so that rule changes and truncate no longer lock out select statements.

The following simplified schema illustrates the problem:

create table a_1 ( pkey int primary key );
create table a_2 ( pkey int primary key );

create view a as select * from a_1 union all select * from a_2;

-- note: "insert" is a reserved word and cannot be used as the rule
-- name, so the rule is called a_insert here
create function change_rule(int) returns void as $$
begin
    execute 'create or replace rule a_insert as on insert to a '
         || 'do instead insert into a_'||$1||'(pkey) values(NEW.pkey)';
end;
$$ language plpgsql;

-- to change the rule, execute something like the following periodically:
select change_rule(1);

We've looked at the code, and the rule changes appear "easy", but we are concerned about the changes required for truncate.

Thanks
Marc
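For what it's worth, the pile-up in steps 1-5 can be reproduced on the simplified schema with three concurrent psql sessions, something like the sketch below (pg_sleep is only a stand-in for a genuinely long running report query, and assumes change_rule(1) has already been run so inserts route to a_1):

```sql
-- session 1: the long running report; querying the view takes
-- ACCESS SHARE locks on a and on the underlying tables a_1, a_2
select count(*) from a, pg_sleep(3600);

-- session 2: the maintenance action needs an ACCESS EXCLUSIVE lock
-- on a_1, so it queues behind session 1's report
truncate table a_1;
-- (or equivalently: select change_rule(2);  which locks the view)

-- session 3: an ordinary insert needs only ROW EXCLUSIVE on a_1,
-- which would not conflict with session 1, but it queues behind
-- session 2's pending ACCESS EXCLUSIVE request -- this is the
-- point where insert granularity is lost
insert into a (pkey) values (42);
```

Watching pg_locks while the three sessions run shows session 2 and session 3 both with granted = false until the report in session 1 completes.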