Hi all,

I have a table with subscriptions to some kind of campaigns, and every contact can have at most one running subscription per campaign:

    CREATE TABLE subscriptions (
        id SERIAL NOT NULL PRIMARY KEY
      , campaign_id INT NOT NULL
      , contact_id INT NOT NULL
      , done BOOL NOT NULL
    );
    CREATE UNIQUE INDEX subscriptions_running
        ON subscriptions (campaign_id, contact_id)
        WHERE NOT done;

Now I want to add, say, 100,000 contacts to a campaign. The contacts may or may not already have a subscription, and I want to simply ignore the ones which already have a not-done subscription. I begin a transaction and loop over the contact IDs with:

    INSERT INTO subscriptions (campaign_id, contact_id, done)
    VALUES ($1, $2, FALSE)
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done
    DO NOTHING
    RETURNING id;

This does what it should do, so in that sense it's fine :) But it's still a network roundtrip per statement, and it takes a while to run.

How about doing it in a single query, sending all contact IDs at once:

    WITH ids AS (SELECT unnest($2::int[]) AS contact_id)
    INSERT INTO subscriptions (campaign_id, contact_id, done)
    SELECT $1, ids.contact_id, FALSE
    FROM ids
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done
    DO NOTHING
    RETURNING id;

where $2 is an array of integers. This is a single query, it behaves the same as the loop above, and it's indeed faster.

Are there any known problems with this strategy? Are there any other methods of inserting lots of records in a nicer way? The alternatives I know of, and the only options I could find documented, are:

- build one huge custom INSERT statement (sketched in the PS below)
- use COPY, but that doesn't work with ON CONFLICT as far as I can see (though a staging-table workaround is sketched in the PPS below)

Any thoughts?

Thanks!
Harmen
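PS: to make the first alternative concrete, by a "huge custom insert statement" I mean a multi-row VALUES list built client-side, roughly like this (a sketch; the batch size and the parameter numbering are up to the client):

    -- One statement per batch; the client generates one tuple per contact
    -- and flattens all the parameters into a single list.
    INSERT INTO subscriptions (campaign_id, contact_id, done)
    VALUES ($1, $2, FALSE)
         , ($1, $3, FALSE)
         , ($1, $4, FALSE)  -- ...and so on, one tuple per contact in the batch
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done
    DO NOTHING
    RETURNING id;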
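PPS: I suppose the COPY limitation could be worked around indirectly with a staging table: COPY the IDs into a temporary table, then run a single INSERT ... SELECT with the same ON CONFLICT clause. A sketch, untested; the staging table name and the literal campaign id 42 are placeholders:

    BEGIN;

    -- Throwaway table, dropped automatically at commit.
    CREATE TEMP TABLE staging_contacts (contact_id INT NOT NULL) ON COMMIT DROP;

    -- Bulk-load all the contact ids over a single COPY stream.
    COPY staging_contacts (contact_id) FROM STDIN;

    -- Then one INSERT with the same conflict handling as before.
    INSERT INTO subscriptions (campaign_id, contact_id, done)
    SELECT 42, contact_id, FALSE
    FROM staging_contacts
    ON CONFLICT (campaign_id, contact_id) WHERE NOT done
    DO NOTHING
    RETURNING id;

    COMMIT;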