I have been thinking about an alternate means of implementing global
sequences that I feel would simplify things. Rather than chunking out
blocks, set an increment value for each sequence equal to the number of
nodes in the "cluster". Each node has an offset. So, if you have 10 nodes,
node 1 has offset 0 and node 10 has offset 9. The first time a nextval is
requested on a sequence after starting up, it makes certain that its value
is set where val mod 10 is equal to the node's offset. If not, it
increments up to that value. From that point forward, sequences are
incremented by 10 each time. This would work even if you added new nodes,
and you could add intelligence to support setting proper initial sequence
values when tables are altered. I may be overlooking something, but it
seems like a fairly simple solution that would work.

On Jan 7, 2015, at 8:25 AM, Craig Ringer <craig@xxxxxxxxxxxxxxx> wrote:

> On 01/04/2015 12:33 AM, John Casey wrote:
> While attempting to alter a table to add a global sequence as a primary
> key using the following commands:
>
> CREATE SEQUENCE my_table_id_seq USING bdr;
>
> ALTER TABLE my_table
>   ADD COLUMN id integer PRIMARY KEY NOT NULL DEFAULT
>     nextval('my_table_id_seq'::regclass);
>
> I started to notice some issues that concerned me. In order to create
> the sequence, I had to have the replication running. To alter the table,
> I had to stop replication. The only way I really knew how to do this was
> to remove the bdr properties in the postgres configuration file and
> restart. At that point, I executed the ALTER TABLE code; when it got to
> 15000 records, Postgres informed me that I needed to turn replication
> back on so the nodes could agree on additional sequence allocations.

In general global sequences don't play well with full table rewrites.
That's why BDR prevents the full table rewrite.

What you need to do is do the ALTER without the NOT NULL DEFAULT. Then
ALTER the column to add the DEFAULT so new rows get it. Now UPDATE the
table in chunks to allocate IDs where they're null. Finally, once it's
fully populated, ALTER it to add the NOT NULL constraint.
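Roughly, those steps might look something like this - a sketch only,
reusing the table and sequence names from your original post. The chunk
size and the ctid-based batching are just illustrative; the point is that
each UPDATE allocates only a bounded number of global sequence values at
a time:

  -- Add the column with no default and no NOT NULL, so no table rewrite
  -- is forced:
  ALTER TABLE my_table ADD COLUMN id integer;

  -- Let newly inserted rows pick up values from the global sequence:
  ALTER TABLE my_table
    ALTER COLUMN id SET DEFAULT nextval('my_table_id_seq'::regclass);

  -- Backfill existing rows in chunks; rerun until no rows are updated:
  UPDATE my_table
     SET id = nextval('my_table_id_seq'::regclass)
   WHERE ctid IN (SELECT ctid FROM my_table WHERE id IS NULL LIMIT 5000);

  -- Once every row has an id, add the constraints:
  ALTER TABLE my_table ALTER COLUMN id SET NOT NULL;
  ALTER TABLE my_table ADD PRIMARY KEY (id);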
To get rid of the need for this it'd have to be possible to allow
blocking nextval(..) on global sequences, which for internal reasons is
way more complicated than you might expect.

> When I turned it back on, it just kind-of wigged out.

Disabling replication during sequence voting isn't something that's been
specifically tested for. Some details on "wigged out" would be useful,
though. In general, once BDR is active it's not a great idea to disable
it, make changes, then re-activate it on a database.

> So, how is this supposed to work?

As above - create the sequence, populate IDs in batches, then set the
default and not-null constraint at the end.

> In addition, what happens when you have very disparate databases that
> are both updated often and connected occasionally (which is what we
> have)? Will it quit doing inserts until it is connected to the other
> databases again? That would be really bad.

If you're relying on global sequences and your write rates are fairly
high but your databases are only intermittently connected, then yes,
you're probably going to have times where you run out of allocated
sequence values. You may want to use UUID keys instead, or one of the
other conventional approaches.

Down the track some more control over global sequences might be
possible - controlling how early new chunks are allocated, how big the
chunks are, etc. At the moment it's all pretty fixed, and it's really
suited to systems where they're connected most of the time.

--
Craig Ringer                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general