On 10/1/15 6:48 PM, Jonathan Vanasco wrote:
> 1. General performance at different stages of DB size. With 18 sequences, our keys/indexes are simply smaller than they'd be with 1 key. I wonder how this will impact lookups and joins.
I'm not really following here... the size of an index is determined by the number of tuples in it and the average width of each tuple. So as long as you're using the same data type, 18 sequences vs. 1 won't change the size of your indexes.
> 2. Managing this sequence when next scaling the DB (which would probably have to be sharding, unless others have a suggestion).
Sequences are designed to be extremely fast to assign. If you ever did find a single sequence becoming a bottleneck, you could always start caching values in each backend (the CACHE option of CREATE SEQUENCE). I think it'd be hard, if not impossible, to turn a single global sequence into a real bottleneck.
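A minimal sketch of per-backend value caching using the CACHE option of CREATE SEQUENCE; the sequence and table names here are hypothetical:

```sql
-- Each backend pre-allocates 100 values instead of touching the sequence
-- on every assignment, reducing contention on a single global sequence.
-- (Cached values are discarded when a backend exits, so IDs may have gaps.)
CREATE SEQUENCE global_id_seq CACHE 100;

-- Used as a column default like any other sequence:
CREATE TABLE widget (
    id   bigint PRIMARY KEY DEFAULT nextval('global_id_seq'),
    name text
);
```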
If you start sharding you'll need to either (1) create a composite ID where part of the ID is a shard identifier (say, the top 8 bits), or (2) assign each shard its own dedicated range of IDs. There's work being done right now to make option 2 a bit easier. Probably better still would be to shard on something like object or customer; that way you only have to look up which shard a customer lives in.
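Option 1 above can be sketched in SQL; the shard number and sequence name are hypothetical. With bigint IDs and an 8-bit shard identifier in the top bits, 56 bits remain for the per-shard value:

```sql
-- Assume this node is shard 3 and has its own local sequence.
CREATE SEQUENCE local_id_seq;

-- Build a globally unique ID: shard number in the top 8 bits,
-- the local sequence value in the bottom 56 bits.
SELECT (3::bigint << 56) | nextval('local_id_seq') AS global_id;

-- Recover the shard from any such ID later, e.g. for routing:
SELECT ((3::bigint << 56) | 12345) >> 56 AS shard;  -- returns 3
```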
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general