On Mon, Sep 26, 2011 at 18:59, Andy Chambers <achambers@xxxxxxxx> wrote:
> ...and I want to select the data from that table, adding a "group_id"
> column, and a "record_id" column. The "group_id" would start at 1,
> and increment by 1 every 100 rows, and the "record_id" would increment
> by 1 every row, but restart at 1 for each new group_id.

I can't see why you would want this, but regardless...

Since you didn't list a primary key for the table, I'm using "ctid" as a
substitute. But that's a PostgreSQL-specific hack; normally you would use
the real primary key instead.

update addresses
  set group_id  = (nr / 100) + 1,
      record_id = (nr % 100) + 1
  from (select ctid, row_number() over () - 1 as nr
        from addresses) as subq
  where subq.ctid = addresses.ctid;

This isn't going to be fast for a large table, as it will effectively make
three passes over the table, but it will get the job done.

If you want the numbers to be ordered by certain columns, you'd use
  row_number() OVER (ORDER BY col1, col2)
instead of the bare row_number() OVER () above.

Regards,
Marti
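
P.S. Here's a minimal, self-contained sketch you can paste into psql to see
the numbering in action. The table layout here (an "id" serial primary key,
a "street" column, and pre-existing "group_id"/"record_id" columns) is just
a placeholder, not taken from your actual schema:

    -- throwaway test table; column names are hypothetical
    create table addresses (
        id        serial primary key,
        street    text,
        group_id  integer,
        record_id integer
    );

    -- 250 dummy rows, so we expect three groups
    insert into addresses (street)
    select 'Street ' || g from generate_series(1, 250) as g;

    -- same idea as above, but joining on the real primary key
    -- and numbering the rows in id order
    update addresses
    set group_id  = (nr / 100) + 1,
        record_id = (nr % 100) + 1
    from (select id, row_number() over (order by id) - 1 as nr
          from addresses) as subq
    where subq.id = addresses.id;

    -- group_id should run 1..3, and record_id should restart at 1
    -- within each group (1..100, 1..100, 1..50)
    select group_id, min(record_id) as first_rec,
           max(record_id) as last_rec, count(*) as rows_in_group
    from addresses
    group by group_id
    order by group_id;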