I must update 3M of 100M records, with tuple-specific modifications. I can generate the necessary SQL, but I'm wondering whether files of simple UPDATE statements, each affecting a single row, would be more effective than files of calls to a function that performs the same update given the necessary values, including the WHERE-clause restrictions. The plan prepared for the first call should be reasonable for the remainder.

Alternatively, would a bulk load into a table of replacement values plus join info be the fastest way?

Either way, I can break the updates into roughly 393 transactions (~7,500 rows affected per transaction) or 8,646 transactions (~350 rows per transaction), if less is more in this world. I'll be the only user during this work.

OS: CentOS 7, 4-core virtual machine, 64 GB memory; PostgreSQL 10.0.
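For reference, here is a rough sketch of the bulk-load variant I have in mind. All table and column names (`replacements`, `big_table`, `id`, `new_value`, the CSV filename) are placeholders, not my actual schema:

```sql
-- Staging table for the replacement values; UNLOGGED is an option
-- since I'm the only user and the data can be reloaded on crash.
CREATE UNLOGGED TABLE replacements (
    id        bigint PRIMARY KEY,  -- join key into the big table
    new_value text NOT NULL        -- replacement value for that row
);

-- Bulk-load the 3M replacement rows (psql client-side copy).
\copy replacements FROM 'replacements.csv' WITH (FORMAT csv)

-- Give the planner fresh statistics on the staging table.
ANALYZE replacements;

-- One set-based pass instead of 3M single-row UPDATE statements.
UPDATE big_table b
SET    value = r.new_value
FROM   replacements r
WHERE  b.id = r.id;
```

The appeal is that the planner picks one join strategy for the whole pass; if transaction size matters, the final UPDATE could instead be run in batches over ranges of `id`.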