On Mon, 15 Apr 2024 at 02:25, jack <jack4pg@xxxxxxx> wrote:
> Then I tried to update the data in blocks of 100,000 records and it crashed 4 times on 4 different blocks.
> So I updated the first crashed block down to a block of 10 records, until it crashed.
> Then I updated each of the 10 records individually to identify the record that is problematic, but then all 10 records updated without crashing!
> Pure insanity!
> I have been working on this for just over 1 year now, documenting every step, and I am still unable to get this to work without it crashing somewhere along the way.
> I am beginning to wonder if postgreSQL is bi-polar.

There are two things I can think of that might make PostgreSQL appear "bi-polar" here:

1) The plan isn't the same when you update a small number of rows vs a large number of rows. The planner may prefer an index depending on what's in your WHERE clause.

2) JIT compilation is less likely with a small number of rows, as the plan will come out cheaper.

A stack trace, as mentioned by Tom, would be ideal, but if you can't get that, I'd try:

a) Disable "jit" in postgresql.conf and reload the config file with SELECT pg_reload_conf();

b) Still crashing? Try dropping all indexes before performing the UPDATE.

c) Still crashing? Can you make it consistently crash with one problematic CSV file and UPDATE statement? Can you narrow the UPDATE's scope by forcing a TID Range scan, starting with a WHERE clause such as:

WHERE ctid >= '(0,0)' AND ctid <= '(4294967294,0)';

then progressively home in on the problematic record by "binary searching" the ctid range. You can get the number of blocks in the table with:

select pg_relation_size('your_table'::regclass) / current_setting('block_size')::int;

Replace the 4294967294 with the value from that query and bisect the table until you home in on the block with the issue (a sketch of the bisection is below).

David
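
To make the bisection in c) concrete, here is a minimal sketch. The table name (your_table), the no-op assignment (SET some_col = some_col), the 100000 block count and the block number in the final SELECT are all placeholders; use your real table, your real UPDATE (just append the ctid range to its WHERE clause) and the block count returned by the query above. The BEGIN/ROLLBACK wrapping is optional and only there so the ranges that succeed don't actually change any data while you search.

-- Assume the block-count query above reported ~100000 blocks.

BEGIN;
UPDATE your_table SET some_col = some_col           -- first half: blocks 0..49999
WHERE ctid >= '(0,0)' AND ctid < '(50000,0)';
ROLLBACK;

BEGIN;
UPDATE your_table SET some_col = some_col           -- second half: blocks 50000..100000
WHERE ctid >= '(50000,0)' AND ctid <= '(100000,0)';
ROLLBACK;

-- Split whichever half crashes, e.g. if the first half crashed:
BEGIN;
UPDATE your_table SET some_col = some_col           -- blocks 0..24999
WHERE ctid >= '(0,0)' AND ctid < '(25000,0)';
ROLLBACK;

-- Keep halving the crashing range until it covers a single block, then
-- look at the rows in that block (12345 here is just a placeholder):
SELECT ctid, * FROM your_table
WHERE ctid >= '(12345,0)' AND ctid < '(12346,0)';

Each crash takes the backend down and terminates your session, so reconnect before trying the next range.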