Les <nagylzs@xxxxxxxxx> writes:
>>> If I try to select a single unused block this way:
>>> explain analyze select id from media.block b where nrefs = 0 limit 1
>>> then it runs for more than 10 minutes (I'm not sure how long, I
>>> cancelled the query after 10 minutes).

>> You might think that even so, it shouldn't take that long ... but
>> indexes on UUID columns are a well-known performance antipattern.

> I'm aware of the problems with random UUID values. I was using this
> function to create ulids from the beginning:

Oh, well that would have been useful information to provide at the
outset.  Now that we know the index order is correlated with creation
time, I wonder if it is also correlated with nrefs, in such a way that
scanning in index order is disastrous because all the desired rows are
at the end of the index.

Also, you deny deleting any rows, but that wasn't the point.  Do you
ever update nrefs from zero to nonzero?  That'd also leave dead entries
behind in this index.  If that is a routine event that is correlated
with creation time, it gets easier to believe that your index could
have lots of dead entries at the front.  We'd still have to figure out
why autovacuum is failing to clean out those entries in a timely
fashion, but that seems like a plausible way for the performance
problem to exist.

> I can try to do VACUUM on this table, but I'm limited on resources. I
> think it will take days to do this. Maybe I can try to dump the whole
> database and restore it on another machine.

Pretty hard to believe that dump-and-restore would be faster than
VACUUM.

> (Is there a way to check the number of dead rows?)

I think contrib/pgstattuple might help.

			regards, tom lane
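
To make the pgstattuple suggestion concrete, here is a sketch of how
the dead-row check might look.  The table name comes from the quoted
query above; the index name block_nrefs_idx is a guess, so substitute
whatever the real index on nrefs is called:

    -- Cheap estimate from the statistics collector (no table scan);
    -- also shows when autovacuum last ran on the table:
    SELECT n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    WHERE schemaname = 'media' AND relname = 'block';

    -- Exact counts via contrib/pgstattuple; this reads the whole
    -- relation, so expect it to be slow on a very large table:
    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT dead_tuple_count, dead_tuple_percent
    FROM pgstattuple('media.block');

    -- B-tree bloat/fragmentation for the index being scanned
    -- (index name is hypothetical):
    SELECT avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('media.block_nrefs_idx');

The pg_stat_user_tables numbers are only estimates, but they cost
nothing; pgstattuple gives exact figures at the price of reading the
whole table or index.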