Casey Duncan <casey@xxxxxxxxxxx> writes:
> I was also trying to figure out how big the sample really is. Does a
> stats target of 1000 mean 1000 rows sampled?

No.  From memory, the sample size is 300 times the stats target (eg,
3000 rows sampled for the default target of 10).  This is based on some
math that says that's enough for a high probability of getting good
histogram estimates.  Unfortunately that math promises nothing about
n_distinct.

The information we've seen says that the only statistically reliable
way to arrive at an accurate n_distinct estimate is to examine most of
the table :-(.  Which seems infeasible for extremely large tables,
which is exactly where the problem is worst.  Marginal increases in
the sample size seem unlikely to help much ... as indeed your
experiment shows.

We could also diddle the estimator equation to inflate the estimate.
I'm not sure whether such a cure would be worse than the disease, but
certainly the current code was not given to us on stone tablets.  IIRC
I picked an equation out of the literature partially on the basis of
it being simple and fairly cheap to compute ...

			regards, tom lane
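
For reference, a rough sketch of the two points above: the
300-rows-per-target sampling rule, and one distinct-value estimator from
the literature, the Haas-Stokes "Duj1" estimator. This is an
illustration only, not the actual analyze.c code; the function names are
invented for the example.

```python
# Sketch only -- NOT PostgreSQL's implementation.
from collections import Counter

def sample_rows(stats_target: int) -> int:
    """Sample size: 300 rows per unit of statistics target,
    so the default target of 10 samples 3000 rows."""
    return 300 * stats_target

def duj1_estimate(sample: list, total_rows: int) -> float:
    """Haas-Stokes Duj1 estimate of the number of distinct values.

    n  = sample size
    d  = distinct values seen in the sample
    f1 = values seen exactly once in the sample
    N  = total rows in the table

    Duj1:  n * d / (n - f1 + f1 * n / N)

    When few sampled values are unique (f1 small), the estimate
    stays near d; when nearly all are unique (f1 close to n), it
    scales d up toward N -- which is where the inaccuracy on very
    large tables comes from.
    """
    n = len(sample)
    counts = Counter(sample)
    d = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    return n * d / (n - f1 + f1 * n / total_rows)

# Default stats target of 10 -> 3000 rows sampled:
assert sample_rows(10) == 3000
```

Note how little the estimator has to go on: with one sample of a few
thousand rows it must guess whether the unseen part of the table repeats
the sampled values or keeps introducing new ones, and no marginal bump
in sample size resolves that ambiguity.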