David Garamond <lists@xxxxxxxxxxxxxxxxxxxxx> writes:

> Merlin Moncure wrote:
> > 6. for large tables, you can get a pretty accurate count by doing:
> >
> >     select count(*) * 10 from t where random() > .9;
> >
> > on my setup, this shaved about 15% off of the counting time... YMMV.
>
> That's an interesting idea, using sampling to get an estimate.

It's an interesting idea, but this particular implementation isn't going
to save any time. It still has to read every record, and now it also has
to spend extra time calling random() and doing the arithmetic. For
sampling to speed things up, you would have to use an index to actually
reduce the number of records read.

The database could be clever and implement the same kind of sampling
vacuum does: pick a random sample of pages from the table without using
an index. But there's no way to get that behaviour from the user-visible
features.

--
greg

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

               http://www.postgresql.org/docs/faqs/FAQ.html
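
For comparison, the page-level sampling Greg describes is the same
mechanism that keeps the planner's stored row-count estimate in
pg_class.reltuples fresh, and later PostgreSQL releases (9.5 and up)
expose page sampling directly as TABLESAMPLE SYSTEM. A minimal sketch of
both estimates, assuming the table t from the thread has been ANALYZEd
recently; the 1% sample rate here is an arbitrary choice:

    -- Planner's stored row-count estimate, refreshed by VACUUM/ANALYZE's
    -- own page sampling; reads no table data at all.
    SELECT reltuples::bigint AS estimated_rows
      FROM pg_class
     WHERE oid = 't'::regclass;

    -- Page-level sampling (PostgreSQL 9.5+): SYSTEM reads roughly 1% of
    -- the table's pages at random, so scale the sampled count by 100.
    SELECT count(*) * 100 AS estimated_rows
      FROM t TABLESAMPLE SYSTEM (1);

Unlike the random()-per-row filter above, both of these avoid reading
every record: the first touches only the catalog, and the second touches
only the sampled pages.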