On Fri, Jul 11, 2008 at 5:53 PM, Simon Riggs <simon@xxxxxxxxxxxxxxx> wrote:
>
> On Fri, 2008-07-11 at 15:25 -0600, Scott Marlowe wrote:
>> On Fri, Jul 11, 2008 at 3:24 PM, Simon Riggs <simon@xxxxxxxxxxxxxxx> wrote:
>> >
>> > On Fri, 2008-07-11 at 14:51 -0600, Scott Marlowe wrote:
>> >
>> >> I would kindly disagree. I'm looking at a project where HOT updates
>> >> are going to be a real performance enhancement, but I'll have to
>> >> create a hundred or so tables ALL with fillfactor tacked on the end.
>> >
>> > You clearly think that adjusting fillfactor helps in all cases with HOT.
>> > I disagree with that, else I would have pushed earlier for exactly what
>> > you suggest. In fact, I've had this exact discussion previously.
>>
>> How odd, because that's clearly NOT what I said. In fact I used the
>> singular "a" to describe the one project I was looking at where having a
>> default table fillfactor of < 100 would be very useful. OTOH, I have
>> stats databases that only insert data and drop child tables, which would
>> not benefit from a fillfactor < 100. For a heavily updated database
>> where most of the updates will NOT be on indexed columns, as in the ONE
>> project I'm looking at, a default fillfactor would be quite a time
>> saver.
>
> I apologise if my phrasing sounded confrontational.
>
> For specific workloads, tuning of particular tables can be effective.
>
> I have not heard of evidence that setting fillfactor < 100 helps as an
> across-the-board tuning measure in longer-term performance tests.
> Theoretically, it makes little sense, but current theory is not always
> right. Until we have even hearsay evidence of benefit, introducing a
> parameter would be inadvisable, IMHO. I will change that view in an
> instant, given reasonable evidence.

Ok, here's my scenario. We have a content management / calendaring /
social networking website.
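For context, the per-table tuning being discussed looks like this (a sketch; the table and column names are made up, and WITH (fillfactor = ...) is the 8.2+ storage-parameter syntax):

```sql
-- Leave 10% of each heap page free so HOT updates on non-indexed
-- columns can stay on the same page (hypothetical table).
CREATE TABLE page_views (
    id      bigint PRIMARY KEY,
    user_id integer,
    hits    integer
) WITH (fillfactor = 90);

-- Or adjust an existing table; only newly written pages are affected
-- until the table is rewritten (e.g. by CLUSTER or VACUUM FULL).
ALTER TABLE page_views SET (fillfactor = 90);
```

This is exactly the clause that would have to be tacked onto each of the hundred-odd CREATE TABLE statements absent a default knob.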
The tables in this db fall into two categories: small lookup tables, like a
list of states, which are seldom updated; and fairly large tables with lots
of data that are updated constantly. There are literally several hundred
medium to large tables that are updated constantly, and only a dozen or so
lookup tables, which are small. Now, if I had a default fillfactor of 90%,
I doubt you could detect a performance slowdown on the smaller tables, but
I'm quite sure we'd see a difference on the large, heavily updated tables.

Now, I've got about 20 GB of data to migrate from 8.1 to 8.3, and I'll be
dumping the schema and data separately for this one. So I can go in and
edit every single CREATE TABLE and CREATE INDEX in the schema to set a
fillfactor. It would be so much easier to group my tables by type and set a
default fillfactor. It's not like there isn't already a default fillfactor;
there IS. It's just hard coded into pgsql. I can't see how having a knob
exposed to change that would be a particularly bad thing. It can't be more
of a foot gun than work_mem.
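Lacking a default-fillfactor GUC, one workaround after the restore is to generate the ALTER statements from the catalog rather than hand-editing hundreds of CREATE TABLE statements in the dump. A sketch, assuming the heavily updated tables all live in the public schema (adjust the filter to taste):

```sql
-- Emit one ALTER TABLE per user table; feed the output back into psql,
-- e.g.  psql -At -f gen_fillfactor.sql mydb | psql mydb
SELECT 'ALTER TABLE '
       || quote_ident(schemaname) || '.' || quote_ident(tablename)
       || ' SET (fillfactor = 90);'
FROM pg_tables
WHERE schemaname = 'public';
```

Since ALTER TABLE ... SET (fillfactor) only affects pages written afterward, running this before the data load (or rewriting the tables afterward) is what actually leaves the free space on each page.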