On Tue, Jun 14, 2011 at 6:48 AM, Hanno Schlichting <hanno@xxxxxxxxxxx> wrote:
> On Mon, Jun 13, 2011 at 3:27 AM, Merlin Moncure <mmoncure@xxxxxxxxx> wrote:
>> I would not even consider tweaking the internal block sizes until
>> you've determined there is a problem you expect you might solve by
>> doing so.
>
> It's not a problem as such, but managing data chunks of 2000 bytes +
> the hundreds of rows per object in the large_object table for 10mb
> objects seems like a lot of wasted overhead, especially if the
> underlying filesystem manages 32kb or 64kb blocks. My impression of
> those values was that they are a bit antiquated or are tuned for
> storing small variable character objects, but not anything I'd call
> "binary large objects" these days.

That may very well be the case, and 10MB is approaching the upper
limit of what it's sane to store inside the database. Still, if
you're going to the trouble of adjusting the setting and recompiling,
I'd definitely benchmark the changes and post your findings here.
Point being: all else being equal, it's always better to run stock
Postgres if you can manage it.
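
For what it's worth, the 2k figure is LOBLKSIZE, which a stock build
defines as BLCKSZ/4 in src/include/storage/large_object.h (8192/4 =
2048 with default pages). If you want to measure how much chunk
overhead you're actually carrying before recompiling anything,
something like this should do it (untested sketch; the 2048
multiplier assumes a stock LOBLKSIZE, and on 9.0+ you may need to be
superuser to read pg_largeobject):

    -- chunks per large object, plus the approximate storage they
    -- represent (the 2048 multiplier assumes a stock LOBLKSIZE)
    SELECT loid,
           count(*) AS chunks,
           pg_size_pretty(count(*) * 2048) AS approx_size
      FROM pg_catalog.pg_largeobject
     GROUP BY loid
     ORDER BY chunks DESC;

merlin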