Well, what does random_page_cost do internally? I don't think I'd expect postgres to be able to *do* anything in particular, any more than I would expect it to "do" something about slow disk I/O or having limited cache. But it might be useful to EXPLAIN ANALYZE in estimating the cost of retrieving such data.

Admittedly, this is not as clear-cut as preferring a sequential scan over indexed reads when there are either very few rows or a huge number, but it strikes me as useful to me, the DBA, to have this factoid thrust in front of me when considering why a given query is slower than I might like. Perhaps an added time based on this factor and the random_page_cost value, since lots of TOAST data and a high access time would suggest to my (ignorant!) mind that retrieval would be slower, especially over large data sets.

Forgive my ignorance ... obviously I am but a humble user. grin.

G

-----Original Message-----
From: Tom Lane [mailto:tgl@xxxxxxxxxxxxx]
Sent: Wed 12/14/2005 9:36 PM
To: Gregory S. Williamson
Cc: pgsql-performance@xxxxxxxxxxxxxx; PostGIS Users Discussion
Subject: Re: [PERFORM] [postgis-users] Is my query planner failing me, or vice versa?

"Gregory S. Williamson" <gsw@xxxxxxxxxxxxxxxx> writes:
> Forgive the cross-posting, but I found myself wondering if there might
> not be some future way of telling the planner that a given table
> (column?) has a high likelihood of being TOASTed.

What would you expect the planner to do with the information, exactly?
We could certainly cause ANALYZE to record some estimate of this, but
I'm not too clear on what happens after that...

			regards, tom lane
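
(Aside: pending any planner support, the "how much of this table is TOASTed" factoid can be dug out of the catalogs by hand. A sketch, assuming PostgreSQL 8.1+ for pg_relation_size(); 'mytable' is a placeholder table name:

```sql
-- Size of a table's TOAST relation, if it has one.
-- pg_class.reltoastrelid is 0 when no TOAST table exists.
SELECT c.relname,
       pg_relation_size(c.oid)          AS heap_bytes,
       pg_relation_size(c.reltoastrelid) AS toast_bytes
FROM pg_class c
WHERE c.relname = 'mytable'
  AND c.reltoastrelid <> 0;
```

Comparing toast_bytes to heap_bytes gives a rough idea of whether a query touching the wide columns will be doing a lot of extra, effectively random, page fetches.)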