On Mon, Jun 20, 2011 at 10:53 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> Sushant Sinha <sushant354@xxxxxxxxx> writes:
>>> I guess you could poke the planner towards the bitmap scan by lowering
>>> the random_page_cost (the default value is 4, I'd say lowering it to 2
>>> should do the trick).
>
>> The numbers that I gave were after setting random_page_cost = 1.0. After
>> this I don't know what to do.
>
> I think part of the issue here is that the @@ operator is expensive,
> and so evaluating it once per row is expensive, but the pg_proc.procost
> setting for it doesn't adequately reflect that. You could experiment
> with tweaking that setting ...

In something I was testing a couple of months ago, by far the biggest expense of the @@ operator in a full table scan was crawling through the entire TOAST table (and not necessarily in sequential order) in order to fetch the tsvector data on which to apply the operator.

So increasing the cost of @@ may well be the best immediate solution, but should the cost-estimation code be changed to explicitly take the page reads associated with TOAST into account, so that the cost of @@ itself can remain a CPU-based estimate rather than an amalgam of CPU and I/O?

Cheers,

Jeff

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
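
P.S. For anyone wanting to try Tom's suggestion: a minimal sketch of tweaking the procost of the function behind @@. The function name `ts_match_vq` is what a stock catalog maps `tsvector @@ tsquery` to; the COST value 100 is purely an illustrative guess to experiment with, not a recommendation. Verify the mapping in pg_operator first.

```sql
-- Confirm which function implements your @@ operator
SELECT oprcode, oprleft::regtype, oprright::regtype
FROM pg_operator
WHERE oprname = '@@';

-- Raise the planner's cost estimate for it (units of cpu_operator_cost;
-- the default procost is 1, so 100 makes per-row evaluation look much
-- more expensive and nudges the planner toward the index/bitmap plan)
ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 100;
```

EXPLAIN the query again afterwards to see whether the plan flips; the setting can be reverted with `ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 1;`.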