On 13.4.2012 18:14, xlash wrote:
> I have a big table ~15M records called entities. I want to find top 5
> entities matching "hockey" in their name.

Number of rows is not a very useful metric - it might be 15 MB or 15 GB,
depending on the structure. We need to know at least this:

   select relpages, reltuples from pg_class where relname = 'entities';

and similarly for the index (just replace the relation name).

> QUERY PLAN
>
>  Limit  (cost=4447.28..4447.29 rows=5 width=3116) (actual time=18509.274..18509.282 rows=5 loops=1)
>    ->  Sort  (cost=4447.28..4448.41 rows=2248 width=3116) (actual time=18509.271..18509.273 rows=5 loops=1)
>          Sort Key: (ts_rank(to_tsvector('english'::regconfig, (name)::text), '''hockey'''::tsquery))
>          Sort Method: top-N heapsort  Memory: 19kB
>          ->  Bitmap Heap Scan on entities  (cost=43.31..4439.82 rows=2248 width=3116) (actual time=119.003..18491.408 rows=2533 loops=1)
>                Recheck Cond: (to_tsvector('english'::regconfig, (name)::text) @@ '''hockey'''::tsquery)
>                Filter: (NOT place)
>                ->  Bitmap Index Scan on gin_ix_entity_full_text_search_name  (cost=0.00..43.20 rows=2266 width=0) (actual time=74.093..74.093 rows=2593 loops=1)
>                      Index Cond: (to_tsvector('english'::regconfig, (name)::text) @@ '''hockey'''::tsquery)
>  Total runtime: 18509.381 ms
> (10 rows)

I recommend services like explain.depesz.com instead of posting the plans
directly (which causes wrapping etc., so the plans are barely readable).
I've posted the plan here: http://explain.depesz.com/s/Jr7

> Is it because of my boolean condition (not Place)? If so, I should add it
> to my index and I should get a very fast query? Or is it the sorting
> condition which makes it very long?

No. The step that consumes most of the time is the "bitmap heap scan" (see
the "actual time" difference; it's nicely visible in the plan at
explain.depesz.com).

Let me briefly explain how the bitmap index scan works (and how that
applies to this query). First, the index is scanned and a bitmap of pages
(tuples) that need to be read from the relation is built - this is the
"bitmap index scan" node in the plan. Then the bitmap is used to read the
pages from the relation - this is necessary to check visibility of the rows
(which is not stored in the index) and to fetch the complete rows if
needed.

If you check the plan, you'll see the first stage takes 74 ms (so it's
negligible), but scanning the relation takes 18491 ms (about 99.9% of the
runtime). The sorting clearly is not the culprit, as it takes ~17 ms.

And the 'NOT place' condition does not make much difference - the bitmap
index scan returns 2593 rows and the recheck produces 2533 rows, so ~2% of
the rows were removed (not necessarily due to the 'NOT place' condition).
So it's highly unlikely that adding this column to the index will improve
the performance.

So my guess is that you're I/O bound - reading the table causes so much
random I/O that the machine can't handle it. You can verify this by
watching "iostat -x" while running the query. My bet is the device will be
~100% utilized.

> Thanks for helping me understand the query plan and how to fix my 25
> second query!
>
> Here are my DB parameters. It is an online DB hosted by Heroku, on Amazon
> services. They describe it as having 1.7GB of RAM, 1 processing unit and
> a DB of max 1TB.

Well, AWS instances are known to have I/O issues (which is expected when
running a database on virtualized devices).
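
To get an idea whether the table can fit into those 1.7 GB of RAM at all,
you can check the on-disk sizes directly - something like this should do
(just a generic check using the standard pg_relation_size/pg_size_pretty
functions; the relation and index names are taken from the plan above):

   select pg_size_pretty(pg_relation_size('entities'));
   select pg_size_pretty(pg_relation_size('gin_ix_entity_full_text_search_name'));

If the table is much larger than the RAM, most of the bitmap heap scan
turns into random I/O against the disk, which would match the timings
above.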
Not sure what to recommend here - either use an instance with more RAM (so
that the whole 'entities' relation is cached), or split the table somehow
so that less data needs to be read from the disk - e.g. vertical
partitioning, i.e. splitting the table vertically into two parts, as
sketched below.
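
A rough sketch of what I mean (column names other than 'name' and 'place'
are made up - I'm assuming a primary key called 'id' here, and you'd also
have to keep the two tables in sync somehow):

   -- narrow table with just the columns needed by the search,
   -- so the bitmap heap scan touches far fewer pages
   CREATE TABLE entity_names AS SELECT id, name, place FROM entities;
   ALTER TABLE entity_names ADD PRIMARY KEY (id);
   CREATE INDEX entity_names_fts_idx ON entity_names
       USING gin (to_tsvector('english', name));

   -- search the narrow table first, then fetch the wide rows by PK
   SELECT e.*
     FROM entities e
     JOIN (SELECT id
             FROM entity_names
            WHERE to_tsvector('english', name) @@ to_tsquery('english', 'hockey')
              AND NOT place
            ORDER BY ts_rank(to_tsvector('english', name),
                             to_tsquery('english', 'hockey')) DESC
            LIMIT 5) t ON (t.id = e.id);

The point is that the scan then runs against a relation that's a tiny
fraction of the original size, so it's far more likely to stay in the
cache.

Tomas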