On 10 August 2011, 16:26, Grzegorz Blinowski wrote:
> Now, the query above takes about 60sec to execute; exactly: 70s for the
> first run and 60s for the next runs. In my opinion this is too long: It
> should take 35 s to read the whole table into RAM (assuming 100 MB/s
> transfers - half the HDD benchmarked speed). With 12 GB of RAM the whole
> table should be easily buffered on the operating system level. The regexp

And is it really in the page cache? I'm not an expert in this field, but I'd guess it isn't. Check whether it really gets the data from the cache, using iostat or something like that. You can also use fincore to see what's actually cached (the P.S. shows how to find the table's data files); it's available here:

http://code.google.com/p/linux-ftools/

> Some performance params from postgresql.conf:
> max_connections = 16
> shared_buffers = 24MB

Why just 24MB? Have you tried with more memory here, e.g. 256MB or 512MB? I'm not suggesting the whole table should fit there (a seq scan uses a small ring buffer anyway), but 24MB is just the bare minimum needed to start the DB.

> Database is vacuumed.

Just vacuumed, or compacted? A plain VACUUM only marks dead tuples as reusable space; it does not compact the table. So if you've done a lot of changes and then just run VACUUM, the table may still occupy a lot of space on disk (the P.S. has a quick bloat check).

How did you determine that the table size is 3.5GB? Is that the size of the raw data, did you use pg_relation_size, or something else?

Tomas
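
P.S. A few quick sketches to make the checks above concrete. They all assume a hypothetical table named "docs" - substitute your own names.

To point fincore at the right files, ask PostgreSQL where the relation lives on disk (pg_relation_filepath() is available from 9.0 on):

    -- locate the table's data file(s), relative to the data directory
    SHOW data_directory;
    SELECT pg_relation_filepath('docs');
    -- relations over 1GB are split into 1GB segment files:
    -- <file>, <file>.1, <file>.2, ... run fincore on each of them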
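
On shared_buffers, checking and raising it looks roughly like this (a new value only takes effect after editing postgresql.conf and restarting the server):

    -- current setting; 24MB is typically just what initdb falls back
    -- to when the kernel's shared memory limits are low
    SHOW shared_buffers;
    -- in postgresql.conf, then restart:
    --   shared_buffers = 512MB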
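
To tell "vacuumed" from "compacted", look at the dead-tuple statistics, and compact explicitly if needed:

    -- a large n_dead_tup relative to n_live_tup suggests bloat
    SELECT relname, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC;

    -- plain VACUUM only marks space reusable; to actually give it back:
    VACUUM FULL docs;   -- locks the table exclusively for the duration
    -- or rewrite the table ordered by an index (index name made up here):
    CLUSTER docs USING docs_pkey;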
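
And for the 3.5GB figure: pg_relation_size() measures the main heap only; indexes and TOAST (where large text values end up) are counted separately:

    -- heap alone vs. heap + indexes + TOAST
    SELECT pg_size_pretty(pg_relation_size('docs'))       AS heap,
           pg_size_pretty(pg_total_relation_size('docs')) AS total;

If the text columns are mostly TOASTed, the amount of data a seq scan with a regexp actually has to read can differ a lot from the heap size alone.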