Hi. I have a performance problem with this simple query:

SELECT collect_time FROM n_traffic JOIN n_logins USING (login_id)
WHERE n_logins.account_id = '1655'
ORDER BY collect_time LIMIT 1;

----------------------------------------
 Limit  (cost=0.00..2026.57 rows=1 width=8) (actual time=5828.681..5828.681 rows=0 loops=1)
   ->  Nested Loop  (cost=0.00..901822.69 rows=445 width=8) (actual time=5828.660..5828.660 rows=0 loops=1)
         ->  Index Scan using n_traffic_collect_time_login_id on n_traffic  (cost=0.00..10280.58 rows=281608 width=12) (actual time=0.026..1080.405 rows=281655 loops=1)
         ->  Index Scan using n_logins_pkey on n_logins  (cost=0.00..3.15 rows=1 width=4) (actual time=0.011..0.011 rows=0 loops=281655)
               Index Cond: ("outer".login_id = n_logins.login_id)
               Filter: (account_id = 1655)
 Total runtime: 5828.963 ms
(7 rows)

This looks very ugly. I worked around it with a hack: I replaced the

    JOIN n_logins USING (login_id) WHERE n_logins.account_id = '1655'

with a

    WHERE (login_id = '1' OR login_id = '2' ...)

clause, using a script that prepares this ...OR...OR... list and builds the query. But THIS IS A DIRTY HACK! There must be a better way. Please help and explain!
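For reference, the query my script generates looks roughly like this (the concrete login_id values are illustrative, not the real list for account 1655):

    -- Sketch of the OR-list workaround described above;
    -- the script emits one login_id = N term per login of the account.
    SELECT collect_time
    FROM n_traffic
    WHERE login_id = 1 OR login_id = 2   -- ...and so on for every login
    ORDER BY collect_time
    LIMIT 1;
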
If only one login_id corresponds to a given account_id, the query goes fast:

=# explain analyze SELECT collect_time FROM n_traffic JOIN n_logins USING (login_id)
   WHERE n_logins.account_id = '15' ORDER BY collect_time LIMIT 1;
---------------------------------------------------------
 Limit  (cost=1262.93..1262.94 rows=1 width=8) (actual time=61.884..61.886 rows=1 loops=1)
   ->  Sort  (cost=1262.93..1263.49 rows=223 width=8) (actual time=61.867..61.867 rows=1 loops=1)
         Sort Key: n_traffic.collect_time
         ->  Nested Loop  (cost=5.60..1254.23 rows=223 width=8) (actual time=3.657..36.890 rows=4536 loops=1)
               ->  Index Scan using n_logins_account_id on n_logins  (cost=0.00..3.22 rows=1 width=4) (actual time=0.032..0.049 rows=1 loops=1)
                     Index Cond: (account_id = 15)
               ->  Bitmap Heap Scan on n_traffic  (cost=5.60..1241.72 rows=743 width=12) (actual time=3.601..19.012 rows=4536 loops=1)
                     Recheck Cond: (n_traffic.login_id = "outer".login_id)
                     ->  Bitmap Index Scan on n_traffic_login_id  (cost=0.00..5.60 rows=743 width=0) (actual time=3.129..3.129 rows=4536 loops=1)
                           Index Cond: (n_traffic.login_id = "outer".login_id)
 Total runtime: 63.697 ms
(11 rows)

Tables:

=# \d n_traffic
                       Table "public.n_traffic"
    Column    |            Type             |          Modifiers
--------------+-----------------------------+------------------------------
 login_id     | integer                     | not null
 traftype_id  | integer                     | not null
 collect_time | timestamp without time zone | not null default now()
 bytes_in     | bigint                      | not null default (0)::bigint
 bytes_out    | bigint                      | not null default (0)::bigint
Indexes:
    "n_traffic_login_id_key" UNIQUE, btree (login_id, traftype_id, collect_time)
    "n_traffic_collect_time" btree (collect_time)
    "n_traffic_collect_time_login_id" btree (collect_time, login_id)
    "n_traffic_login_id" btree (login_id)
    "n_traffic_login_id_collect_time" btree (login_id, collect_time)
Foreign-key constraints:
    "n_traffic_login_id_fkey" FOREIGN KEY (login_id) REFERENCES n_logins(login_id) ON UPDATE CASCADE
    "n_traffic_traftype_id_fkey" FOREIGN KEY (traftype_id)
        REFERENCES n_traftypes(traftype_id) ON UPDATE CASCADE

=# \d n_logins
                                  Table "public.n_logins"
   Column   |          Type          |                          Modifiers
------------+------------------------+--------------------------------------------------------------
 login_id   | integer                | not null default nextval('n_logins_login_id_seq'::regclass)
 account_id | integer                | not null
 login      | character varying(255) | not null
 pwd        | character varying(128) |
Indexes:
    "n_logins_pkey" PRIMARY KEY, btree (login_id)
    "n_logins_login_key" UNIQUE, btree ("login")
    "n_logins_account_id" btree (account_id)
Foreign-key constraints:
    "n_logins_account_id_fkey" FOREIGN KEY (account_id) REFERENCES n_accounts(account_id)
Triggers:
    tr_after_n_logins AFTER INSERT OR DELETE OR UPDATE ON n_logins
        FOR EACH ROW EXECUTE PROCEDURE tr_f_after_n_logins()
    tr_before_n_logins BEFORE UPDATE ON n_logins
        FOR EACH ROW EXECUTE PROCEDURE tr_f_before_n_logins()

My machine is 2x PIII 1 GHz + 1 GB RAM + RAID5 on 6 SCSI disks. My settings are:

max_connections = 50
shared_buffers = 32768
temp_buffers = 57792
work_mem = 81920
maintenance_work_mem = 196608
max_fsm_pages = 262144
max_fsm_relations = 1000
wal_buffers = 64
checkpoint_segments = 4
checkpoint_timeout = 300
checkpoint_warning = 30
effective_cache_size = 6553
random_page_cost = 3
default_statistics_target = 800
log_rotation_age = 1440
log_line_prefix = '%t %u@%d '
stats_start_collector = on
stats_command_string = on
stats_block_level = on
stats_row_level = on
stats_reset_on_server_start = on

---

# ipcs -T
msginfo:
        msgmax: 16384   (max characters in a message)
        msgmni: 40      (# of message queues)
        msgmnb: 2048    (max characters in a message queue)
        msgtql: 40      (max # of messages in system)
        msgssz: 8       (size of a message segment)
        msgseg: 2048    (# of message segments in system)
shminfo:
        shmmax: 838860800       (max shared memory segment size)
        shmmin: 1               (min shared memory segment size)
        shmmni: 128             (max number of shared memory identifiers)
        shmseg: 128             (max shared memory segments per process)
        shmall: 204800          (max amount of shared memory in pages)
seminfo:
        semmni: 256     (# of semaphore identifiers)
        semmns: 2048    (# of semaphores in system)
        semmnu: 256     (# of undo structures in system)
        semmsl: 60      (max # of semaphores per id)
        semopm: 100     (max # of operations per semop call)
        semume: 10      (max # of undo entries per process)
        semusz: 100     (size in bytes of undo structure)
        semvmx: 32767   (semaphore maximum value)
        semaem: 16384   (adjust on exit max value)

---

kern.seminfo.semmni=256
kern.seminfo.semmns=2048
kern.seminfo.semmnu=256

---

postgresql has its own login class:

postgresql:\
        :openfiles-cur=768:\
        :tc=daemon:
daemon:\
        :ignorenologin:\
        :datasize=infinity:\
        :maxproc=infinity:\
        :openfiles-cur=128:\
        :stacksize-cur=8M:\
        :localcipher=blowfish,8:\
        :tc=default:

--
engineer