> Well, there went the evidence :-( ... but what exactly did you have
> to do to shut it down? I'm wondering whether the backends responded
> to SIGINT or SIGTERM.

Sorry :(

First, I tried issuing a kill to the pid of the vacuum process. It didn't
respond to kill, so I tried kill -9, which it did respond to. However,
Postgres became very unhappy at that point: I tried to log in to issue
another query against the pg_stat_activity table, but every command I
issued simply hung. Postgres had been started in the foreground on a
terminal in this case, so I went to that terminal and did a ctrl-c.
Postgres eventually stopped, but the terminal wouldn't respond either,
and I had to close it.

> Next time, it'd be good to confirm (with top or vmstat or similar)
> whether the backends are actually idle or are eating CPU or I/O.

I didn't notice any high load on the processor; the server seemed idle.
But I didn't look closely at the individual postgres processes.

> Also try strace'ing a few of them; the pattern of kernel calls if
> any would be revealing.
>
> The lack of deadlock reports or 't' values in pg_stat_activity.waiting
> says that you weren't blocking on heavyweight locks. It's not
> impossible that there was a deadlock at the LWLock level, though.
>
> What sort of indexes are there on this table?

Here is the DDL for the table where vacuum was trying to run:

CREATE TABLE iphost
(
  ethernetmacaddr char(17) NOT NULL,
  cpemac char(11) NOT NULL,
  ipaddr varchar(15) NOT NULL,
  regtime timestamp NOT NULL,
  leasetime timestamp,
  last_updated timestamp NOT NULL DEFAULT now(),
  CONSTRAINT pk_iphost PRIMARY KEY (ethernetmacaddr, ipaddr),
  CONSTRAINT fk_iphost_cpe FOREIGN KEY (cpemac)
      REFERENCES cpe (cpemac) MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE CASCADE
) WITHOUT OIDS;
ALTER TABLE iphost OWNER TO pslogin;

CREATE INDEX ix_iphost_cpemac ON iphost USING btree (cpemac);
CREATE INDEX ix_iphost_ipaddr ON iphost USING btree (ipaddr);

> Teodor just fixed
> an issue in GIN indexes that involved taking an unreasonable number of
> LWLocks, and if that code wasn't exposing itself to deadlock risks
> I'd be pretty surprised.
>
>     regards, tom lane

Thanks. If/when I can recreate this, I'll try to gather more info.

Dan
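
P.S. For the record, here is the sort of snapshot I plan to grab from
pg_stat_activity if this recurs (a sketch assuming the 8.x column names;
procpid and current_query were renamed in later releases):

-- capture which backends are waiting on heavyweight locks and what
-- they are running; a 't' in waiting is what Tom referred to above
SELECT procpid, usename, waiting, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;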