Everything else except the one postmaster process hums along just fine, i.e.
nothing else appears to take much in the way of system resources at all.
Autovacuum is set with the 8.2.x default settings. Oh, and the data was
ANALYZE'd after it got moved to the new server. Here's the code in case we
have something especially silly going on:
CREATE OR REPLACE FUNCTION populate_page_view_indices()
  RETURNS integer AS
$BODY$
DECLARE
    v_page_view_row RECORD;
    v_cindex        INTEGER;
    v_tindex        INTEGER;
    v_visit_id      BIGINT;
    v_get           BOOLEAN;
    v_row_count     INTEGER;
BEGIN
    RAISE NOTICE 'Populating page_view indices ...';

    -- mark visits with no page views in the last hour as 'H' (ready to index)
    UPDATE visit SET status = 'H'
    FROM (SELECT visit_id
          FROM page_view p, visit v
          WHERE p.visit_id = v.id
            AND v.status = 'N'
          GROUP BY visit_id
          HAVING max(p.stamp) < now() - INTERVAL '1 hour') AS ready
    WHERE visit.id = ready.visit_id
      AND visit.status = 'N';

    v_cindex := -1;
    v_tindex := -1;
    v_visit_id := -1;

    -- walk each ready visit's page views in timestamp order, numbering them
    FOR v_page_view_row IN
        SELECT p.* FROM page_view p, visit v
        WHERE p.visit_id = v.id
          AND v.status = 'H'
        ORDER BY visit_id, p.stamp
    LOOP
        v_get := (v_page_view_row.method = 'GET');
        IF (v_visit_id != v_page_view_row.visit_id) THEN
            -- first row of a new visit: restart both counters
            v_visit_id := v_page_view_row.visit_id;
            v_tindex := 1;
            v_cindex := 1;
        ELSE
            v_tindex := v_tindex + 1;
            IF v_get THEN
                v_cindex := v_cindex + 1;
            END IF;
        END IF;

        -- tindex numbers every page view; cindex numbers only GETs (-1 otherwise)
        UPDATE page_view
        SET tindex = v_tindex,
            cindex = CASE WHEN v_get THEN v_cindex ELSE -1 END
        WHERE id = v_page_view_row.id;
    END LOOP;

    -- flag the processed visits as indexed
    UPDATE visit SET status = 'I'
    WHERE status = 'H';
    GET DIAGNOSTICS v_row_count = ROW_COUNT;

    RAISE NOTICE 'Done populating page_view indices ...';
    RETURN v_row_count;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

ALTER FUNCTION populate_page_view_indices() OWNER TO postgres;
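As an aside, for anyone who wants to poke at that function's pieces in
isolation: the heavy lifters are the driving SELECT and the per-row UPDATE,
and checking their plans on the new box would look roughly like the sketch
below (the id 12345 is just a placeholder for a real page_view id):

-- plan for the driving query of the FOR loop
EXPLAIN ANALYZE
SELECT p.* FROM page_view p, visit v
WHERE p.visit_id = v.id
  AND v.status = 'H'
ORDER BY p.visit_id, p.stamp;

-- plan for the per-row UPDATE; note that EXPLAIN ANALYZE really executes it,
-- hence the BEGIN/ROLLBACK around it
BEGIN;
EXPLAIN ANALYZE
UPDATE page_view
SET tindex = 1,
    cindex = 1
WHERE id = 12345;
ROLLBACK;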
And here's the other function that seems to tickle the problem:
CREATE OR REPLACE FUNCTION populate_page_view_clickstreams()
  RETURNS integer AS
$BODY$
DECLARE
    v_row_count INTEGER;
BEGIN
    RAISE NOTICE 'Populating page_view clickstreams ...';

    -- claim the indexed visits for clickstream generation
    UPDATE visit SET status = 'K'
    WHERE status = 'I';

    -- link consecutive page views (by tindex) within each claimed visit
    INSERT INTO tlink (id, from_id, to_id)
    SELECT nextval('hibernate_sequence'),
           f.id AS from_id,
           t.id AS to_id
    FROM page_view f, page_view t, visit v
    WHERE f.visit_id = t.visit_id
      AND f.visit_id = v.id
      AND v.status = 'K'
      AND f.tindex = t.tindex - 1
    ORDER BY f.visit_id, f.tindex;

    -- link consecutive GET page views (by cindex) within each claimed visit
    INSERT INTO clink (id, from_id, to_id)
    SELECT nextval('hibernate_sequence'),
           f.id AS from_id,
           t.id AS to_id
    FROM page_view f, page_view t, visit v
    WHERE f.visit_id = t.visit_id
      AND f.visit_id = v.id
      AND v.status = 'K'
      AND f.cindex = t.cindex - 1
    ORDER BY f.visit_id, f.cindex;

    -- flag the processed visits as done
    UPDATE visit SET status = 'L'
    WHERE status = 'K';
    GET DIAGNOSTICS v_row_count = ROW_COUNT;

    RAISE NOTICE 'Done populating page_view clickstreams ...';
    RETURN v_row_count;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

ALTER FUNCTION populate_page_view_clickstreams() OWNER TO postgres;
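Since that's the function that seems to tickle the problem, the self-join
INSERTs are probably the first thing worth EXPLAINing on the new box. A rough
sketch for the tlink one (the clink one is the same shape with cindex); plain
EXPLAIN only shows the plan, so it doesn't matter whether any visits are
currently in status 'K'. The index at the end is only a hypothetical name and
only worth adding if nothing equivalent exists already:

EXPLAIN
SELECT nextval('hibernate_sequence'),
       f.id AS from_id,
       t.id AS to_id
FROM page_view f, page_view t, visit v
WHERE f.visit_id = t.visit_id
  AND f.visit_id = v.id
  AND v.status = 'K'
  AND f.tindex = t.tindex - 1;

-- if the plan shows the page_view self-join running without an index,
-- something like this might help (hypothetical name):
CREATE INDEX page_view_visit_tindex_idx ON page_view (visit_id, tindex);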
On Fri, 16 Mar 2007, Shoaib Mir wrote:
Are the stat collector and autovacuum processes in good shape?
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
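For what it's worth, if the new box is on 8.2 then the quickest check for that
is the last_* columns 8.2 added to pg_stat_user_tables; a sketch, using the
tables from the functions above:

SELECT relname, n_tup_ins, n_tup_upd,
       last_analyze, last_autoanalyze, last_autovacuum
FROM pg_stat_user_tables
WHERE relname IN ('page_view', 'visit', 'tlink', 'clink')
ORDER BY relname;

-- if the stats collector were wedged, the n_tup_* counters would not move
-- between runs; if autovacuum/analyze never fire, the last_* columns stay NULL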
On 3/16/07, Jeff Frost <jeff@xxxxxxxxxxxxxxxxxxxxxx> wrote:
I've got a client that has a function in a db which had been humming along
quite nicely on 2xOpteron 275, PG 8.1.5, 8GB of RAM. Now suddenly many of the
functions in the DB, if called, will spike the CPU to 100%. These are
functions that used to finish in 7ms and now run for 20-40 minutes.
Interestingly, when you strace the backend, it doesn't appear to be doing too
much... here's some sample output:
select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)
semop(3932217, 0x7fbfffd150, 1) = 0
semop(3932217, 0x7fbfffd150, 1) = 0
semop(3932217, 0x7fbfffd150, 1) = 0
semop(3932217, 0x7fbfffd150, 1) = 0
semop(3932217, 0x7fbfffd150, 1) = 0
select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)
semop(3997755, 0x7fbfffd170, 1) = 0
semop(3932217, 0x7fbfffd150, 1) = 0
Any chance we've stumbled into some corner case bug? We actually moved the DB
to a different server thinking perhaps we had gotten to the limit of slow
hardware, but in fact it happens on the other server as well.
I don't see any ungranted locks in pg_locks, nor are there any other non-idle
queries at this time of the night.
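(The sort of query that shows that, for anyone following along -- just a
sketch; the second one assumes stats_command_string is on so current_query is
populated:)

-- any lock requests that are waiting rather than granted
SELECT locktype, relation::regclass AS relation, mode, pid
FROM pg_locks
WHERE NOT granted;

-- anything other than idle backends
SELECT procpid, query_start, current_query
FROM pg_stat_activity
WHERE current_query NOT LIKE '<IDLE>%';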
I'll see if I can share the function code tomorrow when people are awake again
in case we have something especially silly in there.
--
Jeff Frost, Owner <jeff@xxxxxxxxxxxxxxxxxxxxxx>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954