That depends on whether you have triggers that are doing selects. But in general you are correct, ANALYZE won't help inserts.
I do have some, actually. I have a couple of trigger functions like:
CREATE OR REPLACE FUNCTION locations_quiet_unique_violation()
  RETURNS trigger AS
$BODY$
BEGIN
  IF EXISTS (SELECT 1 FROM public.locations WHERE geohash = NEW.geohash) THEN
    RETURN NULL;  -- a row with this geohash already exists; skip the insert silently
  ELSE
    RETURN NEW;   -- no duplicate; let the insert proceed
  END IF;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
that are triggered thusly:
CREATE TRIGGER locations_check_unique_violation
  BEFORE INSERT ON locations
  FOR EACH ROW
  EXECUTE PROCEDURE locations_quiet_unique_violation();
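For illustration, this is the effect in practice (the geohash values here are made up):

INSERT INTO locations (geohash) VALUES ('9q8yy');  -- no existing row: inserted normally
INSERT INTO locations (geohash) VALUES ('9q8yy');  -- trigger returns NULL: insert silently skipped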
I left auto-vacuum enabled for those tables.
Raising checkpoint_segments can help insert speed; what do you have that set to?
40. Checking http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server it looks like setting that as high as 256 would not necessarily be unreasonable. What do you think?
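For reference, that would be a one-line change in postgresql.conf (256 is just the figure discussed above, not a recommendation):

# postgresql.conf (illustrative)
checkpoint_segments = 256   # each WAL segment is 16 MB; more segments mean fewer forced checkpoints during bulk loads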
Also, how you insert can make things faster too (plain INSERTs vs. prepared statements vs. COPY).
I'm doing all of this with INSERT. Is COPY that much faster? I don't know anything about prepared statements.
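Roughly, the three options look like this (table and column names borrowed from the trigger above; the file path is made up):

-- One statement per row: each INSERT pays its own parse/plan/round-trip overhead.
INSERT INTO locations (geohash) VALUES ('9q8yy');

-- Prepared statement: parse and plan once, then execute many times.
PREPARE ins (text) AS INSERT INTO locations (geohash) VALUES ($1);
EXECUTE ins('9q8yz');
EXECUTE ins('9q8z0');

-- COPY: streams many rows in a single command; usually the fastest bulk path.
COPY locations (geohash) FROM '/path/to/geohashes.csv' WITH CSV;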
Also, too many indexes on a table can slow things down, since every index has to be updated on each insert.
Yeah, got that. I removed a bunch. I'd rather not remove what's left unless I have to.
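One way to judge which of the remaining indexes earn their keep is to check their sizes and scan counts in the statistics views (table name assumed from the trigger above):

SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size,
       idx_scan  -- number of index scans since statistics were last reset
FROM pg_stat_user_indexes
WHERE relname = 'locations';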
Your I/O layer needs to be fast too. Have you watched vmstat and iostat while the inserts are running?
I don't know if I have access to vmstat and iostat. Heroku is hosting this for me on AWS.
Have you read up on synchronous_commit?
Only a tiny bit. A couple of people suggested disabling it since my database is being hosted on AWS, so I did that. It seems a bit risky, but perhaps worth it.
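If turning it off globally feels too risky, it can also be disabled just for the loading session or a single transaction, which limits the exposure (a sketch; with it off, a crash can lose the last few commits, but it cannot corrupt the database):

-- Per session:
SET synchronous_commit = off;

-- Or per transaction only:
BEGIN;
SET LOCAL synchronous_commit = off;
-- ... bulk inserts ...
COMMIT;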