This is Postgres 9.4.4. I am troubleshooting occasional (every 3-4 hours) slowness with UPDATEs on a table that has a GIN index on a JSONB column. During these episodes, UPDATEs that normally take < 1 second take 2-4 minutes, and they all finish simultaneously, as if they were all blocked on something and then released at once.
My team and I have done the usual checks of the system itself and its underlying infrastructure, and we don't think the problem is there. It seems to be localized to this one table.
We're reading about GIN fast updates and noted that:
> As of PostgreSQL 8.4, GIN is capable of postponing much of this work by inserting new tuples into a temporary, unsorted list of pending entries. When the table is vacuumed, or if the pending list becomes too large (larger than work_mem), the entries are moved to the main GIN data structure using the same bulk insert techniques used during initial index creation. This greatly improves GIN index update speed, even counting the additional vacuum overhead. Moreover the overhead work can be done by a background process instead of in foreground query processing.
>
> The main disadvantage of this approach is that searches must scan the list of pending entries in addition to searching the regular index, and so a large list of pending entries will slow searches significantly. Another disadvantage is that, while most updates are fast, an update that causes the pending list to become "too large" will incur an immediate cleanup cycle and thus be much slower than other updates.
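(For what it's worth, fastupdate is on by default for GIN indexes, and on 9.4 the pending-list threshold mentioned above is work_mem itself, so as a sanity check we're looking at the index's reloptions and our work_mem setting. The index name below is a placeholder, not our real one.)

```sql
-- Check whether fastupdate has been explicitly set on the index
-- (empty/NULL reloptions means the default, which is fastupdate=on)
SELECT relname, reloptions
FROM pg_class
WHERE relname = 'my_table_jsonb_gin_idx';  -- hypothetical index name

-- On 9.4 the pending-list cleanup threshold is work_mem
SHOW work_mem;
```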
I am wondering if the UPDATEs are hitting some blockage on the table due to this cleanup cycle. Is this a possibility, and if so, how can I investigate it? What queries can I run, or stats can I check, to see whether the pending list is approaching work_mem or we're hitting this cleanup cycle? Is there any logging I can enable to see when these cleanups are occurring?
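One thing we're considering, assuming the pgstattuple contrib extension can be installed on this instance, is sampling the pending-list size with pgstatginindex() during one of the slow episodes (index name below is hypothetical):

```sql
-- pgstatginindex() is provided by the pgstattuple contrib extension
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Pending-list size for the GIN index, to compare against work_mem
-- (8192 = default block size)
SELECT pending_pages,
       pending_tuples,
       pg_size_pretty(pending_pages::bigint * 8192) AS pending_size
FROM pgstatginindex('my_table_jsonb_gin_idx');
```

If that's a reasonable approach, we could poll it and see whether the pending list is near work_mem right before the UPDATEs stall.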
One test we are doing right now is running VACUUMs continuously on the table to see if this slowness stops happening. We should know within a few hours. If this seems to help, we'll try turning off FASTUPDATE on the index.
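If we do go that route, my understanding is it would be something like the following (table/index names are placeholders), with a VACUUM afterwards since disabling fastupdate doesn't flush entries already in the pending list:

```sql
-- Stop using the pending list for future inserts/updates on this index
ALTER INDEX my_table_jsonb_gin_idx SET (fastupdate = off);

-- Flush whatever is already sitting in the pending list
VACUUM my_table;
```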
Any other ideas or suggestions would be appreciated, thanks!
My non-default postgres settings are [here in a gist](https://gist.github.com/skehlet/fd4945f9c9ce6ed075f1).