I recall seeing various discussions hoping that this had finally been fixed - just wanted to report that it has now happened under postgres 10.4. It looks like this is not related to 0408e1ed599b06d9bca2927a50a4be52c9e74bb9, which is for "unexpected chunk number" (?).

Note that this is on the postgres database, which I think is where I saw it on one of our internal VMs in the past (although my memory indicates it may have affected multiple DBs). In the immediate case, this is a customer's centos6 VM running under qemu/KVM: the same configuration as our internal VM which had this issue (I just found a ticket dated 2016-10-06).

In case it helps:

- the postgres database has a few things in it, primarily imported CSV logs. On this particular server, there's actually a 150GB table with old CSV logs from a script I fixed recently to avoid saving more lines than intended (something like: for each session_id, every session_line following an error_severity!='LOG').
- I also have copies of pg_stat_bgwriter, pg_settings, and an aggregated copy of pg_buffercache here.
- nagios: some scripts loop around all DBs; some may connect directly to postgres (for example, to list DBs). However, I don't think check_postgres connects to the postgres DB.

I'll defer fixing this for a while in case someone wants me to save a copy of the relation/toast/index. From last time, I recall this just needs the right combination of REINDEX/VACUUM/ANALYZE, and the only complication was working out which DB(s) were affected.

Thanks,
Justin
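
P.S. In case it's useful, the sort of repair I mean looks roughly like the following. The relation names are only placeholders (the actual affected relation, its toast table, and the affected DB would have to be identified first):

  -- run while connected to the affected database
  REINDEX TABLE some_affected_table;        -- placeholder name
  VACUUM ANALYZE some_affected_table;       -- placeholder name
  -- and, if the toast table itself is implicated:
  REINDEX TABLE pg_toast.pg_toast_NNNNN;    -- NNNNN is a placeholder OID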