> From your description it sounds like it's happening in the middle of streaming, right?
Correct. None of the instances in the chain experience a crash. Most of the time I see the "incorrect resource manager data checksum in record" error, but I've also seen it manifested as:
invalid magic number 8813 in log segment 000000030000AEC20000009C, offset 15335424
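Next time this happens I plan to copy the named segment from both sides of the bad hop and find exactly where the copies diverge. A rough sketch of the comparison (the paths are placeholders for wherever I stage the two copies):

    #!/usr/bin/env python3
    # Compare two copies of the same WAL segment and report where they
    # first diverge. Copy the segment from the upstream and downstream
    # nodes first (e.g. with scp); these paths are made up.
    UPSTREAM = "upstream/000000030000AEC20000009C"
    DOWNSTREAM = "downstream/000000030000AEC20000009C"

    def first_divergence(a_path, b_path, chunk=8192):
        with open(a_path, "rb") as a, open(b_path, "rb") as b:
            offset = 0
            while True:
                ca, cb = a.read(chunk), b.read(chunk)
                if ca != cb:
                    # Narrow down to the exact byte within this chunk.
                    for i, (x, y) in enumerate(zip(ca, cb)):
                        if x != y:
                            return offset + i
                    return offset + min(len(ca), len(cb))
                if not ca:  # both files ended with no difference
                    return None
                offset += chunk

    off = first_divergence(UPSTREAM, DOWNSTREAM)
    if off is None:
        print("segments are identical")
    else:
        print(f"first divergence at byte {off} "
              f"(8kB WAL page {off // 8192}, byte {off % 8192} within it)")

For what it's worth, the offset in the error above is exactly aligned to both an 8kB WAL page (15335424 = 1872 * 8192) and a 128kB boundary (117 * 131072). That may be coincidence, but it caught my eye given the ZFS record size discussion.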
> I did find this similar complaint that involves an ext4 primary and a btrfs replica:
It is interesting that my issue occurs on the first hop from ZFS to ext4. I have not seen any instances of this happening going from the ext4 primary to the first ZFS replica.
> We did have a report recently of ZFS recycling WAL files very slowly
Do you know which ZFS versions that affected? We're currently on 0.6.5.6, but could upgrade to 0.7.5 on Ubuntu 18.04.
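Incidentally, putting back-of-the-envelope numbers on that recycling theory (assuming the stock 8kB WAL block size, 16MB segments, and the ZFS default 128kB recordsize):

    #!/usr/bin/env python3
    # Rough numbers for the read-modify-write mismatch: WAL is written
    # in 8kB blocks, but a ZFS dataset at the default 128kB recordsize
    # has to read back a whole record to modify part of it.
    WAL_BLOCK = 8 * 1024            # XLOG_BLCKSZ default
    ZFS_RECORD = 128 * 1024         # ZFS default recordsize
    WAL_SEGMENT = 16 * 1024 * 1024  # default WAL segment size

    print(f"{ZFS_RECORD // WAL_BLOCK} 8kB WAL writes land in one record, "
          f"so an unaligned write can trigger a 128kB read-modify-write")
    print(f"recycling one segment re-reads {WAL_SEGMENT // ZFS_RECORD} "
          f"records that are about to be overwritten anyway")

If that's the mechanism, my understanding from general ZFS tuning advice (not anything I've verified here) is that setting recordsize=8k on the dataset holding pg_wal/pg_xlog is the usual mitigation.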
> Does your machine have ECC RAM?
Yes, all the servers have registered ECC RAM.
---
I'm considering changing the replication configuration from:
ext4 -> zfs -> ext4 -> zfs
to
ext4 -> zfs -> zfs -> ext4
If the issue only occurs downstream of ZFS, this will give me twice as many chances for it to occur, and I would expect to see some instances where only the last ext4 node is affected, and some where both the last ZFS node and the last ext4 node are affected. Not sure how much it helps, but at least I might be able to collect more data until I find a reliable way to reproduce.
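Until then, my plan for collecting data is to sweep every node's logs for the two symptoms seen so far. A minimal sketch (node names and log paths are placeholders for our actual inventory):

    #!/usr/bin/env python3
    # Scan PostgreSQL logs on each node for the two corruption symptoms
    # observed so far, printing node, line number, and the matching line.
    import re

    NODE_LOGS = {  # node name -> local copy of that node's log (made up)
        "chain1-replica1-ext4": "logs/chain1/replica1/postgresql.log",
        "chain1-replica2-zfs": "logs/chain1/replica2/postgresql.log",
    }

    PATTERNS = [
        re.compile(r"incorrect resource manager data checksum in record"),
        re.compile(r"invalid magic number [0-9A-Fa-f]+ in log segment"),
    ]

    for node, path in NODE_LOGS.items():
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if any(p.search(line) for p in PATTERNS):
                    print(f"{node}:{lineno}: {line.rstrip()}")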
---
FYI, I'd be happy to discuss paid consulting if this is an issue in your wheelhouse and that's something you're interested in.
On Thu, Jun 28, 2018 at 4:13 PM Thomas Munro <thomas.munro@xxxxxxxxxxxxxxxx> wrote:
On Fri, Jun 29, 2018 at 5:44 AM, Devin Christensen <quixoten@xxxxxxxxx> wrote:
> The pattern is the same, regardless of ubuntu or postgresql versions. I'm
> concerned this is somehow a ZFS corruption bug, because the error always
> occurs downstream of the first ZFS node and ZFS is a recent addition. I
> don't know enough about what this error means, and haven't found much
> online. When I restart the nodes affected, replication resumes normally,
> with no known side-effects that I've discovered so far, but I'm no longer
> confident that the data downstream from the primary is valid. Really not
> sure how best to start tackling this issue, and hoping to get some guidance.
> The error is infrequent. We have 11 total replication chains, and this error
> has occurred on 5 of those chains in approximately 2 months.

It's possible and sometimes expected to see that error when there has been a crash, but you didn't mention that. From your description it sounds like it's happening in the middle of streaming, right?

My first thought was that the filesystem change is surely a red herring. But... I did find this similar complaint that involves an ext4 primary and a btrfs replica:

I'm having trouble imagining how the filesystem could be triggering a problem though (unless ZoL is dramatically less stable than on other operating systems, "ZFS ate my bytes" seems like a super unlikely theory). Perhaps by being slower, it triggers a bug elsewhere? We did have a report recently of ZFS recycling WAL files very slowly (presumably because when it moves the old file to become the new file, it finishes up slurping it back into memory even though we're just going to overwrite it, and it can't see that because our writes don't line up with the ZFS record size, possibly unlike ye olde write-in-place 4k block filesystems, but that's just my guess).

Does your machine have ECC RAM?