Re: Possible data corruption with 14.2.3 and 14.2.4

Hi Igor,

On 15/11/2019 14:22, Igor Fedotov wrote:

By having SSD DB/WAL, do you mean both a standalone DB and(!!) a standalone WAL device/partition?

No, 1x combined DB/WAL partition on an SSD and 1x data partition on an HDD per OSD. I.e. created like:

ceph-deploy osd create --data /dev/sda --block-db ssd0/ceph-db-disk0
ceph-deploy osd create --data /dev/sdb --block-db ssd0/ceph-db-disk1
ceph-deploy osd create --data /dev/sdc --block-db ssd0/ceph-db-disk2

--block-wal wasn't used.
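(For anyone else trying to work out which layout they have, this is roughly how I confirmed mine - the exact metadata key names may vary a little between releases:)

ceph osd metadata 0 | grep -E 'bluefs_dedicated|partition_path'
# For my layout this shows something like "bluefs_dedicated_db": "1"
# with no separate WAL partition path.
# Alternatively, run this on the OSD host to see which LVs/partitions
# each OSD is using for data/db/wal:
ceph-volume lvm list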

If so then BlueFS might eventually overwrite some data at your DB volume with BlueFS log content, which most probably makes the OSD crash and become unable to restart one day. This is quite a random and not very frequent event, which is to some degree dependent on cluster load. And the period between the actual data corruption and any evidence of it is non-zero most of the time - we tend to see it mostly when RocksDB is performing compaction.

So this, if I've understood you correctly, is for those with 3 separate (DB + WAL + Data) devices per OSD. Not my setup.

Another OSD configuration which might suffer from the issue is a main device + WAL device.

The failure probability is much lower for the main + DB layout. It requires an almost full DB to have any chance of appearing.

This sounds like my setup: 2 separate (DB/WAL combined + Data) devices per OSD.

Main-only device configurations aren't under threat as far as I can tell.

And that covers all-in-one device setups, which aren't at risk. Understood.

While we're waiting for 14.2.5 to be released, what should 14.2.3/4 users with an at-risk setup do in the meantime, if anything?

- Check how full their DB devices are? (I've sketched my guess at how to check this below.)
- Avoid adding new data/load to the cluster?
- Would deep scrubbing detect any undiscovered corruption?
- Get backups ready to restore? I mean, how bad is this?
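For the first point, my assumption is that the BlueFS perf counters on each OSD's admin socket are the thing to look at (counter names as I understand them under Nautilus - please correct me if there's a better way):

ceph daemon osd.0 perf dump bluefs | grep -E '(db|slow)_(total|used)_bytes'
# db_used_bytes vs db_total_bytes gives the DB partition usage;
# a non-zero slow_used_bytes means RocksDB has already spilled onto the HDD.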

Thanks,
Simon.