On 08/06/2022 04:48, Pavel wrote:
> Hi, linux-raid community.
> Today we found strange and even scary behaviour of an md-raid RAID
> based on NVMe devices.
> We ordered a new server and started the data transfer (using dd; the
> filesystems were unmounted on the source, etc. - no errors there).
Did you dd the raid device (/dev/md0 for example), or the individual
nvme devices?
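For reference, this is the kind of end-to-end verification that catches the problem either way: checksum the source device, copy it, and checksum the destination. A minimal sketch - the device names (/dev/md0) in the comments are placeholders, and the runnable part below uses ordinary files as stand-ins for block devices:

```shell
# On real hardware this would be something like:
#   sha256sum /dev/md0                          # on the source
#   dd if=/dev/md0 bs=1M conv=fsync | ...       # copy
#   sha256sum /dev/md0                          # on the destination
# Same pattern with regular files, runnable anywhere:
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"
dd if="$src" of="$dst" bs=64K conv=fsync 2>/dev/null
cmp -s "$src" "$dst" && echo "copy verified"
rm -f "$src" "$dst"
```

If the checksums had been compared immediately after the copy, the corruption would have been caught before the source was taken offline.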
> While the data was in transfer, the kernel started reporting IO errors
> on one of the NVMe devices (dmesg output below).
> But md-raid did not react to them in any way. The RAID array never went
> into any failed state, and the "clean" state was reported the whole time.
This is actually normal, correct and expected behaviour. If the raid
layer does not report a problem to dd, the data should have copied
correctly. And raid really only reports problems if it gets write failures.
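When md does kick a device out for write failures, it is visible in /proc/mdstat: the failed member is flagged "(F)" and the member count drops. A quick way to check for that - the sample output below is illustrative, not from Pavel's system:

```shell
# Illustrative /proc/mdstat contents; on a live system read the real file:
#   grep -c '(F)' /proc/mdstat
mdstat='Personalities : [raid1]
md0 : active raid1 nvme0n1p2[0] nvme1n1p2[1](F)
      976629440 blocks super 1.2 [2/1] [U_]'
# print the names of members md has marked faulty
echo "$mdstat" | grep -o '[a-z0-9]*\[[0-9]*\](F)' | sed 's/\[.*//'
```

Since Pavel saw "clean" throughout, no "(F)" flag would have appeared - consistent with the errors being read-side or with the drive silently dropping writes.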
> Based on earlier experience we trusted md-raid and thought things were
> going OK.
> After the data transfer finished, the server was turned off and the
> cables were replaced on suspicion.
> After the OS started on the new server, we found MySQL crashing.
> A thorough checksum check showed us mismatches in file contents.
> (Of course, we checksummed untouched files, not the MySQL database
> files.)
> So, data loss is possible when an NVMe device misbehaves.
> We think md-raid has to remove the failed device from the array in such
> a case. That it did not happen is wrong behaviour, so we want to inform
> the community about this finding.
> Hope this helps to make the kernel even better.
> Thanks for your work.
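The checksum comparison described above boils down to hashing the same file set on both machines and diffing the manifests. A sketch of that workflow - the paths and directory layout here are stand-ins, not Pavel's actual data:

```shell
# Build a checksum manifest on the source, verify it on the destination:
#   (source)      find /data -type f -exec sha256sum {} + > manifest.sha256
#   (destination) sha256sum -c manifest.sha256 | grep -v ': OK$'
# Demo with temporary files, runnable anywhere:
d=$(mktemp -d)
echo "alpha" > "$d/a"; echo "beta" > "$d/b"
( cd "$d" && sha256sum a b > manifest.sha256 )
echo "corrupted" > "$d/b"                 # simulate a silently lost write
( cd "$d" && sha256sum -c manifest.sha256 2>/dev/null; true )
rm -rf "$d"
```

`sha256sum -c` reports "FAILED" for each mismatching file and exits non-zero, which makes it easy to script the check into a migration.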
Unfortunately, your report is missing a lot of the detail we need to
diagnose the problem. What raid level are you using, for starters? It
sounds like there is a problem, but as Mariusz implies, it looks like a
faulty NVMe device. And if that device is lying to Linux, as appears
likely (my guess is that raid is trying to fix the data, and the drive
is just losing the writes), then there is precious little we can do
about it.
Cheers,
Wol