On 08.06.2022 23:52, Wol wrote:
> On 08/06/2022 04:48, Pavel wrote:
>>> Did you dd the raid device (/dev/md0 for example), or the individual
>>> nvme devices?
>> There was LVM on top of /dev/md0, and dd was copying the data of the
>> LVM volumes.
>> While the data was being transferred, the kernel started reporting I/O
>> errors on one of the NVMe devices (dmesg output below).
>> But md-raid did not react to them in any way. The RAID array never went
>> into a failed state, and the "clean" state was reported the whole time.
> This is actually normal, correct and expected behaviour. If the raid
> layer does not report a problem to dd, the data should have copied
> correctly. And raid really only reports problems if it gets write
> failures.
Yes, but the data was not copied correctly.
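
As a minimal sketch of how such a mismatch can be checked (device paths
here are illustrative, not my real layout):

  # Checksum the source LV and the destination device; differing sums
  # mean the copy is not byte-identical.
  sha256sum /dev/vg0/data /dev/backup/data-copy

  # cmp reports the offset of the first differing byte.
  cmp /dev/vg0/data /dev/backup/data-copy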
> Unfortunately, you're missing a lot of detail to help us diagnose the
> problem. What raid level are you using, for starters. It sounds like
> there is a problem, but as Mariusz implies, it looks like a faulty
> NVMe device. And if that device is lying to linux, as appears likely
> (my guess is that raid is trying to fix the data, and the drive is
> just losing the writes),
Feel free to ask. Raid level: RAID 1, built over partitions on two NVMe
devices.
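
For context, the layout was created along these lines (a sketch;
partition names are illustrative, not my exact ones):

  # RAID 1 mirror over one partition from each NVMe device
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/nvme0n1p2 /dev/nvme1n1p2

  # LVM on top of the mirror
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0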
Yes, the drive is "just" losing the writes. But there is nothing "to fix"
at the RAID level.
From my point of view as a user, RAID should detect the loss and take
appropriate action (mark the device as failed).
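
That is, I would expect md to do automatically what can be done by hand
today (member name is illustrative):

  # Mark the misbehaving member as faulty, then remove it from the array
  mdadm /dev/md0 --fail /dev/nvme0n1p2
  mdadm /dev/md0 --remove /dev/nvme0n1p2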
I don't know whether the NVMe layer lies to the kernel or not, but I
clearly see
"I/O error, dev nvme0n1, sector 1297536456 op 0x1:(WRITE) flags 0x0
phys_seg 1 prio class 0"
messages, and I expect they clearly indicate a write failure.
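
Meanwhile the array still looks healthy from userspace; a scrub at least
makes the divergence visible (a sketch using the standard md sysfs
interface):

  cat /proc/mdstat                    # still shows both members up, [UU]
  mdadm --detail /dev/md0             # State : clean

  # Trigger a scrub and count blocks that differ between the mirrors
  echo check > /sys/block/md0/md/sync_action
  cat /sys/block/md0/md/mismatch_cnt  # nonzero means the copies diverged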
> then there is precious little we can do about it.
As a kernel user, I did all I could do: posted a report here.
As kernel developers, you can do a bit more than users can.
Thanks for your answers.
Regards,
Pavel.