On 04/24/2010 09:36 PM, Janos Haar wrote:
> OK, I am doing it.
> I think I have found something interesting and unexpected:
> after 99.9% (and another 1800 minutes) the array dropped the
> dm-snapshot device!
> ...[CUT]...
> raid5:md3: read error not correctable (sector 2923767944 on dm-0).
> raid5:md3: read error not correctable (sector 2923767952 on dm-0).
> raid5:md3: read error not correctable (sector 2923767960 on dm-0).
> raid5:md3: read error not correctable (sector 2923767968 on dm-0).
> raid5:md3: read error not correctable (sector 2923767976 on dm-0).
> raid5:md3: read error not correctable (sector 2923767984 on dm-0).
> raid5:md3: read error not correctable (sector 2923767992 on dm-0).
> raid5:md3: read error not correctable (sector 2923768000 on dm-0).
> ...[CUT]...
> So, dm-0 is dropped only for a _READ_ error!
Actually no, it is being dropped for an "uncorrectable read error", which
means, AFAIK, that the read error was received, the block was recomputed
from the other disks, a rewrite of the damaged block was attempted, and
that *write* failed. So it is really being dropped for a *write* error.
Correct me if I'm wrong.
This is strange, because that rewrite should have gone to the COW
(copy-on-write) device. Are you sure you set up DM correctly? Could you
post here the command you used to create the dm-0 device?
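For reference, a snapshot mapping over a failing disk is normally created with dmsetup; the device names and chunk size below are only placeholders, not what Janos actually used:

```shell
# Hypothetical names: /dev/sdc1 is the failing origin disk,
# /dev/sdd1 is a spare partition used as the COW store.
# Snapshot table line: <start> <length> snapshot <origin> <COW dev> <P|N> <chunksize>
SIZE=$(blockdev --getsz /dev/sdc1)   # origin size in 512-byte sectors
dmsetup create cowdev \
    --table "0 $SIZE snapshot /dev/sdc1 /dev/sdd1 N 64"
# N = non-persistent COW, 64-sector (32 KiB) chunks.
# With this mapping, md's rewrites of cowdev should land in /dev/sdd1,
# never touching the failing origin.
```

If the origin was instead mapped read-only or as a plain linear target, or if the COW store filled up, the rewrite would be refused and md would see exactly a write failure.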
We might ask the DM people why it's not working. Anyway, there is one
piece of good news: the read error apparently does travel through the DM
stack.
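One way to check whether writes are reaching the COW store at all is the snapshot's status line (the name "cowdev" is again just a placeholder):

```shell
# For a snapshot target, "dmsetup status" reports COW usage as
# <allocated>/<total> sectors (recent kernels add a metadata-sectors field).
dmsetup status cowdev
# If the allocated count never grows while md is retrying writes,
# the rewrites are not being redirected into the COW device.
```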
Thanks for your work