You have multiple bad-blocks lists (an MD feature) which are already full
of sectors. Those are earlier disk errors that were recorded in the MD
headers (one list per drive).
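If you want to look at those lists directly, here is a sketch (device
names are just examples; the sysfs files and mdadm's --examine-badblocks
option should be available on reasonably recent kernels/mdadm, but check
your versions):

# one list per member device, entries are "first_sector length"
grep . /sys/block/md0/md/dev-*/bad_blocks
grep . /sys/block/md0/md/dev-*/unacknowledged_bad_blocks
# or read the on-disk bad block log of one member directly
mdadm --examine-badblocks /dev/sdb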
MD will not try to read from such sectors anymore, and during reads it
will return an error to the upper layers immediately. This happens when
the stripe does not have enough good components left to read from after
excluding the bad blocks: e.g. raid5 can tolerate at most 1 disk with
badblocks in a stripe, so with badblocks on 2 different disks in the same
stripe MD will return a read error immediately, without even trying the
drives.
That's why in dmesg you are seeing read errors from MD but not from the
component devices.
Now the question is how so many badblocks could have been recorded on your
array. It seems very unlikely that so many of your disks are in such bad
shape. This might indicate an MD bug in the badblocks code.
I am thinking of some form of erroneous propagation of bad blocks: e.g.
writing to an area where an MD badblock exists could, instead of clearing
the bad block, have propagated it to the other disks in the same stripe.
Something like that.
See if you can check that writing to a bad block clears it. It will be
difficult to compute the correct offset to write to, though; you might
want to do some trial and error with dd together with blktrace. If you
can do that, you might also want to check that it behaves correctly even
when writing something that does not align to 512b or 4k. Obviously this
test is destructive with respect to your data in that location.
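Very roughly, something like this (MD0_SECTOR is a made-up placeholder,
you have to find the right value yourself by correlating dd runs against
blktrace output; and this does destroy the data in that spot):

# terminal 1: watch what actually reaches one of the members
btrace /dev/sdb
# terminal 2: overwrite the md0 region that should map onto the badblock
dd if=/dev/zero of=/dev/md0 bs=512 seek=MD0_SECTOR count=8 oflag=direct
# then see whether the per-device lists changed
grep . /sys/block/md0/md/dev-*/bad_blocks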
Another, easier test is to try to read with dd from a component device
itself. If MD has recorded a bad block there (even if it happened a long
time ago), the direct read with dd should also hit it, return an error
and stop, because bad blocks on the disk surface do not heal by
themselves over time.
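For example (non-destructive; I am assuming the sector numbers in the
bad_blocks files can be used directly as offsets into the member device,
and FIRST_SECTOR/LENGTH are placeholders taken from one entry of the list.
If the offsets do not seem to match, blktrace will again tell you where
the reads really land):

# pick one "first_sector length" entry for that member, then read it raw
cat /sys/block/md0/md/dev-sdb/bad_blocks
dd if=/dev/sdb of=/dev/null bs=512 skip=FIRST_SECTOR count=LENGTH iflag=direct
# on a genuinely bad surface this should error out; if it reads fine,
# the badblock record probably never matched a real media error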
Another test is to read with dd from an area of md0 where you see that
only 1 disk has badblocks (this probably requires some trial and error
with blktrace, because the offsets of md0 are not equal to the offsets of
the component devices). If MD works correctly, such a read should "heal"
the badblock: recompute the data from parity on the other disks, then
write over the bad block. The MD badblock entry should disappear.
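Sketch (MD0_SECTOR is again a placeholder you have to work out with
blktrace, since md0 offsets are not the member offsets):

# read-only access to the md0 region covering the single-disk badblock
dd if=/dev/md0 of=/dev/null bs=512 skip=MD0_SECTOR count=8 iflag=direct
# the corresponding entry for that member should now be gone
cat /sys/block/md0/md/dev-sdb/bad_blocks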
The last 2 tests I described should not be destructive except in case of
MD bugs.
EW
On 02/07/2014 16:14, Pedro Teixeira wrote:
Hi Lars,
the output of those commands:
root@nas3:/# cat /sys/block/sdb/queue/physical_block_size
4096
root@nas3:/# cat /sys/block/md0/queue/physical_block_size
4096
root@nas3:/#
The strange thing here is that dmesg is not polluted with sata errors
like it usually is when a hard disk has bad sectors or some other
hardware problem. The only thing in dmesg that hints at why reading
the md volume fails comes from md itself.
Cheers
Pedro
Quoting Lars Täuber:
Hi Pedro,
maybe an issue with the logical/physical block size?
What do these commands tell you:
cat /sys/block/sdb/queue/physical_block_size
cat /sys/block/md0/queue/physical_block_size
Seagate says there are 4096 bytes/sector on these devices.
Lars
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html