I have been using dmraid for a few months now to access the two-disk
RAID0 on my VIA SATA RAID controller. Today I noticed I/O errors in my
syslog for sectors beyond the end of the disk, and when I investigated,
it looks like my RAID setup is broken.
The dm table that dmraid creates for the main RAID volume device looks
like this:
0 144607678 striped 2 128 8:0 0 8:16 0
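
For reference, my reading of those fields, going by the device-mapper
striped target format, is:

  0          start sector of the mapping
  144607678  length of the mapping, in 512-byte sectors
  striped    target type
  2          number of stripes (disks)
  128        chunk size in sectors (64 KiB)
  8:0 0      first disk (major:minor of sda), starting at offset 0
  8:16 0     second disk (sdb), starting at offset 0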
Now what I noticed is that 144607678 is NOT an even multiple of the
stripe width of 256 sectors (128-sector chunk size x 2 disks).
Shouldn't the total length of a striped mapper device always be an even
multiple of the stripe width? I think the fractional stripe at the end
of the device is the problem: there aren't a full 128 sectors left on
the disk to access before the mapping rolls over to the next disk in
the stripe.
So does this mean that my BIOS created a broken stripe set? And why do
dmraid and the kernel device mapper accept such broken values?
Wish me luck using resize_reiserfs to shrink the volume down a bit to
avoid that broken tail end.