Sequential writing to degraded RAID6 causing a lot of reading

Hello,

I am running two RAID6 arrays in degraded mode, one with the
left-symmetric layout and one with left-symmetric-6. I am seeing
(potentially strange) behaviour that degrades the performance of
both arrays.

When I write a lot of data sequentially to a healthy RAID5 array, it
also internally reads a small amount of data. The arrays already hold
data, so I only write through the filesystem, and I am not sure what
causes the reads: perhaps writing through the filesystem skips blocks
so that whole stripes are not written, or timing sometimes prevents a
whole stripe from being written at once. Either way, the ratio of
reads to writes is small and performance is almost OK.
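One way I quantify that ratio is to compare the per-member read/write
sector counters in /proc/diskstats before and after a test write. A
minimal sketch; the member device names (sdb/sdc/sdd) and the inlined
sample counter lines are hypothetical, so on a real system feed it
/proc/diskstats itself:

```shell
# Fields in /proc/diskstats: $3 = device name, $6 = sectors read,
# $10 = sectors written. The here-doc below is fabricated sample data
# standing in for real /proc/diskstats output.
awk '$3 ~ /^sd[bcd]$/ { printf "%s read_sectors=%d write_sectors=%d\n", $3, $6, $10 }' <<'EOF'
   8       16 sdb 1200 0 9600 300 5000 0 40000 900 0 600 1200
   8       32 sdc 1150 0 9200 280 5100 0 40800 910 0 610 1190
   8       48 sdd 1180 0 9440 290 4900 0 39200 880 0 590 1170
EOF
```

Running it twice around a large sequential write and diffing the
numbers gives the internal read-to-write ratio per member.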

I can't test this with a fully healthy RAID6 array, because I don't
have one at the moment.

But when I write sequentially to a RAID6 array that is missing one
drive (again through the filesystem), I get almost exactly as many
internal reads as writes. Is this by design, and is it expected
behaviour? Why does it behave like this? It should behave just like a
healthy RAID5: it should detect full-stripe writes and read (almost)
nothing.
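My expectation comes from the usual parity-update accounting. A toy
model of it (my own sketch, not md's actual rcw/rmw code path):

```python
def parity_update_reads(chunks_written, data_disks, parity_disks=1):
    """Toy model of how many chunk reads a parity update needs.

    read-modify-write: read the old data chunks being overwritten plus
    the old parity; reconstruct-write: read the untouched data chunks
    and recompute parity from scratch. A full-stripe write needs neither,
    since parity follows from the new data alone.
    """
    if chunks_written == data_disks:
        return 0                              # full-stripe write: no reads
    rmw = chunks_written + parity_disks       # old data + old parity
    rcw = data_disks - chunks_written         # untouched data chunks
    return min(rmw, rcw)

# A full-stripe write on an 8+2 RAID6 should read nothing:
print(parity_update_reads(8, data_disks=8, parity_disks=2))   # 0
# A single-chunk update reads either 3 chunks (RMW) or 7 (RCW):
print(parity_update_reads(1, data_disks=8, parity_disks=2))   # 3
```

By this accounting, sequential writes that cover whole stripes should
produce near-zero reads, which is why the 1:1 read/write ratio on the
degraded RAID6 surprises me.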

Thanks.

Patrik
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

