Hello Andy, Roger, Pascal, All,

Thanks a lot for your suggestions. Yes, indeed there are 8 actual HDDs
in a Dell server made into a near=2 layout RAID10 array. I will try out
all the options you mentioned.

My major concern is how to benchmark this over a longer period of time.
I am not very experienced with performance testing, and hence wanted
some resources to understand how to benchmark this correctly, with good
data points to present a case to the application owners. Will a
continuous capture of sar and iostat output be enough to give us
detailed data around this?

I will try out both approaches you suggested and manually mark a drive
as failed to put the array into a degraded state. I will also read more
on dm-dust.

Thanks,
Umang

On Fri, Oct 21, 2022 at 8:59 PM Andy Smith <andy@xxxxxxxxxxxxxx> wrote:
>
> Hello,
>
> On Fri, Oct 21, 2022 at 06:51:41AM -0500, Roger Heflin wrote:
> > The original poster needs to get sar or iostat stats to see what the
> > actual IO rates are, but if they don't understand what the spinning
> > disk array can do fully redundant and with a disk failed, it is not
> > unlikely that the IO load is higher than can be sustained with a
> > single disk failed.
>
> Though OP is using RAID-10 not RAID-1, and with more than 2 devices
> IIRC. OP wants to check the performance and I agree they should do
> that for both the normal case and the degraded case, but what are we
> expecting *in theory*? For RAID-10 on 4 devices we wouldn't expect
> much performance hit, would we? Since a read is striped across 2
> devices and there's a mirror of each, so it'll read from the good
> half of the mirror for each read IO.
>
> --
> https://bitfolk.com/ -- No-nonsense VPS hosting
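
A minimal sketch of the kind of continuous capture and failure
injection being discussed, assuming the array is /dev/md0 and the
member disk being failed is /dev/sdf (both names are placeholders, not
taken from the thread):

    # Capture extended per-device stats every 60 seconds, with timestamps,
    # appending to log files so the run can span the whole test window.
    iostat -x -t 60 >> /var/log/iostat-raid10.log &
    sar -d -p 60 >> /var/log/sar-disk-raid10.log &

    # Manually fail one member to put the RAID10 array into a degraded
    # state, then remove and re-add it later to also capture the rebuild.
    mdadm --manage /dev/md0 --fail /dev/sdf
    watch cat /proc/mdstat          # observe the degraded/rebuild state
    mdadm --manage /dev/md0 --remove /dev/sdf
    mdadm --manage /dev/md0 --add /dev/sdf

Alternatively, sar can write its samples to a binary file with -o and
the data can be post-processed later with sadf, which may be easier to
turn into graphs for the application owners.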