Re: recommended way to add ssd cache to mdraid array

On 1/15/2013 3:02 AM, Tommy Apel wrote:
> Stan: it is true what you are saying about the cache and real-life usage,
> but if you suspect a problem with the array I would suggest testing the
> array rather than the buffer system in Linux, hence the use of O_DIRECT,
> as that will determine the array performance and not the vmem.

If you really want to test only the array, you must bypass the
filesystem as well, using something like fio.  Simply bypassing the
buffer cache removes only one of many obstacles in the IO path.
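
A minimal fio sketch of the kind of test I mean, assuming the array is
/dev/md0 (a placeholder; substitute the real device) and that there is
nothing on it worth keeping, since raw writes to the device will destroy
any filesystem on it:

  # sequential write straight to the md device, bypassing both the
  # filesystem and the page cache (direct=1)
  fio --name=md-write --filename=/dev/md0 --direct=1 \
      --ioengine=libaio --iodepth=16 --rw=write --bs=1M \
      --runtime=60 --time_based --group_reporting

  # same parameters for reads, so the comparison is like for like
  fio --name=md-read --filename=/dev/md0 --direct=1 \
      --ioengine=libaio --iodepth=16 --rw=read --bs=1M \
      --runtime=60 --time_based --group_reporting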

But in Thomas' case, raw IO numbers still won't clear the fog for him,
because he has no idea what the numbers _should_ be in the first place.
 He is assuming, for lack of knowledge/experience, that write throughput
of ~1/3rd his read throughput is wrong.  It's not.  It is expected.
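
Rough arithmetic shows why.  For any random write smaller than a full
stripe, md must do a read-modify-write:

  RAID1/10 small write:  2 IOs  (one write per mirror leg)
  RAID5 small write:     4 IOs  (read old data + parity, write new data + parity)
  RAID6 small write:     6 IOs  (read old data + P + Q, write new data + P + Q)

And even on full-stripe sequential writes, two disks' worth of bandwidth
per RAID6 stripe goes to P and Q parity rather than data, on top of the
parity computation itself.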

To see this for himself, all he need do is blow the array away and
create a 6 disk RAID10, or a layered RAID0 over 3 RAID1 pairs.  When he
sees the much higher iozone write throughput with only 6 of his disks,
only 3 effective spindles vs 5, then he'll finally understand there is a
huge write performance penalty with RAID6.
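
For reference, something like this would build either layout (untested;
/dev/sd[b-g] are placeholder device names):

  # native 6-disk RAID10
  mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[b-g]

  # or the layered version: RAID0 over 3 RAID1 pairs
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd /dev/sde
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
  mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3

Then rerun the same iozone (or fio) job against each and compare.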

-- 
Stan
