On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
> On 8 September 2017 at 11:00, Valeri Galtsev <galtsev@xxxxxxxxxxxxxxxxx>
> wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.roth@xxxxxxxxx wrote:
>>>> hw wrote:
>>>>> Mark Haney wrote:
>>>> <snip>
>>>>>> BTRFS isn't going to impact I/O any more significantly than, say,
>>>>>> XFS.
>>>>>
>>>>> But mdadm does, and the impact is severe. I know there are people
>>>>> saying otherwise, but I've seen the impact myself, and I definitely
>>>>> don't want it on that particular server because it would likely
>>>>> interfere with other services.
>>>> <snip>
>>>> I haven't really been following this thread, but if your requirements
>>>> are that heavy, you're past the point where you need to spring some
>>>> money and buy hardware RAID cards, like LSI, er, Avago, I mean, who's
>>>> bought them more recently?
>>>
>>> Heavy requirements are not needed for the impact of md-RAID to be
>>> noticeable.
>>>
>>> Hardware RAID is already in place, but the SSDs are "extra" and, as I
>>> said, not suited to be used with hardware RAID.
>>
>> Could someone please elaborate on the statement that "SSDs are not
>> suitable for hardware RAID"?
>>
>
> It will depend on the type of SSD and the type of hardware RAID. There
> are at least four different classes of SSD drives, with different levels
> of cache, write/read performance, number of lifetime writes, etc.
> There are also multiple types of hardware RAID. A lot of hardware RAID
> controllers will try to even out disk usage in different ways. This
> means 'moving' the heavily used data from slow parts to fast parts, etc.

Wow, you learn something every day ;-) Which hardware RAID controllers do
this moving of data (manufacturer/model, please - believe it or not, I had
never heard of that ;-)? And between which "slow parts" and "fast parts"
are the data being moved?

Thanks in advance for the tutorial!

Valeri

> On an SSD all these extra writes aren't needed, so if the hardware RAID
> doesn't know about SSD technology it will wear out the SSD quickly.
> Other hardware RAID features that can cause faster failures on SSDs are
> those that do test writes all the time to see whether disks are bad, etc.
> Again, if you have gone with commodity SSDs, this will wear the drive
> out faster than expected, and boom, bad disks.
>
> That said, some hardware RAID controllers are supposedly made to work
> with SSD technology. They don't do those extra writes, and they also
> assume that the disks underneath will read/write in near-constant time,
> so queueing of data is done differently. However, that stuff costs extra
> money and is not usually shipped in standard OEM hardware.
>
>> Thanks.
>> Valeri
>>
>>> It remains to be tested how the hardware RAID performs, which may be
>>> even better than the SSDs.
>
> --
> Stephen J Smoogen.
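
For what it's worth, the wear argument above is just arithmetic: every extra
write a non-SSD-aware controller issues below the host multiplies the rate at
which the drive's rated endurance (TBW) is consumed. A minimal Python sketch
of that estimate follows; the TBW rating, host write rate, and
write-amplification factors are purely hypothetical numbers, not measurements
of any particular drive or controller:

    # Back-of-the-envelope SSD endurance estimate (hypothetical figures).

    def years_until_worn_out(tbw_rating_tb, host_writes_gb_per_day, write_amplification):
        """Years until the drive's rated TBW is consumed.

        tbw_rating_tb          -- vendor endurance rating, in terabytes written
        host_writes_gb_per_day -- writes actually issued by the host, per day
        write_amplification    -- factor for the extra writes added below the
                                  host (controller background activity, etc.)
        """
        device_writes_tb_per_day = host_writes_gb_per_day * write_amplification / 1024
        return tbw_rating_tb / device_writes_tb_per_day / 365

    # Example: a 150 TBW commodity SSD seeing 50 GB/day of host writes.
    for wa in (1.0, 2.0, 5.0):
        print("write amplification %.0fx: ~%.1f years" % (wa, years_until_worn_out(150, 50, wa)))

With those made-up figures, a controller that amplifies writes five-fold turns
a drive that would otherwise last around eight years into one that reaches its
rating in well under two.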
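
And if you want to see how much write traffic actually reaches a member
device, background activity (md resyncs, consistency checks, and so on)
included, the kernel's per-device I/O counters are enough. A small sketch,
assuming a device name of sda and a 60-second sampling window; adjust both
for your system:

    #!/usr/bin/env python3
    """Report how much data was written to a block device over an interval."""

    import sys
    import time

    def sectors_written(dev):
        # Field 7 (index 6) of /sys/block/<dev>/stat is sectors written,
        # counted in 512-byte units regardless of the device's sector size.
        with open("/sys/block/%s/stat" % dev) as f:
            return int(f.read().split()[6])

    def main(dev="sda", interval=60):
        before = sectors_written(dev)
        time.sleep(interval)
        after = sectors_written(dev)
        mib = (after - before) * 512 / 2.0**20
        print("%s: %.1f MiB written in %d s" % (dev, mib, interval))

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "sda")

Comparing a quiet period with one where the array is scrubbing or resyncing
gives a rough idea of how many of the writes hitting the SSD are not coming
from your workload at all.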
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos