Re: ssacli start rebuild?





> On Nov 14, 2020, at 8:45 PM, hw <hw@xxxxxxxx> wrote:
> 
> On Sat, 2020-11-14 at 18:55 +0100, Simon Matter wrote:
>>> On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
>>>> On Nov 11, 2020, at 2:01 PM, hw <hw@xxxxxxxx> wrote:
>>>>> I have yet to see software RAID that doesn't kill the performance.
>>>> 
>>>> When was the last time you tried it?
>>> 
>>> I'm currently using it, and the performance sucks.  Perhaps it's
>>> not the software itself or the CPU but the on-board controllers
>>> or other components being incapable of handling multiple disks in
>>> a software RAID.  That's something I can't verify.
>>> 
>>>> Why would you expect that a modern 8-core Intel CPU would impede I/O in
>>>> any measurable way as compared to the outdated single-core 32-bit RISC
>>>> CPU typically found on hardware RAID cards?  These are the same CPUs,
>>>> mind, that regularly crunch through TLS 1.3 on line-rate fiber Ethernet
>>>> links, a much tougher task than mediating spinning disk I/O.
>>> 
>>> It doesn't matter what I expect.
>>> 
>>>>> And where
>>>>> do you get cost-efficient cards that can do JBOD?
>>>> 
>>>> $69, 8 SATA/SAS ports: https://www.newegg.com/p/0ZK-08UH-0GWZ1
>>> 
>>> That says it's for HP.  So will you still get firmware updates once
>>> the warranty is expired?  Does it exclusively work with HP hardware?
>>> 
>>> And are these good?
>>> 
>>>> Search for “LSI JBOD” for tons more options.  You may have to fiddle
>>>> with the firmware to get it to stop trying to do clever RAID stuff,
>>>> which lets you do smart RAID stuff like ZFS instead.
>>>> 
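(As an aside, once such a card is flashed to IT/JBOD mode and simply presents the raw disks, putting ZFS on top is a one-liner. A rough sketch, with hypothetical device names:

  # create a two-way mirrored pool named "tank" from two raw disks (names are examples)
  zpool create tank mirror /dev/sda /dev/sdb
  # check pool layout and health
  zpool status tank

By-id paths are usually preferred over sda/sdb so the pool survives device renumbering.)
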
>>>>> What has HP been thinking?
>>>> 
>>>> That the hardware vs software RAID argument is over in 2020.
>>>> 
>>> 
>>> Do you have a reference for that, like a final statement from HP?
>>> Did they stop developing RAID controllers, or do they ship their
>>> servers now without them and tell customers to use btrfs or mdraid?
>> 
>> HPE and the other large vendors won't tell you directly because they love
>> to sell you their outdated SAS/SATA RAID stuff. They were quite slow to
>> introduce NVMe storage, be it as PCIe cards or in U.2 format, but it's also
>> clear to them that NVMe is the future and that it's used with software
>> redundancy provided by MDraid, ZFS, Btrfs etc. Just search for HPE's
>> 4AA4-7186ENW.pdf file, which also mentions this.
>> 
>> In fact local storage was one reason why we turned away from HPE and Dell
>> after many years because we just didn't want to invest in outdated
>> technology.
>> 
> 
> I'm currently running an mdadm raid-check on two RAID-1 arrays, and the
> server shows two processes at 24-27% CPU each and two others at around 5%.
> And you want to tell me that the CPU load is almost non-existent.
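
Just as a practical note on the raid-check load mentioned above: the md check/resync rate, and with it the CPU it burns, can be capped via sysctl. A rough sketch, values being examples only:

  # watch check/resync progress of the arrays
  cat /proc/mdstat
  # cap the per-array check/resync rate (KB/s); 50 MB/s here is just an example
  sysctl -w dev.raid.speed_limit_max=50000
  # the minimum rate md tries to sustain even under I/O load
  sysctl -w dev.raid.speed_limit_min=10000

That doesn't settle the argument, of course, it only trades check time for CPU.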

The hardware vs software RAID discussion is like a clash of two different religions. I am, BTW, on your religious side: hardware RAID. For a different reason: hardware RAID is a small piece of code (hence well debugged) running on dedicated hardware. Thus, things like a kernel panic (of the main system, the one that would be running software RAID) do not affect hardware RAID function, whereas software RAID will not do its job during a kernel panic. And whereas an unclean filesystem can be dealt with, an “unclean” RAID pretty much cannot.

But again, it is akin to religion, and after both sides have fired off all their ammunition, everyone returns to the same side they were on before the “discussion”.

So, I would just suggest… Hm, never mind. Everyone, do what you feel is right ;-)

Valeri

> I've also constantly seen much better performance with hardware RAID than
> with software RAID over the years, with ZFS having the worst performance of
> anything, even with SSD caches.
> 
> It speaks for itself, and, like I said, I have yet to see a software RAID
> that doesn't bring the performance down.  Show me one that doesn't.
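
If anyone wants to put numbers behind that challenge, fio run against the md device and then against a single member disk is the usual way to compare. A rough sketch, with hypothetical device names and example parameters, reads only:

  # 4k random reads against the RAID-1 device (reads are safe; don't write to a live array)
  fio --name=md-raid1 --filename=/dev/md0 --direct=1 --rw=randread \
      --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
  # repeat with --filename pointed at one member disk for the baseline

For write tests, point fio at a file on a scratch filesystem instead of the raw device.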
> 
> Are there any hardware RAID controllers designed for NVMe storage you could
> use to compare software RAID with?  Are there any ZFS or btrfs hardware
> controllers you could compare with?

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos



