Re: HBA or RAID-0 + BBU

The LSI 9266/9271 controllers are also in an affected range unless they have been ECO'd.

> On Apr 19, 2023, at 3:13 PM, Sebastian <sebcio.t@xxxxxxxxx> wrote:
> 
> I want to add one thing to what the others have said; we discussed this between Cephalocon sessions: avoid the HP P210/P420 controllers, or upgrade their firmware to the latest version.
> These controllers have a strange bug: under high workload they restart themselves.
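
(For what it's worth, checking the controller model and firmware revision before deciding is cheap. A minimal sketch using HPE's ssacli tool, where the slot number is an assumption for illustration; ssacli only reports the firmware version, the update itself ships via HPE's SPP / firmware packages:

    # show controller status, firmware revision and battery/cache state
    ssacli ctrl all show status

    # full detail for one controller (slot 0 assumed here)
    ssacli ctrl slot=0 show detail
)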
> 
> BR, 
> Sebastian
> 
>> On 19 Apr 2023, at 08:39, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
>> 
>> On Wed, 19 Apr 2023 at 00:55, Murilo Morais <murilo@xxxxxxxxxxxxxx> wrote:
>>> Good evening everyone!
>>> Guys, about the P420 RAID controller, I have a question about the operation
>>> mode: What would be better: HBA or RAID-0 with BBU (active write cache)?
>> 
>> As already said, always give Ceph (and ZFS, Btrfs, ...) the raw disks
>> to handle by itself, instead of doing hardware striping/RAID and
>> so on.
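
To make that concrete, here is a minimal sketch of both halves: switching a Smart Array P420 into HBA/pass-through mode with ssacli, then handing the raw device to Ceph. The slot number, device name and host name are assumptions for illustration, and hbamode requires a sufficiently recent P420 firmware:

    # switch the controller to HBA (pass-through) mode;
    # the drives must not be members of any logical drive first,
    # and a reboot is typically needed for the change to apply
    ssacli ctrl slot=0 modify hbamode=on

    # then give the raw disk straight to Ceph
    ceph-volume lvm create --data /dev/sdb
    # or, on a cephadm-managed cluster:
    # ceph orch daemon add osd myhost:/dev/sdb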
>> 
>> There are multiple reasons for this, including weird/bad firmware and
>> (for raid0) loss of redundancy and so on, but there are also corner
>> cases where you may need/want to move disks from one box to another.
>> If you use "raw" disks, then what is on the platters is most often 1:1
>> with what the computer sees, so if you move the drive over to
>> another box, it will not matter whether it has an HBA or a RAID card,
>> or what brand of disk controller it is. The new server will see
>> exactly the data the former machine wrote, and this is what you want.
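
In practice that move is routine. A minimal sketch of bringing the OSDs up on the new box, assuming they were deployed with ceph-volume and the host has joined the same cluster:

    # the OSD metadata travels with the disk in its LVM tags
    ceph-volume lvm list

    # discover and start every OSD found on locally attached disks
    ceph-volume lvm activate --all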
>> 
>> If you have a raid0/jbod/raid1/raidX setup on a particular RAID
>> controller card with a particular firmware version, you may not be
>> able to move your complete RAID set over to a newer box. Perhaps the
>> RAID card model is no longer available, perhaps the new RAID firmware
>> stripes data or places sectors on the RAID members differently.
>> Perhaps it detects the disks in the wrong order, or something else
>> goes wrong.
>> 
>> It is entirely possible that you will have zero problems with RAID
>> setups, but the extra complexity of moving RAID sets from one
>> generation of computer/controller to another makes the risk of some
>> kind of portability issue non-zero. Old grumpy storage admins don't
>> like non-zero risks when they can actually be avoided.
>> 
>> -- 
>> May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx