Re: Repurposing some Dell R750s for Ceph

Agree with everything Robin wrote here.  RAID HBAs FTL.  Even in passthrough mode, it’s still an [absurdly expensive] point of failure, but a server in the rack is worth two on backorder.

Moreover, I’m told it is possible to retrofit these chassis with direct cables and possibly an AIC mux / expander.

e.g.
https://www.ebay.com/itm/176400760681

Granted, I haven’t done this personally, so I can’t speak to the BOM and procedure.  For OSD nodes it probably isn’t worth the effort.

Some of the LSI^H^H^H^HPERC HBAs — to my astonishment — don’t have a passthrough setting/mode.  This document, though, implies that this SKU does:

https://www.dell.com/support/manuals/en-ae/poweredge-r7525/perc11_ug/technical-specifications-of-perc-11-cards?guid=guid-aaaf8b59-903f-49c1-8832-f3997d125edf&lang=en-us


You should be able to set individual drives to passthrough:

storcli64 /call /eall /sall set jbod=on

or, depending on the SKU and storcli revision, for the whole HBA:

storcli64 /call set personality=JBOD

Or, from the iDRAC side:

racadm set Storage.Controller.1.RequestedControllerMode HBA
or
racadm set Storage.Controller.1.RequestedControllerMode EnhancedHBA
then
      racadm jobqueue create RAID.Integrated.1-1
      racadm serveraction powercycle
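
If it takes, something along these lines should confirm the new mode after the power cycle.  Sketch only: I believe CurrentControllerMode is the read-side attribute and that storcli reports the personality in its controller summary, but the exact names vary by iDRAC firmware and storcli build.

      racadm get Storage.Controller.1.CurrentControllerMode
      storcli64 /call show all
      storcli64 /call /eall /sall show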

LSI and Dell have not been particularly consistent with these beasts.
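
Once the controller is out of RAID mode, the real test is whether the OS sees the disks as plain block devices that Ceph will claim.  Whether they show up as nvme or sd devices with the H755N still in the path I honestly can’t say (per Robin’s point about NVMe presented as SCSI/SAS), so treat this as a sketch:

      lsblk -d -o NAME,MODEL,SIZE,TRAN
      ceph orch device ls

If ceph orch device ls reports them as available, ceph orch apply osd --all-available-devices (or an OSD service spec) can take it from there.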

— aad



>> Hello,
>> 
>> We would like to repurpose some Dell PowerEdge R750s for a Ceph cluster.
>> 
>> Currently the servers have one H755N RAID controller for every 8 drives (2 total).
> The N variant of H755N specifically? So you have 16 NVME drives in each
> server?
> 
>> I have been asking their technical support what needs to happen in
>> order for us to just rip out those raid controllers and cable the
>> backplane directly to the motherboard/PCIe lanes and they haven't been
>> super enthusiastic about helping me. I get it just buy another 50
>> servers, right? No big deal.
> I don't think the motherboard has enough PCIe lanes to natively connect
> all the drives: the RAID controller effectively functioned as an
> expander, so you needed fewer PCIe lanes on the motherboard.
> 
> As the quickest way forward: look for passthrough / single-disk / RAID0
> options, in that order, in the controller management tools (perccli etc).
> 
> I haven't used the N variant at all, and since it's NVME presented as
> SCSI/SAS, I don't want to trust the solution of reflashing the
> controller for IT (passthrough) mode.
> 
> -- 
> Robin Hugh Johnson
> Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
> E-Mail   : robbat2@xxxxxxxxxx
> GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
> GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
