Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...


 



Spinners are slow anyway, but on top of that SAS disks often ship with the
write cache disabled (writecache=off). When a disk is used on its own, with
no RAID write-hole risk, you can safely turn the write cache on. On SAS I
would assume the firmware does not lie about writes reaching stable storage
(i.e. flushes are honoured).

    # turn on temporarily:
    sdparm --set=WCE /dev/sdX

    # turn on persistently:
    sdparm --set=WCE --save /dev/sdX
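
If you want to see whether the cache actually changes anything, a quick 4k
sync-write fio run before and after toggling WCE is a reasonable check. This
is only a sketch: /dev/sdX is a placeholder, and the run overwrites data on
the device, so point it at a scratch disk only.

    # DESTRUCTIVE: only run against an empty test disk
    fio --name=wce-test --filename=/dev/sdX --rw=randwrite --bs=4k \
        --iodepth=1 --direct=1 --fsync=1 --runtime=30 --time_based \
        --group_reporting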


To check current state:

    sdparm --get=WCE /dev/sdf
        /dev/sdf: SEAGATE   ST2000NM0045      DS03
        WCE         0  [cha: y, def:  0, sav:  0]

"WCE 0" means: off
"sav: 0" means: off next time the disk is powered on


Matthias


On Thu, Apr 06, 2023 at 09:26:27AM -0400, Anthony D'Atri wrote:
> How bizarre, I haven’t dealt with this specific SKU before.  Some Dell / LSI HBAs call this passthrough mode, some “personality”, some “jbod mode”, dunno why they can’t be consistent.
> 
> 
> > We are testing an experimental Ceph cluster with server and controller at
> > subject.
> > 
> > The controller does not have an HBA mode, only a 'NonRAID' mode, some sort
> > of 'auto RAID0' configuration.
> 
> Dell’s CLI guide describes setting individual drives in Non-RAID, which *smells* like passthrough, not the more-complex RAID0 workaround we had to do before passthrough.
> 
> https://www.dell.com/support/manuals/en-nz/perc-h750-sas/perc_cli_rg/set-drive-state-commands?guid=guid-d4750845-1f57-434c-b4a9-935876ee1a8e&lang=en-us;
> > 
> > We are using SATA SSD disks (MICRON MTFDDAK480TDT) that perform very well,
> > and SAS HDD disks (SEAGATE ST8000NM014A) that instead perform very badly
> > (in particular, very low IOPS).
> 
> Spinners are slow, this is news?
> 
> That said, how slow is slow?  Testing commands and results or it didn’t happen.
> 
> Also, firmware matters.  Run Dell’s DSU.
> 
> > Are there any hints for disk/controller configuration/optimization?
> 
> Give us details, perccli /c0 show, test results etc.  
> 
> Use a different HBA if you have to use an HBA, one that doesn’t suffer an RoC.  Better yet, take an expansive look at TCO and don’t write off NVMe as infeasible.  If your cluster is experimental hopefully you aren’t stuck with a lot of these.  Add up the cost of an RoC HBA, optionally with cache RAM and BBU/supercap, add in the cost delta for SAS HDDs over SATA.  Add in the operational hassle of managing WAL+DB on those boot SSDs.  Add in the extra HDDs you’ll need to provision because of IOPS. 
> 
> > 
> > 
> > Thanks.
> > 
> > -- 
> >  I believe in chemistry as much as Julius Caesar believed in chance...
> >  it's fine with me as long as it doesn't involve me :)	(Emanuele Pucciarelli)
> > 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



