Re: HBA vs caching Raid controller

> > This is what I have when I query Prometheus. Most HDDs are still SATA
> > 5400 rpm; there are also some SSDs. I also did not optimize CPU
> > frequency settings. (Forget about instance=c03; that is just because
> > the data comes from mgr c03. These drives are on different hosts.)
> >
> > ceph_osd_apply_latency_ms
> >
> > ceph_osd_apply_latency_ms{ceph_daemon="osd.12", instance="c03", job="ceph"}	42
> > ...
> > ceph_osd_apply_latency_ms{ceph_daemon="osd.19", instance="c03", job="ceph"}	1
> 
> I assume this looks somewhat normal, with a bit of variance due to
> access.
> 
> > avg (ceph_osd_apply_latency_ms)
> > 9.333333333333336
> 
> I see something similar: around 9 ms average latency for HDD-based
> OSDs, with a best-case average of around 3 ms.
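For reference, the avg() above can be reproduced offline from a dump of the metric values; a minimal awk sketch over "osd value" pairs (the two sample values are taken from the output quoted above):

```shell
# avg_latency: read "name value" pairs on stdin and print the mean value.
avg_latency() {
    awk '{ sum += $2; n++ } END { if (n) printf "%.2f\n", sum / n }'
}

printf 'osd.12 42\nosd.19 1\n' | avg_latency   # prints 21.50
```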
> 
> > So I guess it is possible for you to get lower values on the LSI HBA
> 
> Can you let me know which exact model you have?

[~]# sas2flash -list
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

     Adapter Selected is a LSI SAS: SAS2308_2(D1)

     Controller Number              : 0
     Controller                     : SAS2308_2(D1)
     PCI Address                    : 00:04:00:00
     SAS Address                    : 500605b-0-05a6-c49e
     NVDATA Version (Default)       : 14.01.00.06
     NVDATA Version (Persistent)    : 14.01.00.06
     Firmware Product ID            : 0x2214 (IT)
     Firmware Version               : 20.00.07.00
     NVDATA Vendor                  : LSI
     NVDATA Product ID              : SAS9207-8i
     BIOS Version                   : 07.39.02.00
     UEFI BSD Version               : N/A
     FCODE Version                  : N/A
     Board Name                     : SAS9207-8i
     Board Assembly                 : N/A
     Board Tracer Number            : N/A

> 
> > Maybe you can tune read-ahead on the LSI with something like this:
> > echo 8192 > /sys/block/$line/queue/read_ahead_kb
> > echo 1024 > /sys/block/$line/queue/nr_requests
> 
> I tried both of them, even going up to 16 MB read-ahead, but
> apart from a short burst when changing the values, the average stays
> more or less the same on that host.
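For what it's worth, applying those two settings to every spinning disk can be scripted. A hedged sketch (the sd* glob and the rotational check are assumptions about the host's device naming; run as root, and the directory is parameterized for testing):

```shell
# tune_hdds: apply read-ahead and queue-depth settings to every
# rotational sd* disk under the given /sys/block-style directory.
tune_hdds() {
    sysblock="${1:-/sys/block}"
    for dev in "$sysblock"/sd*; do
        [ -e "$dev/queue/rotational" ] || continue
        # skip SSDs: only spinning disks report rotational == 1
        [ "$(cat "$dev/queue/rotational")" = "1" ] || continue
        echo 8192 > "$dev/queue/read_ahead_kb"
        echo 1024 > "$dev/queue/nr_requests"
    done
}

# Usage (as root): tune_hdds /sys/block
```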
> 
> I also checked CPU speed (same as the rest) and the I/O scheduler
> (using "none" really drives the disks crazy). What I observed is that
> the avq value in atop is lower than on the other servers: they are
> around 15, while this server is more in the range 1-3.
> 
> > Also check for PCIe 3; those have higher bus speeds.
> 
> True, although PCIe 2.0 x8 should be able to deliver 4 GB/s, if I am
> not mistaken.
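That matches the usual back-of-the-envelope math: PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding, so roughly 500 MB/s per lane and 4 GB/s for x8 (per direction). A quick awk sketch of the arithmetic:

```shell
# pcie_gbs GEN LANES -> approximate usable one-direction bandwidth in GB/s
pcie_gbs() {
    awk -v gen="$1" -v lanes="$2" 'BEGIN {
        # raw line rate per lane in GT/s, and encoding efficiency
        if (gen == 1)      { rate = 2.5; eff = 8/10 }      # 8b/10b
        else if (gen == 2) { rate = 5.0; eff = 8/10 }      # 8b/10b
        else               { rate = 8.0; eff = 128/130 }   # 128b/130b
        printf "%.2f\n", rate * eff * lanes / 8            # Gbit -> GByte
    }'
}

pcie_gbs 2 8   # prints 4.00: PCIe 2.0 x8 tops out around 4 GB/s
pcie_gbs 3 8   # prints 7.88: PCIe 3.0 x8 nearly doubles that
```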
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


