Re: RBD with SSD journals and SAS OSDs


 




> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of William Josefsson
> Sent: 20 October 2016 10:25
> To: Nick Fisk <nick@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  RBD with SSD journals and SAS OSDs
> 
> On Mon, Oct 17, 2016 at 6:16 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > Did you also set /check the c-states, this can have a large impact as well?
> 
> Hi Nick. I did try intel_idle.max_cstate=0, and I've got quite a significant improvement, as attached below. Thanks for this advice!
> This is still with DIRECT=1, SYNC=1, BS=4k, RW=WRITE.

Excellent, glad it worked for you. It's surprising what the power-saving features can do to bursty performance.
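
If you want to see what the cores are allowed to do before and after the change, something along these lines should work. Treat it as a sketch only: cpupower usually comes from a kernel-tools/linux-tools package, and the grub commands and paths differ between distros.

  # Show which C-states the CPUs may enter and their exit latencies
  cpupower idle-info

  # Make the setting permanent via the kernel command line
  # (intel_idle.max_cstate=0 is what you already tested; processor.max_cstate=1
  # is sometimes added for the ACPI idle driver as well)
  # /etc/default/grub:
  #   GRUB_CMDLINE_LINUX="... intel_idle.max_cstate=0 processor.max_cstate=1"
  # then rebuild the grub config (grub2-mkconfig or update-grub) and reboot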

> 
> I also wanted to ask you about NUMA. Some argue it should be disabled for high performance. My hosts are dual socket, 2x 2630v4
> 2.2GHz. Do you have any suggestions around whether to enable or disable NUMA, and what the impact would be? Thx will

I don't have much experience in this area, other than knowing that NUMA can impact performance, which is why all my
recent OSD nodes have been single socket. I took the easy option :-)

There are two things you need to be aware of, I think:

1. Storage and network controllers could be attached to different sockets, causing data to be dragged over the interconnect bus.
There isn't much you can do about this apart from careful placement of the PCIe cards, and one socket will always suffer to some
extent. You can at least check which node each device hangs off (see the first snippet below).

2. OSD processes flipping between sockets. I think this has been discussed here in the past. I believe some gains can be had by
pinning the OSD processes to certain cores (a rough sketch is in the second snippet below), but your best bet would be to search
the archives, as I can't offer much first-hand advice.
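
For 1., you can at least see which NUMA node the NIC and HBA hang off. The device name and PCI address below are just placeholders; find the real ones with lspci and ip link first.

  # NUMA node of a network interface (-1 means the kernel doesn't know)
  cat /sys/class/net/eth0/device/numa_node

  # NUMA node of any PCI device, e.g. the HBA
  lspci | grep -i -e raid -e sas
  cat /sys/bus/pci/devices/0000:03:00.0/numa_node

  # Overall topology, including which cores belong to which node
  numactl --hardware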
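
For 2., I haven't done this myself, so treat the following as a sketch only. The core numbers and unit names are examples; check lscpu / numactl --hardware for your actual layout.

  # One-off: pin an already-running OSD process to the cores on socket 0
  taskset -pc 0-9 <pid-of-ceph-osd>

  # More permanent, if the OSDs run under systemd (ceph-osd@.service):
  # /etc/systemd/system/ceph-osd@.service.d/cpuaffinity.conf
  #   [Service]
  #   CPUAffinity=0 1 2 3 4 5 6 7 8 9
  # then: systemctl daemon-reload && systemctl restart ceph-osd.target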


> 
> 
> 
> simple-write-62: (groupid=14, jobs=62): err= 0: pid=2133: Thu Oct 20 12:47:58 2016
>   write: io=1213.8MB, bw=41421KB/s, iops=10355, runt= 30006msec
>     clat (msec): min=2, max=81, avg= 5.99, stdev= 2.72
>      lat (msec): min=2, max=81, avg= 5.99, stdev= 2.72
>     clat percentiles (usec):
>      |  1.00th=[ 2864],  5.00th=[ 3184], 10.00th=[ 3376], 20.00th=[ 3696],
>      | 30.00th=[ 3984], 40.00th=[ 4576], 50.00th=[ 6048], 60.00th=[ 6688],
>      | 70.00th=[ 7264], 80.00th=[ 7712], 90.00th=[ 8640], 95.00th=[ 9920],
>      | 99.00th=[12480], 99.50th=[13248], 99.90th=[38656], 99.95th=[41728],
>      | 99.99th=[81408]
>     bw (KB  /s): min=  343, max= 1051, per=1.62%, avg=669.64, stdev=160.04
>     lat (msec) : 4=30.01%, 10=65.31%, 20=4.53%, 50=0.12%, 100=0.02%
>   cpu          : usr=0.04%, sys=0.54%, ctx=636287, majf=0, minf=1905
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued    : total=r=0/w=310721/d=0, short=r=0/w=0/d=0

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


