Re: Micron SSD/Basic Config


Ok, so 100G seems to be the better choice. I will probably go with some of these. 

https://www.fs.com/products/75808.html





From: "Paul Emmerich" <paul.emmerich@xxxxxxxx> 
To: "EDH" <mriosfer@xxxxxxxxxxxxxxxx> 
Cc: "adamb" <adamb@xxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxx> 
Sent: Friday, January 31, 2020 8:49:29 AM 
Subject: Re:  Re: Micron SSD/Basic Config 

On Fri, Jan 31, 2020 at 2:06 PM EDH - Manuel Rios 
<mriosfer@xxxxxxxxxxxxxxxx> wrote: 
> 
> Hmm change 40Gbps to 100Gbps networking. 
> 
> 40Gbps technology is just a bond of 4x10G links, with some latency added by the link aggregation. 
> 100Gbps and 25Gbps have less latency and good performance. In Ceph, about 50% of the latency comes from network commits and the other 50% from disk commits. 

40G Ethernet is not the same as a 4x 10G bond. A bond load-balances on a 
per-packet (or, usually, per-flow) basis, while a 40G link stripes even a 
single packet across all four lanes. 
By that logic, 100G is "just" 4x 25G. 

I also wouldn't agree that network and disk latency is a 50/50 split 
in Ceph unless you have some NVRAM disks or something. 

Even at these network speeds, the processing and queuing in the network 
stack dominate over the serialization-delay difference between 40G and 
100G (serializing 4 kB takes about 320 ns at 100G and about 800 ns at 
40G; I don't have figures for processing times on 40/100G Ethernet, but 
10G fiber is around 300 ns and 10GBASE-T around 2300 ns). 
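
The serialization figures above are easy to reproduce; a quick sketch (assuming "4 kB" means 4000 bytes on the wire):

```python
# Serialization delay: the time it takes to clock a packet's bits onto
# the link. At 1 Gbit/s, one bit takes 1 ns, so bits / Gbps gives ns.

def serialization_delay_ns(packet_bytes: float, link_gbps: float) -> float:
    """Nanoseconds to serialize packet_bytes onto a link_gbps link."""
    bits = packet_bytes * 8
    return bits / link_gbps

print(serialization_delay_ns(4000, 100))  # -> 320.0 ns
print(serialization_delay_ns(4000, 40))   # -> 800.0 ns
```

The ~480 ns difference is small next to the microseconds a packet typically spends in NIC and kernel processing, which is the point being made above.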

Paul 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


