Re: Latency for the Public Network

On 02/06/2018 04:03 AM, Christian Balzer wrote:
> Hello,
>
> On Mon, 5 Feb 2018 22:04:00 +0100 Tobias Kropf wrote:
>
>> Hi ceph list,
>>
>> we have a hyperconverged Ceph cluster with KVM on 8 nodes running
>> Ceph Hammer 0.94.10.
> Do I smell Proxmox?
Yes, we currently use Proxmox.
>
>> The cluster is now 3 years old and we are planning a new
>> cluster for a high-IOPS project. We use replicated pools (size 3, min 2)
>> and do not have the best latency on our switch backend.
>>
>>
>> ping -s 8192 10.10.10.40 
>>
>> 8200 bytes from 10.10.10.40: icmp_seq=1 ttl=64 time=0.153 ms
>>
> Not particularly great, yes.
> However your network latency is only one factor, Ceph OSDs add quite
> another layer there and do affect IOPS even more usually. 
> For high IOPS you need of course fast storage, network AND CPUs. 
Yes, we know that... the network is our first task. We are planning new
hardware for the MON and OSD services, with plenty of NVMe flash disks and
high-clock-speed CPUs.
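As a rough sanity check (my own back-of-the-envelope numbers, not from this thread): the network round-trip time alone puts a ceiling on synchronous, queue-depth-1 IOPS, before the OSDs add any latency of their own:

```python
# Rough ceiling on synchronous, queue-depth-1 IOPS imposed by network
# round-trip time alone (ignores OSD, disk, and client-side latency).
def qd1_iops_ceiling(rtt_ms):
    return 1000.0 / rtt_ms

# With the ~0.153 ms RTT from the ping above:
print(round(qd1_iops_ceiling(0.153)))  # 6536
```

Real write IOPS will be far lower once OSD and replication latency are added on top, which is why every microsecond on the wire matters here.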
>
>> We plan to split the hyperconverged setup into storage and compute nodes
>> and want to separate the Ceph cluster and public networks: the cluster
>> network on 40 Gbit Mellanox switches and the public network on the
>> existing 10 Gbit switches.
>>
> You'd do a lot better if you were to go all 40Gb/s and forget about
> splitting networks. 
You mean running the public and cluster networks over the same NICs and in the same subnet?
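If I understand correctly, that would mean something like this in ceph.conf (subnet is a placeholder for our environment): only the public network is set, and with "cluster network" left unset, Ceph carries replication traffic over the same interfaces.

```ini
[global]
# One fast (40Gb/s) network for everything: define only the public
# network and leave "cluster network" unset, so OSD replication
# traffic uses the same NICs. Subnet below is a placeholder.
public network = 10.10.10.0/24
```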
>
> The faster replication network will:
> a) be underutilized all of the time in terms of bandwidth 
> b) not help with read IOPS at all
> c) still be hobbled by the public network latency when it comes to write
> IOPS (but of course help in regards to replication latency). 
>
>> Now my question... is 0.153 ms - 0.170 ms fast enough for the public
>> network? We must deploy a setup with 1500 - 2000 terminal servers...
>>
> Define terminal server, are we talking Windows Virtual Desktops with RDP?
> Windows is quite the hog when it comes to I/O.
Yes, we are talking about Windows virtual desktops with RDP.
Our calculation is: 1x DC = 60-80 IOPS, 1x TS = 60-80 IOPS, plus N users *
10 IOPS each...
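To put numbers on that estimate (the counts below are assumptions for illustration, taking the worst case of each range above, not figures from this thread):

```python
# Back-of-the-envelope aggregate IOPS for the planned deployment.
# All counts are assumed for illustration purposes.
N_DC = 2          # domain controllers (assumed count)
N_TS = 2000       # terminal servers (upper end of the 1500-2000 range)
N_USERS = 2000    # concurrent users (assumed)

IOPS_DC = 80      # worst case of the 60-80 IOPS estimate above
IOPS_TS = 80
IOPS_USER = 10

total = N_DC * IOPS_DC + N_TS * IOPS_TS + N_USERS * IOPS_USER
print(total)  # 180160
```

Even if the real numbers land at half of that, it illustrates why the backend needs both fast flash and low latency end to end.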

For this system we want to work with cache tiering: NVMe disks in a cache
tier in front of an EC pool on SATA disks. Is cache tiering a good idea in
this setup?
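For reference, the basic commands to put a writeback cache tier in front of an EC pool look roughly like this (pool names, PG counts, and thresholds below are placeholders, not a tested configuration for our cluster):

```shell
# Create the backing EC pool and the NVMe cache pool (names and pg
# counts are placeholders; the EC profile must fit the cluster).
ceph osd pool create ecpool 1024 1024 erasure
ceph osd pool create cachepool 512 512

# Attach the cache pool as a writeback tier in front of the EC pool.
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool

# Hit-set tracking plus size/dirty thresholds that drive flush/evict.
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool target_max_bytes 1099511627776   # 1 TiB
ceph osd pool set cachepool cache_target_dirty_ratio 0.4
ceph osd pool set cachepool cache_target_full_ratio 0.8
```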


>
> Regards,
>
> Christian

-- 
Tobias Kropf
Technik
--
inett GmbH » Your IT systems house in Saarbrücken
Mainzerstrasse 183
66121 Saarbrücken
Managing Director: Marco Gabriel
Commercial Register Saarbrücken, HRB 16588

Phone: 0681 / 41 09 93 – 0
Fax: 0681 / 41 09 93 – 99
E-Mail: info@xxxxxxxx
Web: www.inett.de

Cyberoam Gold Partner - Zarafa Gold Partner - Proxmox Authorized Reseller - Proxmox Training Center - SEP sesam Certified Partner – Open-E Partner - Endian Certified Partner - Kaspersky Silver Partner – ESET Silver Partner - Member of the iTeam systems house association for SMEs

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



