We have a pre-production Ceph cluster and are working towards a production cluster deployment. I have the following queries and would appreciate your expert tips:
1. Network architecture - We are planning separate public and cluster networks, with L2 on both. I understand that object/S3 access needs L3 for tenants/users to reach it from outside the network/overlay. What would you recommend to avoid network-related latency; for example, should we have a tiered network? We intend to go with the standard spine-leaf model, with dedicated ToR switches for storage and dedicated leafs for clients/hypervisors/compute nodes.
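For reference, the public/cluster split we have in mind would look roughly like this in ceph.conf (the subnets below are placeholders, not our actual ranges):

```
[global]
# Front-side network: client, MON, and RGW traffic (compute/hypervisor leafs)
public_network = 10.10.0.0/24

# Back-side network: OSD replication and recovery traffic (storage ToR)
cluster_network = 10.20.0.0/24
```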
2. Node design - We are planning to host a mixed set of drives (NVMe, SSD and NL-SAS) in each node, in a specific ratio. The intent is to avoid CPU saturation that all-NVMe high-performance nodes would cause. Please share your opinion.
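To illustrate the mixed-media layout: our assumption is that Ceph's device classes (auto-detected as nvme/ssd/hdd at OSD creation) would let us keep separate pools per media type even with all three in one node. A sketch of what we are picturing, with hypothetical rule and pool names:

```
# One CRUSH rule per device class, failure domain = host
ceph osd crush rule create-replicated fast     default host nvme
ceph osd crush rule create-replicated medium   default host ssd
ceph osd crush rule create-replicated capacity default host hdd

# Hypothetical pools pinned to each rule
ceph osd pool create rbd-fast 128 128 replicated fast
ceph osd pool create rgw-data 256 256 replicated capacity
```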
3. S3 traffic - What is a secure way to provide object storage in a multi-tenant environment, given that the HA'd LB/RGW pair will sit in an underlay that cannot be exposed to clients/users in the tenant network? Is there a way to assign an external IP as a VIP on the LB/RGW that could be used in common by all tenants?
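On the VIP question, what we are picturing is a keepalived VRRP instance floating an external VIP across two dual-homed load balancers (one leg in the tenant-reachable network, one leg in the underlay to reach the RGWs), with HAProxy terminating on the VIP. All addresses and names below are placeholders:

```
# keepalived.conf sketch: float the external VIP between the two LB nodes
vrrp_instance RGW_VIP {
    state MASTER
    interface eth0                # tenant-facing / external leg
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        203.0.113.10/24           # placeholder external VIP shared by all tenants
    }
}

# haproxy.cfg sketch: terminate TLS on the VIP, balance to RGWs in the underlay
frontend s3_front
    bind 203.0.113.10:443 ssl crt /etc/haproxy/s3.pem
    default_backend rgw_back

backend rgw_back
    balance roundrobin
    server rgw1 10.20.0.11:7480 check
    server rgw2 10.20.0.12:7480 check
```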
Thanks in advance.
Regards
Radha Krishnan S
TCS Enterprise Cloud Practice
Tata Consultancy Services
Cell:- +1 848 466 4870
Mailto: radhakrishnan2.s@xxxxxxx
Website: http://www.tcs.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com