Re: Multi-Tenancy: Network Isolation

Hi Vlad,

Thanks for chiming in.

>>It's not clear what you want to achieve from the Ceph point of view.
Multi-tenancy. We will have multiple tenants on different isolated subnets/networks accessing a single Ceph cluster. The only problem I see with Ceph in a physical environment is that I cannot isolate the public network (for example, the MON and MDS endpoints) per subnet/network/tenant.

>>For example, for network isolation you can use managed switches, set up different VLANs, and put the Ceph hosts into each VLAN.
Yes, we have managed switches with VLANs. But if I add, for example, two public interfaces on Net1 (subnet 192.168.1.0/24) and Net2 (subnet 192.168.2.0/24), what does ceph.conf look like, and what do my MON and MDS server configurations look like? That's the challenge/question.
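
To make the question concrete, here is the kind of ceph.conf I was imagining. The comma-delimited "public network" list is in the Ceph docs, but the MON/MDS names and addresses below are made up, and whether one MON can actually serve both subnets is exactly what I am unsure about:

    [global]
        # Both tenant VLAN subnets listed as public networks
        # (the docs allow a comma-delimited list; unverified in practice)
        public network = 192.168.1.0/24, 192.168.2.0/24
        # Keep replication/recovery traffic on a separate backend subnet
        cluster network = 10.10.0.0/24

    [mon.a]
        host = mon1
        # A MON binds to one address; which subnet should it sit on,
        # and how do tenants on the other subnet reach it?
        mon addr = 192.168.1.10:6789

    [mds.a]
        host = mds1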

>>But it's a shot in the dark, as I don't know exactly what you need. For example, what services (block storage, object storage, API, etc.) you want to offer to your tenants, and so on.

CephFS and Object. I am familiar with how to make the Ceph storage side "tenant friendly"; it's just the network part I need to isolate.
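
For the archives, the storage-side isolation I mean is along these lines; just a sketch, and the tenant names, paths, and user IDs are examples:

    # CephFS: give each tenant a cephx key restricted to its own subtree
    ceph fs authorize cephfs client.tenant1 /tenant1 rw

    # Object (RGW): create users under a per-tenant namespace so buckets
    # and users from different tenants cannot collide
    radosgw-admin user create --tenant tenant1 --uid user1 --display-name "Tenant1 User"

So the data-path side is covered; what I am missing is the equivalent on the network side.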

--
Deepak

> On May 26, 2017, at 12:03 AM, Дробышевский, Владимир <vlad@xxxxxxxxxx> wrote:
> 
>   It's not clear what you want to achieve from the Ceph point of view. For example, for network isolation you can use managed switches, set up different VLANs, and put the Ceph hosts into each VLAN. But it's a shot in the dark, as I don't know exactly what you need: for example, what services (block storage, object storage, API, etc.) you want to offer to your tenants, and so on.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



