Re: Separate Network (RBD, RGW) and CephFS

On Wed, Mar 1, 2017 at 10:15 AM, Jimmy Goffaux <jimmy@xxxxxxxxxx> wrote:
>
>
> Hello,
>
> I have a Ceph cluster (10.2.5-1trusty) which I use in several ways:
>
> - Block
> - Object
> - CephFS
>
> root@ih-par1-cld1-ceph-01:~# cat /etc/ceph/ceph.conf
> [....]
> mon_host = 10.4.0.1, 10.4.0.3, 10.4.0.5
> [....]
> public_network = 10.4.0.0/24
> cluster_network = 192.168.33.0/24
> [....]
>
> I have dedicated servers for Block and Object storage, and other servers
> for CephFS (full SSD):
>
> root@ih-par1-cld1-ceph-01:~# ceph osd tree
> ID  WEIGHT    TYPE NAME                     UP/DOWN REWEIGHT PRIMARY-AFFINITY
>  -6   2.79593 root ssdforcephfs
>  -7   0.46599     host ih-prd-cephfs-02
>  32   0.23299         osd.32                     up  1.00000          1.00000
>  33   0.23299         osd.33                     up  1.00000          1.00000
>  -8   0.46599     host ih-prd-cephfs-03
>  34   0.23299         osd.34                     up  1.00000          1.00000
>  35   0.23299         osd.35                     up  1.00000          1.00000
>  -9   0.46599     host ih-prd-cephfs-05
>  36   0.23299         osd.36                     up  1.00000          1.00000
>  37   0.23299         osd.37                     up  1.00000          1.00000
> -10   0.46599     host ih-prd-cephfs-01
>  38   0.23299         osd.38                     up  1.00000          1.00000
>  39   0.23299         osd.39                     up  1.00000          1.00000
> -11   0.46599     host ih-prd-cephfs-04
>  40   0.23299         osd.40                     up  1.00000          1.00000
>  41   0.23299         osd.41                     up  1.00000          1.00000
> -12   0.46599     host ih-prd-cephfs-07
>  42   0.23299         osd.42                     up  1.00000          1.00000
>  43   0.23299         osd.43                     up  1.00000          1.00000
>  -1 116.47998 root default
>  -2  43.67999     host ih-par1-cld1-ceph-01
>   0   3.64000         osd.0                      up  1.00000          1.00000
>   2   3.64000         osd.2                      up  1.00000          1.00000
>   6   3.64000         osd.6                      up  1.00000          1.00000
>   8   3.64000         osd.8                      up  1.00000          1.00000
>  15   3.64000         osd.15                     up  1.00000          1.00000
>  16   3.64000         osd.16                     up  1.00000          1.00000
>  19   3.64000         osd.19                     up  1.00000          1.00000
>  22   3.64000         osd.22                     up  1.00000          1.00000
>  24   3.64000         osd.24                     up  1.00000          1.00000
>  26   3.64000         osd.26                     up  1.00000          1.00000
>  28   3.64000         osd.28                     up  1.00000          1.00000
>  30   3.64000         osd.30                     up  1.00000          1.00000
>  -3  43.67999     host ih-par1-cld1-ceph-03
>   1   3.64000         osd.1                      up  1.00000          1.00000
>   3   3.64000         osd.3                      up  1.00000          1.00000
>   5   3.64000         osd.5                      up  1.00000          1.00000
>   7   3.64000         osd.7                      up  1.00000          1.00000
>  13   3.64000         osd.13                     up  1.00000          1.00000
>   4   3.64000         osd.4                      up  1.00000          1.00000
>  20   3.64000         osd.20                     up  1.00000          1.00000
>  23   3.64000         osd.23                     up  1.00000          1.00000
>  25   3.64000         osd.25                     up  1.00000          1.00000
>  27   3.64000         osd.27                     up  1.00000          1.00000
>  29   3.64000         osd.29                     up  1.00000          1.00000
>  31   3.64000         osd.31                     up  1.00000          1.00000
>  -5  29.12000     host ih-par1-cld1-ceph-05
>   9   3.64000         osd.9                      up  1.00000          1.00000
>  10   3.64000         osd.10                     up  1.00000          1.00000
>  11   3.64000         osd.11                     up  1.00000          1.00000
>  12   3.64000         osd.12                     up  1.00000          1.00000
>  14   3.64000         osd.14                     up  1.00000          1.00000
>  17   3.64000         osd.17                     up  1.00000          1.00000
>  18   3.64000         osd.18                     up  1.00000          1.00000
>  21   3.64000         osd.21                     up  1.00000          1.00000
>
> I use OpenNebula with RBD on the public network: 10.4.0.0/16.
>
> I would like to separate the RBD, RGW and CephFS networks... At the moment
> my CephFS customers can reach the whole RBD network, including the
> OpenNebula hypervisors.
>
> Example:
>
> Customer A (CephFS, path: /client1) => currently reaches the whole
> 10.4.0.0/16 network
> Customer B (CephFS, path: /client2) => currently reaches the whole
> 10.4.0.0/16 network
>
> How is it possible to separate the RBD and RGW networks and have multiple
> access networks for CephFS?

CephFS clients talk directly to OSDs, just like RBD clients -- so if you want
to avoid giving your CephFS clients access to your Ceph public network, the
simplest way to accomplish that is to access the filesystem via an NFS
server.
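
For example, the gateway could be the only machine with access to the Ceph
public network, mount CephFS itself, and re-export each customer's directory
over NFS. A minimal sketch, assuming a kernel CephFS mount at /mnt/cephfs and
per-customer access networks 10.5.1.0/24 and 10.5.2.0/24 (both invented for
illustration):

    # On the NFS gateway (the only host on the Ceph public network):
    mount -t ceph 10.4.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # /etc/exports -- each customer sees only its own subtree,
    # and only from its own access network:
    /mnt/cephfs/client1  10.5.1.0/24(rw,no_subtree_check)
    /mnt/cephfs/client2  10.5.2.0/24(rw,no_subtree_check)

The customers' machines then only ever talk to the NFS gateway, never to the
mons or OSDs directly.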

OTOH, if you just want your CephFS clients to be on the public network but
unable to reach the non-CephFS clients (i.e. the hypervisors), then that just
calls for a firewall.
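
A minimal sketch, assuming the CephFS clients live in a dedicated range
(10.4.0.128/25 is invented for illustration) and the rule runs on each
hypervisor:

    # On each hypervisor: drop anything coming from the CephFS client range.
    iptables -A INPUT -s 10.4.0.128/25 -j DROP

The mons and OSDs stay reachable from that range, so CephFS keeps working;
only the hypervisors become invisible to the CephFS clients.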

John

>
> I hope that was clear :/
>
> Thank you
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



