Re: Access cephfs from second public network

Clients also need to be able to reach the OSDs and MDS servers directly; proxying only the mon port is not enough.
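A CephFS client only talks to the mons to fetch the cluster maps; it then opens direct TCP connections to the active MDS and to every OSD at the addresses advertised in those maps (6789 for the mons, typically the 6800-7300 range for OSDs and MDSs), so a proxy in front of the mon port alone cannot help. A rough way to check reachability from a client on the second network -- the daemon addresses below are placeholders, take the real ones from `ceph mon dump` and `ceph osd dump`:

```shell
#!/usr/bin/env bash
# Probe a TCP endpoint with a 1-second timeout, using bash's /dev/tcp.
reachable() {
  timeout 1 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder addresses -- substitute the mon/OSD/MDS addresses your
# cluster actually advertises (see: ceph mon dump, ceph osd dump).
for endpoint in 10.100.190.9:6789 10.100.190.20:6800 10.100.190.21:6800; do
  host=${endpoint%:*}
  port=${endpoint#*:}
  if reachable "$host" "$port"; then
    echo "reachable:   $endpoint"
  else
    echo "unreachable: $endpoint"
  fi
done
```

If the OSD/MDS addresses turn out to be unreachable from the second network, the fix is routing (or having the daemons bind to an address in that network), not a TCP proxy on the mon port.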


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Thu, Mar 21, 2019 at 1:02 PM Andres Rojas Guerrero <a.rojas@xxxxxxx> wrote:
>
>
> Hi all, we have deployed a Ceph cluster configured with two public networks:
>
> [global]
> cluster network = 10.100.188.0/23
> fsid = 88f62260-b8de-499f-b6fe-5eb66a967083
> mon host = 10.100.190.9,10.100.190.10,10.100.190.11
> mon initial members = mon1,mon2,mon3
> osd_pool_default_pg_num = 4096
> public network = 10.100.190.0/23,10.100.40.0/21
>
> Our problem is that we need to access cephfs from clients on the
> second public network. For this we have deployed haproxy in
> transparent mode, so that clients on the second network can connect
> to the mon (ceph-mon process, TCP port 6789) running on the first
> public network (10.100.190.0/23). In the haproxy configuration we
> have a frontend on the second public network and the backend on the
> mon network:
>
> frontend cephfs_mon
>
>         timeout client  6000000
>         mode tcp
>         bind 10.100.47.207:6789 transparent
>
>         default_backend ceph1_mon
>
> backend ceph1_mon
>
>         timeout connect 5000
>         source 0.0.0.0 usesrc clientip
>         server mon1 10.100.190.9:6789 check
>
>
> Then we try to mount cephfs from a client on the second public
> network, but we get a timeout:
>
>
> mount -t ceph 10.100.47.207:6789:/ /mnt/cephfs -o
> name=cephfs,secret=AQBOJ5JcXFJAIxAAs4+CBliifhBAD927K9Qaig==
>
> mount: mount 10.100.47.207:6789:/ on /mnt/cephfs failed: Expired
> connection time
>
> I can see traffic flowing back and forth between the client, haproxy,
> and the mon.
>
> On the other hand, if the client is on the first public network, it
> can access the cephfs resource without any problem.
>
> Does anybody have experience with this situation?
>
> Thank you very much.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



