Re: Proper Ceph network configuration


 



Yes, that's correct.

We use the public/cluster networks exclusively, so in the configuration we specify the MON addresses on the public network and define both the public and cluster network subnets.  I've not tested it, but I wonder whether it's possible to have the MON addresses on a 1GbE network, define the public/cluster networks in the config, and have things still operate?
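For illustration, a config along those lines might look like the following (the subnets and addresses here are hypothetical placeholders, not our actual ones):

[global]
public network  = 10.0.10.0/24   # MON and client traffic
cluster network = 10.0.20.0/24   # OSD-to-OSD replication

[mon.a]
mon addr = 10.0.10.11:6789       # MON address sits on the public subnet

The untested variant would be pointing mon addr at a separate 1GbE subnet that isn't the public network and seeing whether the cluster still behaves.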


From: "Jon Heese" <jheese@xxxxxxxxx>
To: "Bill Campbell" <bcampbell@xxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Friday, October 23, 2015 10:03:46 AM
Subject: RE: Proper Ceph network configuration

Bill,

 

Thanks for the explanation – that helps a lot.  In that case, I actually want the 10.174.1.0/24 network to be both my cluster and my public network, because I want all “heavy” data traffic to be on that network.  And by “heavy”, I mean large volumes of data, both normal Ceph client traffic and OSD-to-OSD communication.  Contrast this with the more “control plane” connections between the MONs and the OSDs, which we intend to go over the lighter-weight management network.

 

The documentation seems to indicate that the MONs also communicate on the “public” network, but our MONs aren’t currently on that network (we were treating it as an OSD/Client network).  I guess I need to put them on that network…?
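If I understand correctly, a minimal sketch of what we'd end up with is something like this (MON addresses are hypothetical, assuming we re-address the MONs onto 10.174.1.0/24):

public network  = 10.174.1.0/24   # fat pipe: client, MON, and OSD "public" traffic
cluster network = 10.174.1.0/24   # fat pipe: OSD-to-OSD replication as well
mon host = 10.174.1.11, 10.174.1.12, 10.174.1.13   # hypothetical MON addresses on that subnet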

 

Thanks.

 

Jon Heese
Systems Engineer
INetU Managed Hosting
P: 610.266.7441 x 261
F: 610.266.7434
www.inetu.net

** This message contains confidential information, which also may be privileged, and is intended only for the person(s) addressed above. Any unauthorized use, distribution, copying or disclosure of confidential and/or privileged information is strictly prohibited. If you have received this communication in error, please erase all copies of the message and its attachments and notify the sender immediately via reply e-mail. **

From: Campbell, Bill [mailto:bcampbell@xxxxxxxxxxxxxxxxxxxx]
Sent: Friday, October 23, 2015 9:11 AM
To: Jon Heese <jheese@xxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Proper Ceph network configuration

 

The "public" network is where all storage accesses from other systems or clients will occur.  When you map RBD's to other hosts, access object storage through the RGW, or CephFS access, you will access the data through the "public" network.  The "cluster" network is where all internal replication between OSD processes will occur.  As an example in our set up, we have a 10GbE public network for hypervisor nodes to access, along with a 10GbE cluster network for back-end replication/communication.  Our 1GbE network is used for monitoring integration and system administration.

 


From: "Jon Heese" <jheese@xxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Friday, October 23, 2015 8:58:28 AM
Subject: Proper Ceph network configuration

 

Hello,

 

We have two separate networks in our Ceph cluster design:

 

10.197.5.0/24 - The "front end" network, "skinny pipe", all 1GbE, intended to be a management/control-plane network

10.174.1.0/24 - The "back end" network, "fat pipe", all OSD nodes use 2x bonded 10GbE, intended to be the data network

 

So we want all of the OSD traffic to go over the "back end", and the MON traffic to go over the "front end".  We thought the following would do that:

 

public network = 10.197.5.0/24   # skinny pipe, mgmt & MON traffic

cluster network = 10.174.1.0/24  # fat pipe, OSD traffic

 

But that doesn't seem to be the case -- iftop and netstat show that little/no OSD communication is happening over the 10.174.1 network and it's all happening over the 10.197.5 network.
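For reference, this is roughly how we've been checking (a sketch of the kind of commands, not actual output from our cluster):

ceph osd dump | grep '^osd'        # each OSD line shows its public and cluster addresses
netstat -tnp | grep ceph-osd       # which local IPs the OSD processes are bound to

Both are consistent with what we're seeing: everything lands on the 10.197.5 network.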

 

What configuration should we be running to enforce the networks per our design?  Thanks!

 

Jon Heese
Systems Engineer
INetU Managed Hosting
P: 610.266.7441 x 261
F: 610.266.7434
www.inetu.net




 

 


 



NOTICE: Protect the information in this message in accordance with the company's security policies. If you received this message in error, immediately notify the sender and destroy all copies.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
