Re: Adding second interface to storage network - issue

Jumbo frames on the cluster network have been run by quite a few operators without any problems. Admittedly, I’ve not run it that way in a year now, but we plan on switching back to jumbo for the cluster.

 

I do agree that jumbo on the public network could result in poor behavior from clients if you’re not careful.
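For context, the public/cluster split I’m referring to is the one in ceph.conf. A minimal sketch, with made-up subnets:

    [global]
    public network = 192.168.1.0/24    # client-facing traffic; the risky place for jumbo
    cluster network = 10.0.40.0/24     # OSD replication and heartbeats; the usual jumbo candidate

Jumbo on the cluster side only touches OSD-to-OSD traffic, which is why it’s the safer of the two.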

 

Warren Wang

Walmart 

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of John Petrini <jpetrini@xxxxxxxxxxxx>
Date: Wednesday, November 30, 2016 at 1:09 PM
To: Mike Jacobacci <mikej@xxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Adding second interface to storage network - issue

 

Yes, that should work. Though I'd be wary of increasing the MTU to 9000, as this could introduce other issues. Jumbo frames don't provide a very significant performance increase, so I wouldn't recommend it unless you have a very good reason to make the change. If you do want to go down that path, I'd suggest getting LACP configured on all of the nodes before upping the MTU, and even then make sure you understand the requirements of a larger MTU size before introducing it on your network.
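If you do raise it, it's worth confirming that jumbo frames actually pass end to end before trusting them. A quick sanity check with ping (the host name is just a placeholder; 8972 = 9000 minus 28 bytes of IP/ICMP headers):

    ping -M do -s 8972 osd-node2    # -M do sets the DF bit, so this fails if any hop is still at MTU 1500

If that fails while -s 1472 works, something in the path never got the new MTU.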


___

John Petrini

NOC Systems Administrator   //   CoreDial, LLC   //   coredial.com   //   Twitter   LinkedIn   Google Plus   Blog
Hillcrest I, 751 Arbor Way, Suite 150, Blue Bell PA, 19422 
P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetrini@xxxxxxxxxxxx

Exceptional People. Proven Processes. Innovative Technology. Discover


 

On Wed, Nov 30, 2016 at 1:01 PM, Mike Jacobacci <mikej@xxxxxxxxxx> wrote:

Hi John,


Thanks, that makes sense... So I take it that if I use the same IP for the bond, I shouldn't hit the issues I ran into last night?

 

Cheers,

Mike

 

On Wed, Nov 30, 2016 at 9:55 AM, John Petrini <jpetrini@xxxxxxxxxxxx> wrote:

For redundancy I would suggest bonding the interfaces using LACP; that way both ports are combined under the same interface with the same IP. They will both send and receive traffic, and if one link goes down the other continues to work. The ports will need to be configured for LACP on the switch as well.
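A rough sketch with iproute2, assuming the two 10G ports are ens1f0/ens1f1 (your distro will have its own persistent config for this, e.g. ifcfg files or netplan):

    # create an 802.3ad (LACP) bond and enslave both ports; slaves must be down first
    ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
    ip link set ens1f0 down && ip link set ens1f0 master bond0
    ip link set ens1f1 down && ip link set ens1f1 master bond0
    ip addr add 10.0.40.11/24 dev bond0    # placeholder; reuse the node's current storage IP
    ip link set bond0 up

The matching switch ports need to be in an LACP port-channel as well, or none of this will pass traffic.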


___

John Petrini


 

On Wed, Nov 30, 2016 at 12:15 PM, Mike Jacobacci <mikej@xxxxxxxxxx> wrote:

I ran into an interesting issue last night when I tried to add a second storage interface. The original 10Gb storage interface on the OSD node was only set at 1500 MTU, so the plan was to bump it to 9000, configure the second interface the same way with a different IP, and reboot. Once I did that, for some reason the original interface showed active but would not respond to pings from the other OSD nodes, while the second interface I added came up and was reachable. So even though the node could still communicate with the others on the second interface, PGs would start remapping and then get stuck at about 300 (of 1024). I resolved the issue by changing the config back on the original interface and disabling the second. After a reboot, the PGs recovered very quickly.
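(For reference, the addresses each OSD has registered with the monitors can be checked with something like the below; the grep just pulls out the per-OSD lines.)

    ceph osd dump | grep '^osd\.'    # one line per OSD, including the addresses it registered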

 

It seemed that the remapping would only get partway because the first node could reach the others, but they couldn't reach the original interface and didn't use the newly added second one. So for my questions:

 

Is there a proper way to add an additional interface (for redundancy) to the storage network so that it's recognized by the cluster?  

 

If IPv6 was enabled on the storage interface when the cluster was created, would it be a problem to disable it now?

 

Cheers,

Mike

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
