Re: Messenger V2: multiple bind support

On Wed, Nov 1, 2017 at 7:43 PM Sage Weil <sage@xxxxxxxxxxxx> wrote:
>
> On Wed, 1 Nov 2017, Gregory Farnum wrote:
> > On Wed, Oct 25, 2017 at 7:43 AM Joao Eduardo Luis <joao@xxxxxxx> wrote:
> > >
> > > On 10/24/2017 03:36 PM, Ricardo Dias wrote:
> > > > Hi list,
> > > >
> > > > I was wondering if it makes sense to support multiple binds with a
> > > > single messenger instance.
> > > >
> > > > The use case I have in mind is when there are multiple public/cluster
> > > > networks specified in ceph.conf and we want to listen for connections
> > > > in one interface of each network. With the support for multiple binds,
> > > > we could use a single messenger instance to listen to all interfaces.
> > > >
> > > > Do you think it is worth implementing such support, or is the above
> > > > use case easily handled by having multiple messenger instances?
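
For context, this is the sort of multi-network setup being described; the
subnets below are made up for illustration (ceph.conf accepts
comma-separated networks):

    [global]
        public network  = 10.1.0.0/24, 10.2.0.0/24
        cluster network = 192.168.1.0/24, 192.168.2.0/24

Today a messenger binds to a single address, so listening on one interface
of each network means one messenger instance per address; multi-bind would
let a single instance cover all of them.
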
> > >
> > > I would think multiple binds would make sense, but how would this affect
> > > setting the policies? Would we share a single policy across multiple
> > > binds, or would we allow different policies for different binds?
> > >
> > > Regardless, and not knowing the code, wouldn't this also make sense for
> > > multiplexing multiple daemons on a single messenger? That would make the
> > > messenger a sort of black box that everyone could just use seamlessly
> > > without having to figure too many things out.
> > >
> > > E.g., instead of multiplexing several osds on a messenger for the public
> > > network, and then the same set of osds on a messenger for the cluster
> > > network, we could simply have one messenger (bound to public and
> > > cluster) and have just that one messenger for all the osds.
> > >
> > > Anyway, from an abstraction point of view it makes sense to me; no idea
> > > how feasible that would be, though.
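
A minimal sketch of the direction Joao describes, assuming Ceph's
msg/Messenger.h types; the second bind() call is hypothetical (today a
messenger binds once), while registering several dispatchers on one
messenger is the existing API:

    // Assumes Ceph's Messenger/Dispatcher types; create() arguments are
    // abbreviated here for illustration.
    Messenger *msgr = Messenger::create(cct, "async", name, "combined", nonce);
    msgr->bind(public_addr);                // existing single-bind call
    // msgr->bind(cluster_addr);            // hypothetical second bind
    msgr->add_dispatcher_head(osd_public);  // real API: several consumers
    msgr->add_dispatcher_tail(osd_cluster); // can share one messenger
    msgr->start();
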
> >
> > There's sort of two pieces here, right?
> > 1) From the Dispatcher perspective, we want a single "Messenger" that
> > it talks to. (At least, I'm pretty sure. Unless we need to be making
> > routing decisions using internal data? I really hope that never comes
> > up.) That could be set up by having multiple Messenger implementations
> > behind a "UnifyingMessenger" abstraction layer with pretty minimal
> > changes (a sketch of this idea follows below).
> >
> > 2) From an implementation perspective, which data structures do we end
> > up needing to share in order to have a daemon listening on multiple
> > ports?
> >
> > So, from the SimpleMessenger perspective, I don't think we get much
> > out of trying to build multi-bind into it. We'd basically get to
> > eliminate the duplicated DispatchQueue, which is not very interesting.
> > (Modulo some flow control stuff, perhaps.) From an AsyncMessenger
> > implementation perspective, that calculus might shift since we're
> > running on a thread pool, but I'm not deeply familiar with the code
> > internals.
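
A rough sketch of the "UnifyingMessenger" idea from point 1 above; this is
illustrative, not existing code, and leans on Ceph's Messenger,
Dispatcher, and entity_addr_t types:

    #include <vector>
    // Presents one object to the Dispatcher while fanning bind and
    // dispatcher registration out to one child Messenger per network.
    // Each child keeps its own bind address (and, potentially, policies).
    class UnifyingMessenger {
     public:
      void add_child(Messenger *m) { children.push_back(m); }
      int bind_all(const std::vector<entity_addr_t>& addrs) {
        // expects one address per child; the caller decides whether a
        // partial bind failure is fatal
        for (size_t i = 0; i < children.size() && i < addrs.size(); ++i) {
          int r = children[i]->bind(addrs[i]);
          if (r < 0)
            return r;
        }
        return 0;
      }
      void add_dispatcher(Dispatcher *d) {
        for (auto *m : children)
          m->add_dispatcher_head(d);  // single consumer behind every bind
      }
     private:
      std::vector<Messenger*> children;
    };
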
>
> I'm afraid that the problem that's going to come up is the peer_addr
> hash_map<> for stateful_peer connections (osd to osd, etc.).  This is down
> in the messenger layer and is the main piece of shared state left (most
> everything else works with Connection refs).  I'm worried that we'll want
> or need to pull that up out of the messenger into the consumer (e.g., a
> hash_map<int,ConnectionRef> osd_peers in OSDService) and refactor the
> connection race handling a bit.  I think that will ultimately solve some
> bugs and will simplify the Messenger greatly, but it's a painful refactor.


You just mean here when we're selecting the correct network to use for
OSD<->OSD communications? Hmm, I'm tempted to say we could do
something pretty simple by setting a priority for the different
addresses, but you're right, there are some issues with reconnect,
especially in cases where one network stops working.
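
A sketch of what pulling that state up into the consumer might look like,
with illustrative names (connect_to() stands in for whatever connect path
the messenger would expose):

    #include <map>
    // Peer-connection state owned by the consumer (e.g. OSDService)
    // rather than the messenger, keyed by osd id instead of peer_addr.
    std::map<int, ConnectionRef> osd_peers;

    ConnectionRef get_osd_peer(int osd, const entity_addr_t& addr) {
      auto p = osd_peers.find(osd);
      if (p != osd_peers.end() && p->second->is_connected())
        return p->second;
      // Reconnect policy (and the connection-race handling Sage
      // mentions) would now live here, not in the messenger.
      ConnectionRef c = connect_to(addr);  // hypothetical helper
      osd_peers[osd] = c;
      return c;
    }
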

>
>
> > But I'm curious what brings this up. I thought you were working on the
> > v2 protocol — does something in that somehow hinge on the server-side
> > implementation of multiple networks?
>
> The v2 protocol will run on another port and we want to answer on the old
> port+protocol for old clients too.


This is a rather narrower case than the above, though, right? Because
we don't need to worry about cross-messenger reconnects (the client
will be old and use the old messenger/port, or new and use the new
one) so we can put in a very simple shim.
-Greg
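
For reference, a minimal sketch of such a shim, with illustrative create()
arguments: two listening messengers, one per wire protocol, both feeding
the same dispatcher, so old clients keep the old port and new clients use
the new one:

    // No cross-messenger reconnects are needed: each client sticks to
    // its own protocol and port for the lifetime of the session.
    Messenger *v1 = Messenger::create(cct, "async", name, "legacy", nonce);
    Messenger *v2 = Messenger::create(cct, "async", name, "msgr2", nonce);
    v1->bind(legacy_addr);            // old port, old protocol
    v2->bind(v2_addr);                // new port, v2 protocol
    v1->add_dispatcher_head(daemon);  // same Dispatcher behind both
    v2->add_dispatcher_head(daemon);
    v1->start();
    v2->start();
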