Re: RFC - "Connection Groups" concept

On 06/27/2013 10:58 AM, Stephan von Krawczynski wrote:
On Thu, 27 Jun 2013 10:37:23 -0400
Joe Landman <landman@xxxxxxxxxxxxxxxxxxxxxxx> wrote:


No.  One of the largest issues we and our customers have had for
years is that gluster volume creation is tightly tied to single IP
addresses.  This makes multihomed usage, well, problematic at best.
Worse than this is the use of a DNS name (or other name) which,
exactly as Jeff indicates, tightly ties the brick/mount point to a
particular interface.

Please explain your terminology. "Multihomed" in a provider context
means you have multiple external connections with _static_ IPs (maybe
the same AS, maybe different). Are you using "multihomed" as a synonym
for "dynamic IP" here? If so, let me ask: do you think the vast
majority of the users of a filesystem run _server nodes_ with dynamic
IPs?

I gather from your question that you have not set up a multi-homed system, with multiple network connections to different elements of a network and a common storage pool shared amongst the multiple independent subnets.

Short version:

case 1: A brick defined as x.y.z.t is invisible from the other NIC a.b.c.d unless you do some heroic routing. This means using IP addresses to set up multiply connected network bricks is a complete and absolute non-starter.
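
To make case 1 concrete, here is a minimal Python sketch (all
addresses invented for illustration) of why a brick pinned to one
subnet's address is off-link for a client on another subnet:

    import ipaddress

    # Hypothetical addresses standing in for x.y.z.t and a.b.c.d above.
    brick_addr = ipaddress.ip_address("10.0.0.10")       # brick pinned to x.y.z.t
    client_net = ipaddress.ip_network("192.168.1.0/24")  # the a.b.c.d NIC's subnet

    # The brick's address is not on the client's subnet, so the client
    # can only reach the brick via routing between the two networks.
    if brick_addr not in client_net:
        print("brick is off-link for this client; inter-subnet routing required")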

case 2: A brick defined as brick.stora.ge via DNS mapping. For each net connecting via a separate NIC, you need a *different* DNS response, which forces you into split-horizon (or multi-horizon) DNS if you want to have a prayer of making this work "right". Just don't do a DNS update though, because it's not atomic and Bad Things Will Happen(TM).
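
Split horizon here amounts to returning a different answer for the
same name depending on which network the query comes from. A rough
Python sketch of that per-subnet resolution (names and addresses are
made up; this is not an actual DNS configuration):

    import ipaddress

    # One answer for brick.stora.ge per attached network ("view").
    views = {
        ipaddress.ip_network("10.0.0.0/24"):    "10.0.0.10",
        ipaddress.ip_network("192.168.1.0/24"): "192.168.1.10",
    }

    def resolve(name, client_ip):
        addr = ipaddress.ip_address(client_ip)
        for net, answer in views.items():
            if addr in net:
                return answer
        raise LookupError("no view matches %s asking for %s" % (client_ip, name))

    print(resolve("brick.stora.ge", "192.168.1.55"))  # -> 192.168.1.10

And note that swapping the views table in place is exactly the
non-atomic update that bites you.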

I can't figure out why you went to dynamic IPs.  No matter.

UUIDs that handle the mapping for us, so we don't have to worry about effectively breaking other technologies to accommodate this, are to be strongly encouraged.

As Joe noted in a separate response, the correct way to handle this is with objects, where UUIDs identify the objects and you can attach object metadata which, when provided to the servers/native clients, helps address these issues.
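
As a hedged sketch of what such an object might look like (the field
names below are invented, not any actual gluster format): the brick is
keyed by a UUID, and the per-network addresses live in metadata, so a
client can pick the address on its own subnet instead of having a
single IP or name baked into the volume definition.

    import ipaddress
    import uuid

    # Illustrative brick object: identified by UUID, with one address
    # per attached network carried as metadata.
    brick = {
        "uuid": str(uuid.uuid4()),
        "path": "/data/brick1",
        "addresses": {
            "10.0.0.0/24":    "10.0.0.10",
            "192.168.1.0/24": "192.168.1.10",
        },
    }

    def address_for(brick, client_ip):
        """Return the brick address on the client's own subnet."""
        addr = ipaddress.ip_address(client_ip)
        for net, brick_addr in brick["addresses"].items():
            if addr in ipaddress.ip_network(net):
                return brick_addr
        raise LookupError("client has no direct path to this brick")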

FWIW, we filed a bug w.r.t. the multi-homed issues several years ago (pre-Red Hat days). I think it was eventually closed without being fixed, but the net of it is that the same issues we discussed then are being discussed now, and are as important, if not more important, as you scale up and have multiple networks attaching to the storage. You need a scalable mechanism for growth and management, and it needs to abstract some of the underlying bits. UUIDs do this (when they are object identifiers).

[request]
If at all possible, please keep these objects (or make them) JSON objects, so we can interact with them programmatically.
[/request]
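
By way of example, the sort of programmatic interaction meant here
(same illustrative object as above, keys invented):

    import json

    brick = {
        "uuid": "d9b2d63d-a233-4123-847a-0f9e2c6f2e4b",
        "path": "/data/brick1",
        "addresses": {
            "10.0.0.0/24": "10.0.0.10",
            "192.168.1.0/24": "192.168.1.10",
        },
    }

    # Emit the object as JSON so external tooling can consume it.
    print(json.dumps(brick, indent=2))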

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman@xxxxxxxxxxxxxxxxxxxxxxx
web  : http://scalableinformatics.com
       http://scalableinformatics.com/siflash
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


