Re: Uniquely identifying a Ceph client

On Tue, Nov 1, 2016 at 11:45 AM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Tue, 1 Nov 2016, Travis Rhoden wrote:
>> Hello,
>> Is there a consistent, reliable way to identify a Ceph client? I'm looking
>> for a string/ID (UUID, for example) that can be traced back to a client
>> doing RBD maps.
>>
>> There are a couple of possibilities out there, but they aren't quite what
>> I'm looking for.  When checking "rbd status", for example, the output is the
>> following:
>>
>> # rbd status travis2
>> Watchers:
>> watcher=172.21.12.10:0/1492902152 client.4100 cookie=1
>> # rbd status travis3
>> Watchers:
>> watcher=172.21.12.10:0/1492902152 client.4100 cookie=2
>>
>>
>> The IP:port/nonce string is an option, and so is the "client.<num>" string,
>> but neither of these is actually that helpful, because they don't stay the
>> same once an advisory lock is added to the RBD images. For example:
>
> Both are sufficient.  The <num> in client.<num> is the most concise and is
> unique per client instance.
>
> I think the problem you're seeing is actually that qemu is using two
> different librbd/librados instances, one for each mapped device?

Not using qemu in this scenario.  Just rbd map && rbd lock.  It's more
that I can't match the output from "rbd lock" against the output from
"rbd status", because they are using different librados instances.
I'm just trying to capture who has an image mapped and locked, and to
those not in the know, it would be a surprise that client.<num> and
client.<num2> are actually the same host. :)
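
To illustrate for anyone reading along, here is a minimal python-rados
sketch (assuming the default /etc/ceph/ceph.conf path) of why that
happens: every new librados connection is handed a fresh global id,
i.e. the <num> in client.<num>, even when both connections come from
the same host.

import rados

# Two independent librados connections from the same host.
c1 = rados.Rados(conffile='/etc/ceph/ceph.conf')
c2 = rados.Rados(conffile='/etc/ceph/ceph.conf')
c1.connect()
c2.connect()

# Each connection is assigned its own global instance id -- the <num>
# in client.<num> -- so these two lines print different ids.
print("client.%d" % c1.get_instance_id())
print("client.%d" % c2.get_instance_id())

c1.shutdown()
c2.shutdown()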

I understand why it is; I was just checking to see if there was another
field or indicator that I should use instead. I think I'm just going
to have to use the IP address, because that's the value that will have
real meaning to people.
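
Since the address has the form IP:port/nonce in both outputs, pulling the
IP out is trivial -- a sketch (the helper name is mine):

def watcher_ip(addr):
    # Keep everything before the last ':', so a bracketed IPv6 address
    # like '[2001:db8::1]:0/123' should also come out intact.
    return addr.rsplit(':', 1)[0]

assert watcher_ip('172.21.12.10:0/2888955091') == '172.21.12.10'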

Thanks!

>
>> # rbd lock list travis2
>> There is 1 exclusive lock on this image.
>> Locker      ID     Address
>> client.4201 test 172.21.12.100:0/967432549
>> # rbd lock list travis3
>> There is 1 exclusive lock on this image.
>> Locker      ID     Address
>> client.4240 test 172.21.12.10:0/2888955091
>>
>> Note that neither the nonce nor the client ID match -- so by looking at the
>> rbd lock output, you can't match that information against the output from
>> "rbd status". I believe this is because the nonce the client identifier is
>> reflecting the CephX session between client and cluster, and while this is
>> persistent across "rbd map" calls (because the rbd kmod has a shared session
>> by default, though that can be changed as well), each call to "rbd lock"
>> initiates a new session. Hence a new nonce and client ID.
>>
>> That pretty much leaves the IP address, which would seem to be problematic
>> as an identifier if the client happened to be behind NAT.
>>
>> I am trying to definitively determine which client has an RBD mapped
>> and locked, but I'm not seeing a way to guarantee that you've uniquely
>> identified a client. Am I missing something obvious?
>>
>> Perhaps my concern about NAT is overblown -- I've never mounted an RBD from
>> a client that is behind NAT, and I'm not sure how common that would be
>> (though I think it would work).
>
> It should work, but it's untested.  :)
>
> sage
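
Putting that together, here is a rough sketch of the correlation I have
in mind, using the python-rbd bindings (the pool and image names below
are placeholders): list the lockers on an image and key on the IP rather
than on client.<num> or the nonce.

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')  # placeholder pool name

with rbd.Image(ioctx, 'travis2') as image:  # placeholder image name
    # list_lockers() describes the advisory locks on the image; guard
    # against an empty result before picking out the locker tuples.
    lockers = image.list_lockers() or {}
    for locker_id, cookie, addr in lockers.get('lockers', []):
        # Compare by IP only; the client id and nonce belong to a
        # different librados session than the one holding the watch.
        print(locker_id, cookie, addr.rsplit(':', 1)[0])

ioctx.close()
cluster.shutdown()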