Uniquely identifying a Ceph client

Hello,

Is there a consistent, reliable way to identify a Ceph client? I'm looking for a string/ID (UUID, for example) that can be traced back to a client doing RBD maps.

There are a couple of possibilities out there, but they aren't quite what I'm looking for.  When checking "rbd status", for example, the output is the following:

# rbd status travis2
Watchers:
watcher=172.21.12.10:0/1492902152 client.4100 cookie=1
# rbd status travis3
Watchers:
watcher=172.21.12.10:0/1492902152 client.4100 cookie=2
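
As a side note, "rbd status" can emit JSON, which makes the watcher fields easier to pull out in a script. A rough sketch, assuming jq is available (the field names may vary by Ceph release):

# rbd status travis2 --format json | jq -r '.watchers[].address'
172.21.12.10:0/1492902152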


The IP:port/nonce string is an option, and so is the "client.<num>" string, but neither is actually that helpful, because they don't match the strings shown when an advisory lock is added to the RBD images. For example:

# rbd lock list travis2
There is 1 exclusive lock on this image.
Locker      ID     Address
client.4201 test 172.21.12.100:0/967432549
# rbd lock list travis3
There is 1 exclusive lock on this image.
Locker      ID     Address
client.4240 test 172.21.12.10:0/2888955091
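
Both commands take --format json, so in theory the correlation could be scripted. A sketch, again assuming jq, and with the caveat that I haven't verified the lock-list JSON field names, so the paths may need adjusting:

# comm -12 <(rbd status travis2 --format json | jq -r '.watchers[].address' | sort) \
           <(rbd lock list travis2 --format json | jq -r '.[].address' | sort)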

Note that neither the nonce nor the client ID matches -- so you can't correlate the "rbd lock" output with the output from "rbd status". I believe this is because the nonce and the client ID reflect the CephX session between client and cluster: while that session is persistent across "rbd map" calls (the rbd kmod uses a shared session by default, though that can be changed as well), each call to "rbd lock" initiates a new session -- hence a new nonce and client ID.
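
(For what it's worth, I believe the shared-session behavior is what krbd's "noshare" map option controls -- if I understand it correctly, something like the following forces a fresh session, and hence a fresh nonce and client ID, per map:)

# rbd map travis2 -o noshare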

That pretty much leaves the IP address, which would seem to be problematic as an identifier if the client happened to be behind NAT.

I am trying to definitively determine which client has an RBD mapped and locked, but I'm not seeing a way to guarantee that a client has been uniquely identified. Am I missing something obvious?
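
One workaround I've been considering: "rbd lock add" accepts an arbitrary lock-id string, so the client could embed a stable identifier of its own choosing in the lock ID. A rough sketch (using machine-id here, but anything stable and unique per client would do):

# rbd lock add travis2 "host-$(cat /etc/machine-id)"

That would cover the lock side, but it still doesn't tie the lock back to the watcher doing the map.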

Perhaps my concern about NAT is overblown -- I've never mounted an RBD from a client that is behind NAT, and I'm not sure how common that would be (though I think it would work).

 - Travis
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
