admin socket for OpenStack client vanishes

Hi,

I am trying to get detailed information about the RBD images used by OpenStack (read/write operations, throughput, ...).

On the mailing list I found a hint that this is possible via the client's admin socket [1], so I enabled the socket on one of my compute hosts according to [2]. The documentation states that the socket should appear once I restart the VM. At some point it actually does appear, but it vanishes again within a second or two; if I keep monitoring the directory, I see it show up for roughly 1-2 seconds per minute.
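
For reference, the [client] section I put into ceph.conf on the compute node is more or less the example from [2]; treat this as a sketch of my config rather than a verbatim copy, the cache settings are not the point here:

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true
        admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/qemu/qemu-guest-$pid.log

The /var/run/ceph/guests directory exists and is writable, as [2] also describes.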

The socket looks like this:
root@compute01:/var/run/ceph/guests# ls -l
srwxr-xr-x 1 cinder cinder  0 Aug 29 17:54 ceph-client.cinder.2772108.94507439454256.asok
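
In the brief window where the socket exists, I would expect to be able to query it like this (socket name taken from above, a perf dump being what I am ultimately after):

    ceph --admin-daemon /var/run/ceph/guests/ceph-client.cinder.2772108.94507439454256.asok perf dump

My understanding is that the dump should contain per-image librbd counters (read/write operations and bytes), which is exactly the data I am looking for.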

Does anyone know what I am doing wrong?

Or is there another way to find out which RBD image is causing the most load on the cluster?

Regards,
Georg


[1] http://webcache.googleusercontent.com/search?q=cache%3Ahttp%3A%2F%2Flists.ceph.com%2Fpipermail%2Fceph-users-ceph.com%2F2018-July%2F028408.html (the mail archive from before 2019 seems to be inaccessible, so I am linking the Google cache as a fallback)

[2] https://docs.ceph.com/docs/mimic/rbd/rbd-openstack/#configuring-nova
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



