Strange client admin socket error in a containerized Ceph environment

I keep getting the following error message:

2018-08-30 18:52:37.882 7fca9df7c700 -1 asok(0x7fca98000fe0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17) File exists

Otherwise, things seem to be fine.
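
If I am reading the error right, (17) is EEXIST, i.e. the client tries to bind its admin socket but a socket file is already sitting at that path, presumably created by (or still in use by) another client process sharing /var/run/ceph. A way to check whether anything is still listening on it, assuming ss and lsof are available on the host:

sudo ss -xlp | grep ceph-client.admin.asok
sudo lsof /var/run/ceph/ceph-client.admin.asok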

I am running Mimic 13.2.1, deployed with ceph-ansible and running in Docker containers.
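
As far as I can tell, /var/run/ceph is bind-mounted from the host into the containers, so every daemon on a node shares the same socket directory (which matches the combined listing below). To see which Ceph containers are running on the affected node (exact container names depend on the ceph-ansible version):

docker ps --filter "name=ceph" --format "{{.Names}}\t{{.Status}}"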

ls -latr /var/run/ceph/ceph-*

srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.10.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.25.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.1.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.19.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.13.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.16.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.22.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:11 /var/run/ceph/ceph-osd.4.asok
srwxr-xr-x 1 167 167 0 Aug 30 16:12 /var/run/ceph/ceph-osd.7.asok
srwxr-xr-x 1 167 167 0 Aug 30 17:53 /var/run/ceph/ceph-mds.storage1n1-chi.asok
srwxr-xr-x 1 167 167 0 Aug 30 18:16 /var/run/ceph/ceph-mon.storage1n1-chi.asok
srwxr-xr-x 1 167 167 0 Aug 30 18:40 /var/run/ceph/ceph-mgr.storage1n1-chi.asok
srwxr-xr-x 1 167 167 0 Aug 30 18:43 /var/run/ceph/ceph-client.admin.asok
srwxr-xr-x 1 167 167 0 Aug 30 18:51 /var/run/ceph/ceph-client.rgw.storage1n1-chi.asok

The file /var/run/ceph/ceph-client.admin.asok only shows up on one node, and that is also the only node that shows the error. The cluster status otherwise looks fine:

ceph -s
2018-08-30 18:57:49.673 7f76457c9700 -1 asok(0x7f7640000fe0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17) File exists
  cluster:
    id:     7f2fcb31-655f-4fb5-879a-8d1f6e636f7a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum storage1n1-chi,storage1n2-chi,storage1n3-chi
    mgr: storage1n1-chi(active), standbys: storage1n3-chi, storage1n2-chi
    mds: cephfs-1/1/1 up  {0=storage1n3-chi=up:active}, 2 up:standby
    osd: 27 osds: 27 up, 27 in
    rgw: 3 daemons active

  data:
    pools:   7 pools, 608 pgs
    objects: 213  objects, 5.5 KiB
    usage:   892 GiB used, 19 TiB / 20 TiB avail
    pgs:     608 active+clean

This is a new cluster with no data yet. I have the dashboard enabled on the mgr, which runs on the node that displays the error.
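
Since the mgr, the rgw and any ceph CLI invocation on that node are all clients sharing the same /var/run/ceph, I am wondering whether giving client sockets a per-process path in ceph.conf on that node would avoid the collision. Something like the snippet below (untested on my side, just a sketch based on the metavariables in the Ceph configuration documentation):

[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

That should make each client process bind its own socket instead of everything contending for ceph-client.admin.asok, but I would appreciate confirmation before changing anything.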

Any help is greatly appreciated.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


