I have a small test cluster (VMware Fusion, 3 mon+osd nodes), all running Ubuntu Trusty. I tried rebooting all 3 nodes and this happened:
root@ubuntu:~# ceph --version
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
root@ubuntu:~# ceph health
2015-07-29 02:08:31.360516 7f5bd711a700 -1 asok(0x7f5bd0000bf0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/rbd-clients/ceph-client.admin.3282.140032308415712.asok': (2) No such file or directory
HEALTH_WARN 64 pgs stuck unclean; recovery 512/1024 objects misplaced (50.000%); too few PGs per OSD (21 < min 30)
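I suspect the bind failure is just because the socket's parent directory disappeared across the reboot (on Trusty /var/run is a tmpfs, so anything created there by hand is gone after a restart). A rough sketch of what I was going to try, assuming the path from the error above is the one my clients are configured to use:

  # recreate the runtime socket directory that the client admin socket expects
  mkdir -p /var/run/ceph/rbd-clients
  # then re-run the health check to see whether the asok warning goes away
  ceph health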
The OSD disks are only 50 GB each, but they seemed to work fine before the reboot.
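For the "too few PGs per OSD (21 < min 30)" part, I'm assuming I can just raise pg_num on the pool; this is what I had in mind (the pool name "rbd" is a guess on my part, whatever lspools reports would be the real one):

  # list pools to find the right name (I'm assuming the default 'rbd' pool here)
  ceph osd lspools
  # raise the placement group count, then pgp_num to match so data actually rebalances
  ceph osd pool set rbd pg_num 128
  ceph osd pool set rbd pgp_num 128

Does that sound right, or is the 50% misplaced objects after the reboot a separate problem?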