Re: small cluster reboot fail


 



Disregard. I did this on a cluster of test VMs and didn't bother setting different hostnames, thus confusing Ceph.
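In case anyone else trips over the same thing: Ceph tells the mon and OSD daemons apart by hostname, so every node needs a unique one. Roughly what I should have done on each VM before deploying (mon1/mon2/mon3 are just placeholder names):

root@ubuntu:~# hostname                               # all three VMs report "ubuntu" here
root@ubuntu:~# echo mon1 > /etc/hostname              # pick a unique name per VM, e.g. mon1/mon2/mon3
root@ubuntu:~# hostname -F /etc/hostname              # apply it without waiting for a reboot
root@ubuntu:~# sed -i 's/ubuntu/mon1/g' /etc/hosts    # keep the 127.0.1.1 entry in sync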

On Wed, Jul 29, 2015 at 2:24 AM pixelfairy <pixelfairy@xxxxxxxxx> wrote:
I have a small test cluster (VMware Fusion, 3 mon+OSD nodes), all running Ubuntu Trusty. I tried rebooting all 3 nodes and this happened.

root@ubuntu:~# ceph --version
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)

root@ubuntu:~# ceph health
2015-07-29 02:08:31.360516 7f5bd711a700 -1 asok(0x7f5bd0000bf0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/rbd-clients/ceph-client.admin.3282.140032308415712.asok': (2) No such file or directory
HEALTH_WARN 64 pgs stuck unclean; recovery 512/1024 objects misplaced (50.000%); too few PGs per OSD (21 < min 30)
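The "too few PGs per OSD" warning at least has a simple cause: assuming all 64 PGs belong to the stock rbd pool, 64 PGs over 3 OSDs works out to the 21 per OSD in the message. Bumping the pool to 128 PGs should clear it, and creating the missing directory should quiet the admin socket error (128 and the pool name rbd are just the obvious guesses for this setup):

root@ubuntu:~# mkdir -p /var/run/ceph/rbd-clients     # directory the admin socket tried to bind in
root@ubuntu:~# ceph osd pool set rbd pg_num 128       # 128 PGs / 3 OSDs is about 42 per OSD
root@ubuntu:~# ceph osd pool set rbd pgp_num 128      # keep pgp_num in step with pg_num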
 
The OSD disks are only 50 GB, but they seemed to work fine before the reboot.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
