OSDs down following ceph-deploy guide

Hello,

I'm trying out Ceph for the first time, following the installation guide using ceph-deploy. Everything goes well at first and "ceph -s" reports health OK, but shortly afterwards it shows all placement groups as inactive, and both OSDs are down and out.
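
If it helps with diagnosis, these are the sort of checks I can run and post output from (just the standard status commands, nothing exotic):

    ceph -s
    ceph osd tree
    ceph osd stat
    less /var/log/ceph/ceph-osd.0.log    # on the OSD host

I just wasn't sure what to look for first.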

I understand this could happen for a variety of reasons, so a quick question: I've read in another mail that you have to manually mount the partitions on reboot, and that this is not mentioned in the guide. I'm testing this at a cloud provider and wanted to try Ceph with 2 servers on the local filesystem, i.e. I just created the directories under /var/local/osd0 as in the tutorial, without attaching a storage volume to the host and mounting it (rough steps below). Is this possible, or does Ceph always require partitions to be used for OSDs?
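
For reference, the OSD steps I followed from the quick start guide were roughly the following (node1/node2 stand in for my two hosts):

    # on each OSD host, as in the guide
    sudo mkdir /var/local/osd0    # and /var/local/osd1 on the second host

    # from the admin node
    ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1
    ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1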

Could that be the reason for this failure?

Cheers
Dimitris