How to create multiple OSDs per host?


 



I've tried using ceph-deploy, but it wants to assign the same ID to each OSD, so I end up with a bunch of "prepared" ceph-disks and only one "active". If I use the manual "short form" method, the activate step fails and there are no XFS mount points on the ceph-disks. If I use the manual "long form", I seem closest to getting active ceph-disks/OSDs, but the monitor always shows the OSDs as "down/in", and the ceph-disks don't persist across a reboot.
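[For reference, the ceph-deploy path described above took one HOST:DISK[:JOURNAL] argument per OSD in this era of Ceph, so each data disk should be prepared with its own journal partition. A minimal sketch, assuming a hypothetical host "node1", two HDDs (/dev/sdb, /dev/sdc), and journal partitions /dev/sdf1 and /dev/sdf2 on a shared SSD:]

```shell
# Hypothetical host and device names; each OSD gets its own HDD plus
# a dedicated journal partition on the shared SSD. Run from the admin node.
ceph-deploy osd prepare  node1:/dev/sdb:/dev/sdf1
ceph-deploy osd prepare  node1:/dev/sdc:/dev/sdf2

# Activate against the data partition that prepare created on each disk.
ceph-deploy osd activate node1:/dev/sdb1:/dev/sdf1
ceph-deploy osd activate node1:/dev/sdc1:/dev/sdf2
```

[Reusing the same journal partition for two prepare calls is one way to end up with duplicate IDs and disks stuck in "prepared"; each OSD needs a distinct journal target.]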

Does anyone know of a document that explains, step by step, how to bring up multiple OSDs per host, with one HDD plus an SSD journal partition per OSD?
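[For context, the manual "long form" mentioned above boils down to the following sequence per OSD. This is a sketch based on the manual-deployment procedure of the time, assuming a running monitor, a hypothetical data partition /dev/sdb1, and an SSD journal partition /dev/sdf1; IDs are allocated by the cluster, not chosen by hand:]

```shell
# One iteration of the long form, repeated once per OSD on the host.
UUID=$(uuidgen)
OSD_ID=$(ceph osd create "$UUID")        # cluster allocates the next free id

mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
mkfs.xfs /dev/sdb1                       # hypothetical HDD data partition
mount /dev/sdb1 /var/lib/ceph/osd/ceph-$OSD_ID

# Point the journal at the SSD partition before mkfs so it is used from the start.
ln -s /dev/sdf1 /var/lib/ceph/osd/ceph-$OSD_ID/journal

ceph-osd -i $OSD_ID --mkfs --mkkey --osd-uuid "$UUID"
ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' \
    -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring

# Register the OSD in the CRUSH map so it can go "up/in".
ceph osd crush add osd.$OSD_ID 1.0 host=$(hostname -s)

service ceph start osd.$OSD_ID
```

[Two common causes of the symptoms described: OSDs staying "down/in" usually means the daemon never started or cannot reach the monitor after the CRUSH step, and mounts not surviving a reboot means the data partitions also need entries in /etc/fstab (or the GPT partition type GUIDs that udev/ceph-disk use for auto-mounting).]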
Thanks,
Bruce

