This is another error I get while trying to activate the disk:

[ceph@MYOPTPDN16 ~]$ sudo ceph-disk activate /dev/sdl1
2016-06-29 11:25:17.436256 7f8ed85ef700 0 -- :/1032777 >> 10.115.1.156:6789/0 pipe(0x7f8ed4021610 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8ed40218a0).fault
2016-06-29 11:25:20.436362 7f8ed84ee700 0 -- :/1032777 >> 10.115.1.156:6789/0 pipe(0x7f8ec4000c00 sd=6 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8ec4000e90).fault
^Z
[2]+  Stopped                 sudo ceph-disk activate /dev/sdl1

Best Regards,
Ranjit
+91-9823240750

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx]
On Behalf Of Pisal, Ranjit Dnyaneshwar

Hi,

I am stuck at one point while adding a new OSD host to the existing Ceph cluster. I tried multiple combinations for creating OSDs on the new host, but every time it fails during disk activation: no OSD partition (/var/lib/ceph/osd/ceph-xxx) gets created; instead, a temporary mount (/var/lib/ceph/tmp/bhbjnk.mnt) is created. The host has a combination of SSD and SAS disks. The SSDs are partitioned for use as journals.

The sequence I tried to add the new host is as follows:

1. Installed the Ceph RPMs on the new host.

After this I also tried to install ceph-deploy and prepare the new host using the commands below, repeating the above steps, but it still failed at the same point of disk activation.

ceph-deploy install new Host
Attached logs for reference. Please assist with any known workaround/resolution.
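[Editor's note] For context, the typical ceph-deploy OSD workflow of that era (Hammer/Jewel) with a separate SSD journal looked roughly like the sketch below. The hostname is taken from the shell prompt in this thread; the journal partition path /dev/sdm1 is purely an assumption for illustration, not from the original message, and this is a sketch of the usual flow rather than a confirmed fix.

```shell
# Hammer/Jewel-era ceph-deploy OSD workflow, run from the admin node.
# MYOPTPDN16 is the host seen in this thread's prompt; /dev/sdm1 as the
# SSD journal partition is a hypothetical placeholder.
ceph-deploy install MYOPTPDN16                         # install ceph packages on the host
ceph-deploy disk zap MYOPTPDN16:/dev/sdl               # wipe the data disk
ceph-deploy osd prepare MYOPTPDN16:/dev/sdl:/dev/sdm1  # data disk : journal partition
ceph-deploy osd activate MYOPTPDN16:/dev/sdl1          # mount the OSD partition and start the daemon
```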
Thanks
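[Editor's note] The repeated "pipe(...).fault" lines in the reply above indicate the client could not reach the monitor at 10.115.1.156:6789, which is a common cause of ceph-disk activate hanging (a down monitor daemon or a firewall blocking port 6789). A minimal reachability probe, assuming bash and coreutils timeout are available (the check_mon helper is hypothetical, not part of Ceph):

```shell
# Probe whether a Ceph monitor's TCP port is reachable from this host.
# Uses bash's /dev/tcp pseudo-device; times out after 3 seconds.
check_mon() {
  # $1 = monitor host, $2 = monitor port
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "mon $1:$2 reachable"
  else
    echo "mon $1:$2 unreachable"
  fi
}

check_mon 10.115.1.156 6789
```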
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com