Re: issue with activate osd in ceph with new partition created

Were any errors displayed when you ran "ceph-deploy osd prepare"?
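
For example, re-running the prepare step and then checking how ceph-disk sees the partition usually shows whether prepare completed cleanly (a rough sketch; the host and device names are taken from your log below):

ceph-deploy osd prepare ceph-admin:/dev/vdb1   # watch the output for WARN/ERROR lines
ceph-disk list                                 # run on the OSD node; shows how the partition is classified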

Best wishes,
Mika

2014-10-31 17:36 GMT+08:00 Subhadip Bagui <i.bagui@xxxxxxxxx>:
Hi,

Can anyone please help with this?

Regards,
Subhadip


-------------------------------------------------------------------------------------------------------------------

On Fri, Oct 31, 2014 at 12:51 AM, Subhadip Bagui <i.bagui@xxxxxxxxx> wrote:
Hi,

I'm new to Ceph and trying to install a cluster. I'm using a single server for both the monitor and the OSD. I've created one 100 GB partition at /dev/vdb1 with an ext4 filesystem and am trying to add it as an OSD via the Ceph monitor node. Whenever I try to activate the partition as an OSD block device I run into the issue below: the partition cannot be mounted at the default Ceph OSD mountpoint. Please let me know what I'm missing.
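
For reference, a basic sanity check of the partition from the OSD node could look something like this (a sketch; only /dev/vdb1 comes from this setup, the rest is standard tooling):

blkid /dev/vdb1                                          # filesystem type and UUID as seen by the OS
mount -t ext4 /dev/vdb1 /mnt && ls /mnt && umount /mnt   # verify the partition mounts at all
ceph-disk list                                           # partitions prepared by ceph-disk are usually listed as "ceph data"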

[root@ceph-admin my-cluster]# ceph-deploy osd activate ceph-admin:vdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.18): /usr/bin/ceph-deploy osd activate ceph-admin:vdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-admin:/dev/vdb1:
[ceph-admin][DEBUG ] connected to host: ceph-admin
[ceph-admin][DEBUG ] detect platform information from remote host
[ceph-admin][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-admin disk /dev/vdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-admin][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /dev/vdb1
[ceph-admin][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph-admin][INFO  ] checking OSD status...
[ceph-admin][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph-admin][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph-admin][INFO  ] Running command: chkconfig ceph on
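
The activate run appears to hang at the ceph-disk step ("No data was received after 300 seconds"); running that command directly on the node shows where it stops (copied verbatim from the log above):

ceph-disk -v activate --mark-init sysvinit --mount /dev/vdb1   # run as root on ceph-admin and watch the verbose output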

----

[root@ceph-admin my-cluster]# ceph status

2014-10-30 20:40:32.102741 7fcc7c591700  0 -- :/1003242 >> 10.203.238.165:6789/0 pipe(0x7fcc780204b0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fcc78020740).fault
2014-10-30 20:40:35.103348 7fcc7c490700  0 -- :/1003242 >> 10.203.238.165:6789/0 pipe(0x7fcc6c000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fcc6c000e90).fault
2014-10-30 20:40:38.103994 7fcc7c591700  0 -- :/1003242 >> 10.203.238.165:6789/0 pipe(0x7fcc6c003010 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fcc6c0032a0).fault
2014-10-30 20:40:41.104498 7fcc7c490700  0 -- :/1003242 >> 10.203.238.165:6789/0 pipe(0x7fcc6c0039d0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fcc6c003c60).fault
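
The repeated "pipe ... fault" lines indicate the client cannot reach the monitor at 10.203.238.165:6789. A quick check of whether the mon daemon is running and listening might look like this (a sketch; the mon name is assumed from the hostname):

ps aux | grep ceph-mon                 # is the monitor process running?
netstat -tlnp | grep 6789              # is anything listening on the monitor port?
service ceph status mon.ceph-admin     # sysvinit status for the assumed mon name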


Regards,
Subhadip

-------------------------------------------------------------------------------------------------------------------


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


