Re: trouble with ceph-deploy

On Thu, 29 Aug 2013, Pavel Timoschenkov wrote:
> 
> Hi.
> New trouble with ceph-deploy. When I'm executing:
> 
> ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
> ceph-deploy osd activate ceph001:sdaa:/dev/sda1
> or
> ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
> ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1

Is the intention to use a partition on sdaa for data and one for the 
journal?  In that case, you just want

 ceph-deploy osd create ceph001:sdaa

There might also still be an issue with the odd device name, but I'm 
pretty sure it was sorted out; let us know if the above works.

Thanks!
sage


> 
> OSD not created:
> 
> ceph -k ceph.client.admin.keyring -s
>   cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
>    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
>    monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2,
> quorum 0 ceph001
>    osdmap e1: 0 osds: 0 up, 0 in
>     pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB
> avail
>    mdsmap e1: 0/0/1 up
> 
> ceph -k ceph.client.admin.keyring osd tree
> # id    weight  type name       up/down reweight
> -1      0       root default
> 
> but if I create a folder for the ceph data and execute:
> 
> ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
> ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
> 
> OSD created:
> 
> ceph -k ceph.client.admin.keyring -s
>   cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
>    health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
>    monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2,
> quorum 0 ceph001
>    osdmap e5: 1 osds: 1 up, 1 in
>     pgmap v6: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB
> avail
>    mdsmap e1: 0/0/1 up
> 
> ceph -k ceph.client.admin.keyring osd tree                   
> # id    weight  type name       up/down reweight
> -1      0.03999 root default
> -2      0.03999         host ceph001
> 0       0.03999                 osd.0   up      1
> 
> Is this a bug, or should I mount the data disks to some directory?
> 
> 
> and one more thing:
> The 'ceph-deploy osd create' construction doesn't work for me. Only
> 'prepare' and 'activate' do.
> 
> dpkg -s ceph-deploy
> Version: 1.2.1-1precise
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
