firefly osds stuck in state booting

The output you have provided shows that the OSDs are not IN. Try the following:

ceph osd in osd.0
ceph osd in osd.1

service ceph start osd.0
service ceph start osd.1

If you have one more host with one disk, add it; starting with Ceph Firefly, the default replication size is 3.
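
If a third host is not available for this test setup, another option (a sketch only, assuming the three pools are the Firefly defaults data, metadata and rbd; check the actual names with 'ceph osd lspools') is to lower the replication size of the existing pools to 2 so the PGs can become active with just two OSDs:

ceph osd lspools
ceph osd pool set data size 2
ceph osd pool set metadata size 2
ceph osd pool set rbd size 2

Either way, the PGs will stay in "creating" until the OSDs are marked up and in.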


- Karan -

On 27 Jul 2014, at 11:17, 10 minus <t10tennn at gmail.com> wrote:

> Hi Sage, 
> 
> I have cleared all the flags with 'unset' and even restarted the OSDs.
> No dice; the OSDs are still stuck.
> 
> 
> 
> --snip--
>  ceph daemon osd.0 status
> { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
>   "osd_fsid": "1ad28bde-c23c-44ba-a3b7-0aaaafd3372e",
>   "whoami": 0,
>   "state": "booting",
>   "oldest_map": 1,
>   "newest_map": 24,
>   "num_pgs": 0}
> 
> [root at ceph2 ~]#  ceph daemon osd.1 status
> { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
>   "osd_fsid": "becc3252-6977-47d6-87af-7b1337e591d8",
>   "whoami": 1,
>   "state": "booting",
>   "oldest_map": 1,
>   "newest_map": 21,
>   "num_pgs": 0}
>  --snip--
> 
> --snip-- 
> ceph osd tree                                                                                                                      
> # id    weight  type name       up/down reweight
> -1      2       root default
> -3      1               host ceph1
> 0       1                       osd.0   down    0
> -2      1               host ceph2
> 1       1                       osd.1   down    0
> 
>  --snip--
> 
> --snip--
>  ceph -s
>     cluster 2929fa80-0841-4cb6-a133-90b2098fc802
>      health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
>      monmap e2: 3 mons at {ceph0=10.0.12.220:6789/0,ceph1=10.0.12.221:6789/0,ceph2=10.0.12.222:6789/0}, election epoch 50, quorum 0,1,2 ceph0,ceph1,ceph2
>      osdmap e24: 2 osds: 0 up, 0 in
>       pgmap v25: 192 pgs, 3 pools, 0 bytes data, 0 objects
>             0 kB used, 0 kB / 0 kB avail
>                  192 creating
> --snip--
> 
> 
> 
> 
> On Sat, Jul 26, 2014 at 5:57 PM, Sage Weil <sweil at redhat.com> wrote:
> On Sat, 26 Jul 2014, 10 minus wrote:
> > Hi,
> >
> > I just set up a test Ceph installation on three CentOS 6.5 nodes.
> > Two of the nodes are used for hosting OSDs and the third acts as a mon.
> >
> > Please note I'm using LVM, so I had to set up the OSDs using the manual install
> > guide.
> >
> > --snip--
> > ceph -s
> >     cluster 2929fa80-0841-4cb6-a133-90b2098fc802
> >      health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean;
> > noup,nodown,noout flag(s) set
> >      monmap e2: 3 mons at {ceph0=10.0.12.220:6789/0,ceph1=10.0.12.221:6789/0,ceph2=10.0.12.222:6789/0
> > }, election epoch 46, quorum 0,1,2 ceph0,ceph1,ceph2
> >      osdmap e21: 2 osds: 0 up, 0 in
> >             flags noup,nodown,noout
>                     ^^^^
> 
> Do 'ceph osd unset noup' and they should start up.  You likely also want
> to clear nodown and noout as well.
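> For example, all three flags can be cleared with:
> 
>   ceph osd unset noup
>   ceph osd unset nodown
>   ceph osd unset noout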
> 
> sage
> 
> 
> >       pgmap v22: 192 pgs, 3 pools, 0 bytes data, 0 objects
> >             0 kB used, 0 kB / 0 kB avail
> >                  192 creating
> > --snip--
> >
> > osd tree
> >
> > --snip--
> > ceph osd tree
> > # id    weight  type name       up/down reweight
> > -1      2       root default
> > -3      1               host ceph1
> > 0       1                       osd.0   down    0
> > -2      1               host ceph2
> > 1       1                       osd.1   down    0
> > --snip--
> >
> > --snip--
> >  ceph daemon osd.0 status
> > { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
> >   "osd_fsid": "1ad28bde-c23c-44ba-a3b7-0aaaafd3372e",
> >   "whoami": 0,
> >   "state": "booting",
> >   "oldest_map": 1,
> >   "newest_map": 21,
> >   "num_pgs": 0}
> >
> > --snip--
> >
> > --snip--
> >  ceph daemon osd.1 status
> > { "cluster_fsid": "99babb8f-c880-4b32-a227-94aa483d4871",
> >   "osd_fsid": "becc3252-6977-47d6-87af-7b1337e591d8",
> >   "whoami": 1,
> >   "state": "booting",
> >   "oldest_map": 1,
> >   "newest_map": 21,
> >   "num_pgs": 0}
> > --snip--
> >
> > # CPUs are idling
> >
> > # Does anybody know what is wrong?
> >
> > Thanks in advance
> >
> >
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
