Re: osd not in tree

On Sat, 17 Nov 2012, Drunkard Zhang wrote:
> 2012/11/17 Sage Weil <sage@xxxxxxxxxxx>:
> > Hi,
> >
> > Okay, it looks like something in the past added the host entry but for some
> > reason didn't give it a parent.  Did you previously modify the crush map
> > by hand, or did you only manipulate it via the 'ceph osd crush ...'
> > commands?
> >
> > Unfortunately, the fix is to edit it manually.
> >
> > ceph osd getcrushmap -o /tmp/foo
> > crushtool -d /tmp/foo -o /tmp/foo.txt
> > edit foo.txt: remove the host bucket (squid87-log13) and all of its
> > children.
> > crushtool -c /tmp/foo.txt -o /tmp/foo.new
> > ceph osd setcrushmap -i /tmp/foo.new
> >
> > and then you can re-run those 'ceph osd crush set ...' commands and you'll
> > be back in business.
> >
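For reference, the host bucket to delete in foo.txt looks roughly like this
(the id and weights here match the tree shown further down; the exact alg/hash
lines may differ in an actual map):

 host squid87-log13 {
         id -4           # do not change unnecessarily
         # weight 12.000
         alg straw
         hash 0  # rjenkins1
         item osd.11 weight 3.000
         item osd.12 weight 3.000
         item osd.13 weight 3.000
         item osd.14 weight 3.000
 }
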
> Great, it works. Now I remember: I was trying to add the host by hand after
> 'ceph osd crush set ...' failed (maybe my command was wrong).
> 
> Another problem: is there any way to change the 'rack'? Or how do I
> create a new rack? If I have to create a new rack first, then I made a
> mistake, or maybe hit a bug again.
> 
> Before changing the rack:
> log3 ~ # ceph osd tree
> dumped osdmap tree epoch 611
> # id  weight  type name                 up/down  reweight
> -1    45      pool default
> -3    45          rack rack0205
> -2    33              host log3
> 0     3                   osd.0         up       3
> 1     3                   osd.1         up       3
> 2     3                   osd.2         up       3
> 3     3                   osd.3         up       3
> 4     3                   osd.4         up       3
> 5     3                   osd.5         up       3
> 6     3                   osd.6         up       3
> 7     3                   osd.7         up       3
> 8     3                   osd.8         up       3
> 9     3                   osd.9         up       3
> 10    3                   osd.10        up       3
> -4    12              host squid87-log13
> 11    3                   osd.11        up       3
> 12    3                   osd.12        up       3
> 13    3                   osd.13        up       3
> 14    3                   osd.14        up       3
> 
> Looks reasonable, but I mis-set log3's rack; I want to change
> rack=rack0205 to rack=rack0206. I tried to reset it with this command,
> but it doesn't work:
> for i in {0..10}; do ceph osd crush set $i osd.$i 3 pool=data
> datacenter=dh-1L room=room1 row=02 rack=rack0206 host=log3; done

'ceph osd crush set ...' will only move the device itself; it won't move 
any of its parents.  If you want to move a non-leaf item in the tree, use 
'ceph osd crush move <name> <location ...>'.  Something like

 ceph osd crush move squid87-log13 rack=rack0206 pool=default
 ceph osd crush move log3 rack=rack0206 pool=default
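
If both moves succeed, 'ceph osd tree' should afterwards show both hosts under
the new rack, roughly like this (the new rack's bucket id here is illustrative):

 # id  weight  type name                 up/down  reweight
 -1    45      pool default
 -5    45          rack rack0206
 -2    33              host log3
 0     3                   osd.0         up       3
 ...
 -4    12              host squid87-log13
 11    3                   osd.11        up       3
 ...

The old rack0205 bucket is simply left empty; it can be deleted from the map
later or left in place, since no data will map to it.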

sage
