Re: Full OSD questions

I don't see anything very wrong here.
Try renaming your racks from numbers to unique strings. For example, change

rack 1 {

to

rack rack1 {

and do the same for the other racks, updating the matching "item" lines in the root bucket as well.
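
As a rough sketch (rack1/rack2/rack3 are only example names; ids and weights copied from the map you posted below), the rack buckets and the root entries could end up looking like this:

	rack rack1 {
		id -3		# do not change unnecessarily
		alg straw
		hash 0	# rjenkins1
		item celestia weight 7.600
	}
	rack rack2 {
		id -5		# do not change unnecessarily
		alg straw
		hash 0	# rjenkins1
		item luna weight 7.600
	}
	rack rack3 {
		id -7		# do not change unnecessarily
		alg straw
		hash 0	# rjenkins1
		item twilight weight 7.600
	}
	root default {
		id -1		# do not change unnecessarily
		alg straw
		hash 0	# rjenkins1
		item rack1 weight 7.600
		item rack2 weight 7.600
		item rack3 weight 7.600
	}

To apply the change, the usual decompile/edit/recompile cycle should work (assuming the standard getcrushmap/crushtool/setcrushmap workflow; file names are just examples):

	ceph osd getcrushmap -o crushmap.bin
	crushtool -d crushmap.bin -o crushmap.txt
	# edit crushmap.txt: rename the rack buckets and the matching item lines in root default
	crushtool -c crushmap.txt -o crushmap.new
	ceph osd setcrushmap -i crushmap.new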

On 09.09.2013, at 23:56, Gaylord Holder wrote:

> Thanks for your assistance.
> 
> Crush map:
> 
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
> 
> # devices
> device 0 osd.0
> device 1 osd.1
> device 2 osd.2
> device 3 osd.3
> device 4 osd.4
> device 5 osd.5
> device 6 osd.6
> device 7 osd.7
> device 8 osd.8
> device 9 osd.9
> device 10 osd.10
> device 11 osd.11
> 
> # types
> type 0 osd
> type 1 host
> type 2 rack
> type 3 row
> type 4 room
> type 5 datacenter
> type 6 root
> 
> # buckets
> host celestia {
> 	id -2		# do not change unnecessarily
> 	# weight 7.600
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.0 weight 1.900
> 	item osd.1 weight 1.900
> 	item osd.2 weight 1.900
> 	item osd.3 weight 1.900
> }
> rack 1 {
> 	id -3		# do not change unnecessarily
> 	# weight 7.600
> 	alg straw
> 	hash 0	# rjenkins1
> 	item celestia weight 7.600
> }
> host luna {
> 	id -4		# do not change unnecessarily
> 	# weight 7.600
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.5 weight 1.900
> 	item osd.6 weight 1.900
> 	item osd.7 weight 1.900
> 	item osd.4 weight 1.900
> }
> rack 2 {
> 	id -5		# do not change unnecessarily
> 	# weight 7.600
> 	alg straw
> 	hash 0	# rjenkins1
> 	item luna weight 7.600
> }
> host twilight {
> 	id -6		# do not change unnecessarily
> 	# weight 7.600
> 	alg straw
> 	hash 0	# rjenkins1
> 	item osd.8 weight 1.900
> 	item osd.10 weight 1.900
> 	item osd.11 weight 1.900
> 	item osd.9 weight 1.900
> }
> rack 3 {
> 	id -7		# do not change unnecessarily
> 	# weight 7.600
> 	alg straw
> 	hash 0	# rjenkins1
> 	item twilight weight 7.600
> }
> root default {
> 	id -1		# do not change unnecessarily
> 	# weight 22.800
> 	alg straw
> 	hash 0	# rjenkins1
> 	item 1 weight 7.600
> 	item 2 weight 7.600
> 	item 3 weight 7.600
> }
> 
> # rules
> rule data {
> 	ruleset 0
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take default
> 	step chooseleaf firstn 0 type host
> 	step emit
> }
> rule metadata {
> 	ruleset 1
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take default
> 	step chooseleaf firstn 0 type host
> 	step emit
> }
> rule rbd {
> 	ruleset 2
> 	type replicated
> 	min_size 1
> 	max_size 10
> 	step take default
> 	step chooseleaf firstn 0 type host
> 	step emit
> }
> 
> # end crush map
> 
> The full osds are 2 and 10.
> 
> -Gaylord
> 
> On 09/09/2013 03:49 PM, Timofey wrote:
>> Show crush map please
>> 
>> On 09.09.2013, at 21:32, Gaylord Holder <gholder@xxxxxxxxxxxxx> wrote:
>> 
>>> I'm starting to load up my ceph cluster.
>>> 
>>> I currently have 12 2TB drives (10 up and in, 2 defined but down and out).
>>> 
>>> rados df
>>> 
>>> says I have 8TB free, but I have 2 nearly full OSDs.
>>> 
>>> I don't understand how/why these two disks are filled while the others are relatively empty.
>>> 
>>> How do I tell ceph to spread the data around more, and why isn't it already doing it?
>>> 
>>> Thank you for helping me understand this system better.
>>> 
>>> Cheers,
>>> -Gaylord
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com