Re: Getting placement groups to place evenly (again)

On Thu, Apr 16, 2015 at 8:02 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> Since I now realize you did a bunch of reweighting to try and make
> data match up I don't think you'll find something like badly-sized
> LevelDB instances, though.

It's certainly something I can check, just to be sure.  Erm, what does
a LevelDB instance look like?  After poking around in the contents of
one of the OSD directories, I see what look like the PG directories
under current/, but nothing labelled "LevelDB" jumps out.

> Final possibility which I guess hasn't been called out here is to make
> sure that your CRUSH map is good and actually expected to place things
> evenly. Can you share it?

Here is the crush map:

http://pastebin.com/yBtZFM6r

It's pretty much the default; I don't think we (are smart enough to)
have done anything custom to it.
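
(Dumped with the usual pair of commands, in case anyone wants to
reproduce it; crush.bin and crush.txt are just scratch filenames:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
)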

> Since you've got 38 OSDs and 8 nodes some of the hosts are clearly
> different sizes; is there any correlation between which size the node
> is and how full its OSDs are?

Not really.  Of the eight nodes, seven have 4 OSDs and one (the very
newest) has 10.  Here's the OSD tree:

# id   weight  type name        up/down  reweight
-1     13.4    root default
-2     1.4       host f13
0      0.35        osd.0        up       0.8704
1      0.35        osd.1        up       1
2      0.35        osd.2        up       0.9035
3      0.35        osd.3        up       0.8978
-3     1.4       host f14
4      0.35        osd.4        up       1
9      0.35        osd.9        up       0.904
10     0.35        osd.10       up       0.7823
11     0.35        osd.11       up       1
-4     1.4       host f15
5      0.35        osd.5        up       0.9359
6      0.35        osd.6        up       0.9
7      0.35        osd.7        up       1
8      0.35        osd.8        up       0.85
-5     1.4       host f19
12     0.35        osd.12       up       0.9395
17     0.35        osd.17       up       1
18     0.35        osd.18       up       0.8853
21     0.35        osd.21       up       0.8863
-6     1.4       host f20
13     0.35        osd.13       up       1
15     0.35        osd.15       up       1
19     0.35        osd.19       up       0.9268
22     0.35        osd.22       up       0.9398
-7     1.4       host f21
14     0.35        osd.14       up       0.9157
16     0.35        osd.16       up       1
20     0.35        osd.20       up       0.9452
23     0.35        osd.23       up       0.9
-8     1.4       host f22
24     0.35        osd.24       up       1
25     0.35        osd.25       up       1
26     0.35        osd.26       up       0.75
27     0.35        osd.27       up       0.8424
-9     3.6       host f23
28     0.36        osd.28       up       1
29     0.36        osd.29       up       1
30     0.36        osd.30       up       1
31     0.36        osd.31       up       1
32     0.36        osd.32       up       0.9486
33     0.36        osd.33       up       1
34     0.36        osd.34       up       1
35     0.36        osd.35       up       1
36     0.36        osd.36       up       0.8206
37     0.36        osd.37       up       1
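
(In case it helps anyone else eyeball this sort of thing, here's a
rough sketch for counting how many PGs land on each OSD.  I'm assuming
"ceph pg dump --format json" exposes a "pg_stats" list whose entries
carry an "acting" set -- I believe it does on recent releases, but
check yours.  A big spread in PG counts at equal weights would explain
uneven utilization on its own.

    #!/usr/bin/env python
    # Count PGs per OSD from "ceph pg dump --format json" to see
    # how evenly CRUSH is spreading them.
    import json
    import subprocess
    from collections import Counter

    dump = json.loads(subprocess.check_output(
        ["ceph", "pg", "dump", "--format", "json"]))

    counts = Counter()
    for pg in dump["pg_stats"]:
        for osd in pg["acting"]:   # OSDs currently serving this PG
            counts[osd] += 1

    for osd, n in sorted(counts.items()):
        print("osd.%d: %d PGs" % (osd, n))
)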

For the time being it seems like we have found the magic reweights.
Given that average utilization is at 79%, we are not in a rush to add
more content until more nodes arrive, so things are holding fairly
steady, with all but two OSDs between 71% and 85%.  The two outliers
are at 87% and... 60%.
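
(For what it's worth, "ceph osd reweight-by-utilization" is supposed
to automate this kind of tuning -- if I understand it right,

    ceph osd reweight-by-utilization 110

reweights any OSD running at more than 110% of the average
utilization.  We arrived at our numbers by hand, so treat that as
untested on our end.)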

Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



