Re: Getting placement groups to place evenly (again)

On Wed, Apr 22, 2015 at 2:16 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> Uh, looks like the contents of the "omap" directory (inside of
> "current") are the LevelDB store. :)

OK, here's du -sk of all of those:

36740 ceph-0/current/omap
35736 ceph-1/current/omap
37356 ceph-2/current/omap
38096 ceph-3/current/omap
30132 ceph-10/current/omap
32260 ceph-11/current/omap
33488 ceph-4/current/omap
37872 ceph-9/current/omap
35552 ceph-5/current/omap
39524 ceph-6/current/omap
34796 ceph-7/current/omap
32580 ceph-8/current/omap
36092 ceph-12/current/omap
34460 ceph-17/current/omap
28780 ceph-18/current/omap
36360 ceph-21/current/omap
41356 ceph-13/current/omap
40344 ceph-15/current/omap
38068 ceph-19/current/omap
31908 ceph-22/current/omap
34676 ceph-14/current/omap
33964 ceph-16/current/omap
42872 ceph-20/current/omap
39252 ceph-23/current/omap
39452 ceph-24/current/omap
42984 ceph-25/current/omap
38492 ceph-26/current/omap
40188 ceph-27/current/omap
35052 ceph-28/current/omap
42900 ceph-29/current/omap
37368 ceph-30/current/omap
42924 ceph-31/current/omap
39708 ceph-32/current/omap
42692 ceph-33/current/omap
37628 ceph-34/current/omap
30868 ceph-35/current/omap
46088 ceph-36/current/omap
39672 ceph-37/current/omap

At a glance, they appear to be roughly proportional to the amount of
data on each OSD.  E.g. ceph-35 is the ~60%-full OSD and it also has
one of the smallest omap directories.  But the overall size of these
(roughly 28-46 MiB) doesn't seem large enough to affect overall
available space.  So I think it is as you suspected and this is not
the problem.
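
(If anyone wants to cross-check omap size against OSD fullness on
their own cluster, a loop like the one below should do it.  This is
just a sketch and assumes the OSDs are mounted under
/var/lib/ceph/osd/ceph-*, which may not match other layouts:

  for d in /var/lib/ceph/osd/ceph-*; do
      # omap size in KiB, then the filesystem Use% for that OSD
      printf '%s\t%s\t%s\n' "$d" \
          "$(du -sk "$d/current/omap" | cut -f1)" \
          "$(df -kP "$d" | awk 'NR==2 {print $5}')"
  done | sort -k2 -n

It prints the OSD path, its omap size in KiB, and the filesystem
Use%, sorted by omap size, which makes the rough proportionality easy
to eyeball.)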

> Ah — I think you might be suffering from some of the issues that
> prompted the creation of the straw2 algorithm, since you have two
> close-but-different OSD sizes, a bunch of same-sized hosts, and one
> that's different.

This problem does predate the addition of the 10-disk node.  I.e. it
existed even when the cluster was 7 nodes with 4 identical OSDs each.

> I could be wrong, but whenever you do upgrade to hammer you might want
> to pay the data movement price of making that change. (There are
> discussions in the release notes and elsewhere about this that you can
> look up.)

Will do, but upgrading to hammer sounds like it exceeds our risk
threshold at the present time. :-(
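
(Noting it here for whenever we do get to hammer: as far as I can
tell from the docs, "making that change" is the usual decompile /
edit / recompile round-trip of the CRUSH map, roughly as below.  The
file names are just placeholders and I have not actually run this:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt, changing "alg straw" to "alg straw2"
  # in each bucket definition
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

Plus, of course, checking first that every client supports straw2
(the CRUSH_V4 feature, if I'm reading the release notes right), since
older clients can't decode the new map.)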

Thanks!  Your help looking into this is much appreciated!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com