Re: Uneven data placement

On 03/17/2013 06:46 AM, Andrey Korolyov wrote:
Hi,

From 'ceph osd tree':

-16     4.95                    host 10.5.0.52
32      1.9                             osd.32  up      2
33      1.05                            osd.33  up      1
34      1                               osd.34  up      1
35      1                               osd.35  up      1

df -h:
/dev/sdd3 3.7T  595G  3.1T  16% /var/lib/ceph/osd/32
/dev/sde3 3.7T  332G  3.4T   9% /var/lib/ceph/osd/33
/dev/sdf3 3.7T  322G  3.4T   9% /var/lib/ceph/osd/34
/dev/sdg3 3.7T  320G  3.4T   9% /var/lib/ceph/osd/35

-10     2                       host 10.5.0.32
18      1                               osd.18  up      1
26      1                               osd.26  up      1

df -h:
/dev/sda2 926G  417G  510G  45% /var/lib/ceph/osd/18
/dev/sdb2 926G  431G  496G  47% /var/lib/ceph/osd/26

Since the OSDs on 10.5.0.32 almost certainly do not contain garbage
bytes, this looks like some weirdness in the placement. The CRUSH rules
are almost default; there is no adjustment by node subsets. Any
thoughts would be appreciated!

Hi Andrey,

How many PGs do you have in your pools?
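
A quick way to check (the pool name "rbd" below is just a placeholder):

# print pg_num/pgp_num for every pool
ceph osd dump | grep pg_num

# or query a single pool directly
ceph osd pool get rbd pg_num

For what it's worth, the weight ratio between your two hosts is 4.95:2,
about 2.5, while the observed usage ratio is roughly
(595+332+322+320):(417+431) = 1569:848, about 1.85, so 10.5.0.32 is
holding noticeably more than its weighted share. With too few PGs the
placement gets coarse and that kind of skew is common; the usual rule
of thumb is on the order of 100 PGs per OSD across all pools.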

Mark

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

