Re: Data distribution

On 06/25/2011 08:48 PM, Martin Wilderoth wrote:
Hello

I have a ceph cluster of 6 OSDs, 146 GB each. I have copied a lot of data,
filling the cluster to 87%. The data is not evenly distributed between the OSDs:

host1
/dev/sdb              137G  119G   15G  90% /data/osd0
/dev/sdc              137G  126G  7.4G  95% /data/osd1

host2
/dev/sdc              137G  114G   21G  85% /data/osd2
/dev/sdd              137G  130G  3.6G  98% /data/osd3

host3
/dev/sdb              137G  107G   27G  81% /data/osd4
/dev/sdc              137G   98G   36G  74% /data/osd5

During the copy I got an I/O error, but after restarting the cluster it seems fine.

For some reason osd3 seems to have much more data than osd5. Is there a way of getting the data distributed better?

Hi Martin,

Since the distribution is pseudo-random, you'll get some variance from an even split. You can reweight the OSDs manually with:

ceph osd reweight 3 <new_weight>
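
For example, to shift data off the fullest disk in the listing above (osd3 at 98%), a reduced weight might look like this (the value 0.9 is an illustration, not a recommendation from this thread):

ceph osd reweight 3 0.9

The reweight value is a fraction between 0 and 1, with 1.0 the default; lowering it makes CRUSH map proportionally fewer placement groups to that OSD.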

or use the more automatic:

ceph osd reweight-by-utilization 110

This reduces the weight of any OSD whose utilization is more than 110% of the average utilization.
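To make that concrete with the df output above (a rough back-of-the-envelope check using the df percentages as a stand-in for the utilization ceph computes internally): the mean utilization is (90 + 95 + 85 + 98 + 81 + 74) / 6, or about 87%, so with a threshold of 110 the cutoff is roughly 87% * 1.10, or about 96%. Only osd3 at 98% is above that line, so it would be the only OSD reweighted; osd1 at 95% just escapes.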

Josh