ceph - filling disks evenly

Good day.

I have set up a Ceph cluster and created several pools on 4TB HDDs. My problem is that the HDDs are filling unevenly.

root@ceph-node1:~# df -H
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       236G  2.7G  221G   2% /
none            4.1k     0  4.1k   0% /sys/fs/cgroup
udev             30G  4.1k   30G   1% /dev
tmpfs           6.0G  1.1M  6.0G   1% /run
none            5.3M     0  5.3M   0% /run/lock
none             30G  8.2k   30G   1% /run/shm
none            105M     0  105M   0% /run/user
/dev/sdf1       4.0T  1.7T  2.4T  42% /var/lib/ceph/osd/ceph-4
/dev/sdg1       395G  329G   66G  84% /var/lib/ceph/osd/ceph-5
/dev/sdi1       195G  152G   44G  78% /var/lib/ceph/osd/ceph-7
/dev/sdd1       4.0T  1.7T  2.4T  41% /var/lib/ceph/osd/ceph-2
/dev/sdh1       395G  330G   65G  84% /var/lib/ceph/osd/ceph-6
/dev/sdb1       4.0T  1.9T  2.2T  46% /var/lib/ceph/osd/ceph-0
/dev/sde1       4.0T  2.1T  2.0T  51% /var/lib/ceph/osd/ceph-3
/dev/sdc1       4.0T  1.8T  2.3T  45% /var/lib/ceph/osd/ceph-1
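
For reference, df only shows the filesystem view; the weights Ceph actually balances data against can be checked as below. This is a sketch, assuming a release new enough to have "ceph osd df" (Hammer and later); "ceph osd tree" works on any release:

root@ceph-node1:~# ceph osd tree   # CRUSH weight per OSD
root@ceph-node1:~# ceph osd df     # utilization vs. weight/reweight per OSD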

On the test machine this leads to a CDM overflow error and then to incorrect operation.

How can I make all the HDDs fill equally?
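
For completeness, a hedged sketch of the usual knobs, assuming the imbalance comes from CRUSH weights that do not match the mixed disk sizes shown above; the OSD id and weight values are illustrative, not prescriptive:

# Set an OSD's CRUSH weight proportional to its capacity in TiB,
# e.g. roughly 0.36 for a 395G disk and 3.6 for a 4.0T disk:
root@ceph-node1:~# ceph osd crush reweight osd.5 0.36
# Or let Ceph lower the override reweight of the most overfull OSDs:
root@ceph-node1:~# ceph osd reweight-by-utilization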

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
