Ceph's data distribution is handled by the CRUSH algorithm. While your use case is simple, the algorithm itself is complex because it has to handle complex scenarios. The variable you have access to is the CRUSH weight of each OSD. If one OSD, like ceph-3, holds more data than the rest and another, like ceph-2, holds less, you can increase the CRUSH weight of ceph-2 to have CRUSH assign it more placement groups, or inversely reduce the weight of ceph-3 to have it lose some placement groups.
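Before touching anything it helps to see the current CRUSH weights and per-OSD utilization side by side; a minimal example, assuming a Hammer-or-later release where `ceph osd df tree` is available:

    # Per-OSD CRUSH weight, override reweight, raw use and %USE,
    # grouped under each host in the CRUSH hierarchy.
    ceph osd df tree

    # The CRUSH hierarchy and weights on their own, for comparison.
    ceph osd tree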
As has already been mentioned, to do this by hand you would use the command `ceph osd crush reweight osd.<osd_id> <new_weight>`. You don't want to adjust these weights by more than about 0.05 at a time. Every time you change the weight of something in the CRUSH map, your cluster will start backfilling until the data is where the updated CRUSH map says it should be. To have Ceph attempt this for you, you would use `ceph osd reweight-by-utilization`. I don't have experience with that method, but a lot of the community uses it.
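As a rough sketch of what that workflow looks like in practice (the OSD IDs and weights below are illustrative, not taken from this cluster, and the dry-run command assumes a Jewel-or-later release):

    # Nudge the fullest OSD's CRUSH weight down by ~0.05; 3.64 is just
    # an example starting weight for a 4 TB drive, so check yours first
    # with `ceph osd df tree`.
    ceph osd crush reweight osd.3 3.59

    # Let backfill finish and the cluster return to HEALTH_OK before
    # making the next adjustment.
    watch ceph -s

    # Or let Ceph choose the adjustments. The optional threshold
    # (default 120) means "only touch OSDs more than 20% above average
    # utilization"; test-reweight-by-utilization is a dry run, if your
    # release has it.
    ceph osd test-reweight-by-utilization 110
    ceph osd reweight-by-utilization 110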
From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Volkov Pavel [volkov@xxxxxxxxxx]
Sent: Monday, December 05, 2016 3:11 AM
To: 'John Petrini'
Cc: 'ceph-users'
Subject: Re: [ceph-users] ceph - even filling disks

OSDs of different sizes are used for different tasks, such as cache. My concern is the 4 TB OSDs used as a storage pool: their used space is not the same.
/dev/sdf1  4.0T  1.7T  2.4T  42%  /var/lib/ceph/osd/ceph-4
/dev/sdd1  4.0T  1.7T  2.4T  41%  /var/lib/ceph/osd/ceph-2
/dev/sdb1  4.0T  1.9T  2.2T  46%  /var/lib/ceph/osd/ceph-0
/dev/sde1  4.0T  2.1T  2.0T  51%  /var/lib/ceph/osd/ceph-3
/dev/sdc1  4.0T  1.8T  2.3T  45%  /var/lib/ceph/osd/ceph-1
For example, /dev/sdd1 is at 41% while /dev/sde1 is at 51%, and they belong to the same pool. Is there an option that can be used to fill the OSDs evenly?
From: John Petrini [mailto:jpetrini@xxxxxxxxxxxx]
You can reweight the OSDs either automatically based on utilization (ceph osd reweight-by-utilization) or by hand.
See:
It's probably not ideal to have OSDs of such different sizes on a node.
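To be clear, those are two different knobs: `ceph osd crush reweight` changes the permanent, capacity-based weight stored in the CRUSH map, while `ceph osd reweight` sets a temporary override between 0.0 and 1.0, and it is this override that reweight-by-utilization adjusts. A small illustration with made-up values:

    # Override weight (0.0-1.0): dropping osd.3 to 0.95 remaps roughly
    # 5% of its placement groups elsewhere; this is the value that
    # reweight-by-utilization tunes.
    ceph osd reweight 3 0.95

    # CRUSH weight: the long-term weight in the map, normally set to
    # the drive's capacity in TiB (the value here is only an example).
    ceph osd crush reweight osd.3 3.64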
___ John Petrini NOC Systems Administrator // CoreDial, LLC // coredial.com
On Fri, Dec 2, 2016 at 12:36 AM, Волков Павел (Мобилон) <volkov@xxxxxxxxxx> wrote:
Good day. I have set up a Ceph storage cluster and created several pools on 4 TB HDDs. My problem is that the HDDs are filling unevenly.
root@ceph-node1:~# df -H
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda1    236G  2.7G   221G    2%  /
none         4.1k     0   4.1k    0%  /sys/fs/cgroup
udev          30G  4.1k    30G    1%  /dev
tmpfs        6.0G  1.1M   6.0G    1%  /run
none         5.3M     0   5.3M    0%  /run/lock
none          30G  8.2k    30G    1%  /run/shm
none         105M     0   105M    0%  /run/user
/dev/sdf1    4.0T  1.7T   2.4T   42%  /var/lib/ceph/osd/ceph-4
/dev/sdg1    395G  329G    66G   84%  /var/lib/ceph/osd/ceph-5
/dev/sdi1    195G  152G    44G   78%  /var/lib/ceph/osd/ceph-7
/dev/sdd1    4.0T  1.7T   2.4T   41%  /var/lib/ceph/osd/ceph-2
/dev/sdh1    395G  330G    65G   84%  /var/lib/ceph/osd/ceph-6
/dev/sdb1    4.0T  1.9T   2.2T   46%  /var/lib/ceph/osd/ceph-0
/dev/sde1    4.0T  2.1T   2.0T   51%  /var/lib/ceph/osd/ceph-3
/dev/sdc1    4.0T  1.8T   2.3T   45%  /var/lib/ceph/osd/ceph-1
On the test machine this leads to an overflow error (CDM) and the cluster then behaves incorrectly. How can I make all the HDDs fill equally?