Help: Balancing Ceph OSDs with different capacity

Hi

I have recently onboarded new OSDs into my Ceph cluster. Originally I had
44 OSDs of 1.7 TiB each, which had been in use for about a year. Roughly a
year ago we onboarded an additional 20 OSDs of 14 TiB each.
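
For reference, these are the commands I use to check the CRUSH weights of the
two OSD sizes (no cluster-specific values here; the actual numbers are in the
attached report):

  ceph osd tree        # CRUSH weight and reweight per OSD (~1.7 vs ~14)
  ceph osd crush tree  # same weights, aggregated per host/root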

However, I observed that much of the data was still being written to the
original 1.7 TiB OSDs instead of the 14 TiB ones. Over time this caused a
bottleneck, as the 1.7 TiB OSDs reached nearfull capacity.
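
This is how I have been checking per-OSD utilization and PG counts (commands
only; the output is what the attached report is based on):

  ceph osd df          # %USE and PGS columns per OSD
  ceph osd df tree     # same data, grouped by the CRUSH hierarchy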

I have tried performing a reweight (both crush reweight and reweight) to
reduce the number of PGs on each 1.7 TiB OSD. This worked temporarily, but it
resulted in many objects being misplaced and PGs going into a warning state.
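
For reference, the reweight commands I ran were along these lines (osd.12 and
the weight values are only placeholders, not the exact values I used):

  # permanent CRUSH weight, normally the OSD's size in TiB
  ceph osd crush reweight osd.12 1.7

  # temporary override weight between 0 and 1, applied on top of the CRUSH weight
  ceph osd reweight osd.12 0.85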

Subsequently, I also tried the crush-compat balancer mode instead of upmap,
but did not see a significant improvement. The latest change I made was to
set the backfill threshold to 0.85, hoping that PGs would no longer be
assigned to OSDs above 85% utilization. However, this did not change the
situation much, as I still see many OSDs above 85% utilization today.
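
In case it helps, these are roughly the commands I used for the balancer and
the threshold change (the threshold command is from memory, so please treat it
as an approximation of what I actually ran):

  ceph balancer mode crush-compat    # switched from upmap
  ceph balancer on
  ceph balancer status

  # cluster-wide backfillfull threshold
  ceph osd set-backfillfull-ratio 0.85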

Attached is a report from the ceph report command. For now I have marked OUT
two of my OSDs that had reached 95% capacity. I would greatly appreciate it if
someone could provide advice on this matter.
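
For completeness, marking the OSDs out and checking the current ratios looked
roughly like this (OSD IDs are placeholders):

  ceph osd out osd.3 osd.7    # data gets remapped off these OSDs
  ceph osd dump | grep ratio  # shows full_ratio, backfillfull_ratio, nearfull_ratio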

Thanks
Jasper Tan




