Re: uneven data fill on OSDs

Hi Sage,
> Jewel and the latest hammer point release have an improved
> reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
> run) to correct this.

Thank you. But we are not planning to upgrade the cluster soon.
In that case, are there any tunable options that would help, such as
"crush tunables optimal"?
Or would changing any other configuration option help?
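
For reference, the kind of manual rebalancing that is already possible on
Firefly looks roughly like the sketch below (osd.12, the 0.85 weight and
the 120 threshold are only placeholder values):

    # Check current utilization per OSD ('ceph osd df' only exists from
    # Hammer onwards; on Firefly the per-OSD stats from 'ceph pg dump osds'
    # can be used instead).
    ceph osd tree
    ceph pg dump osds

    # Lower the override weight (0.0 - 1.0) of an over-full OSD, e.g. osd.12;
    # this moves some PGs off it without touching its CRUSH weight.
    ceph osd reweight 12 0.85

    # The older reweight-by-utilization shipped with Firefly also works, but
    # has no dry-run mode; 120 means "reweight OSDs above 120% of the
    # average utilization".
    ceph osd reweight-by-utilization 120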


Thanks
Swami


On Tue, Jun 7, 2016 at 6:00 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>> Hello,
>> I have around 100 OSDs in my ceph cluster. A few of the OSDs are filled
>> with >85% of data, while a few others are only ~60%-70% full.
>>
>> Is there any reason why this uneven OSD filling happened? Do I need to
>> tweak any configuration to fix it? Please advise.
>>
>> PS: Ceph version is 0.80.7
>
> Jewel and the latest hammer point release have an improved
> reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
> run) to correct this.
>
> sage
>
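
For completeness, the dry-run workflow described above (available in Jewel
and the latest Hammer point release) looks roughly like this; the 120
threshold is only an example argument:

    # Report which OSDs would be reweighted, without changing anything.
    ceph osd test-reweight-by-utilization 120

    # Apply the adjustment once the proposed changes look reasonable.
    ceph osd reweight-by-utilization 120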


