Re: uneven data fill on OSDs

Blair - Thanks for the script... Btw, does this script have an option for a dry run?

Thanks
Swami

On Wed, Jun 8, 2016 at 6:35 AM, Blair Bethwaite
<blair.bethwaite@xxxxxxxxx> wrote:
> Swami,
>
> Try https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py;
> it works with Firefly and lets you tune down the weight of only a
> specific number of overfull OSDs.
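>
> If you want to see what options it supports (I haven't confirmed the
> exact flag names here, so check the script's own help output or read
> the source), something like:
>
>     git clone https://github.com/cernceph/ceph-scripts.git
>     cd ceph-scripts/tools
>     python crush-reweight-by-utilization.py --help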
>
> Cheers,
>
> On 7 June 2016 at 23:11, M Ranga Swami Reddy <swamireddy@xxxxxxxxx> wrote:
>> OK, understood...
>> To clear the nearfull warning, I am reducing the weight of a specific
>> OSD that is filled >85%.
>> Is this workaround advisable?
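>>
>> For reference, the commands look like this (the OSD id and the new
>> weight below are only illustrative values):
>>
>>     ceph osd reweight 12 0.85   # lower the override weight of osd.12
>>     ceph -s                     # then watch the rebalance and the nearfull warning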
>>
>> Thanks
>> Swami
>>
>> On Tue, Jun 7, 2016 at 6:37 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>>> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>>>> Hi Sage,
>>>> >Jewel and the latest hammer point release have an improved
>>>> >reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
>>>> > run) to correct this.
>>>>
>>>> Thank you... but we are not planning to upgrade the cluster soon.
>>>> In that case, are there any tunable options that would help, such as
>>>> "crush tunables optimal"?
>>>> Or would any other configuration change help?
>>>
>>> Firefly also has reweight-by-utilization... it's just a bit less friendly
>>> than the newer versions.  CRUSH tunables don't generally help here unless
>>> you have lots of OSDs that are down+out.
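>>>
>>> For example (the argument is the utilization threshold as a percentage
>>> of the cluster average; 120 is the default):
>>>
>>>     # reweights any OSD above 120% of the average utilization;
>>>     # note there is no dry-run form of this command on firefly
>>>     ceph osd reweight-by-utilization 120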
>>>
>>> Note that firefly is no longer supported.
>>>
>>> sage
>>>
>>>
>>>>
>>>>
>>>> Thanks
>>>> Swami
>>>>
>>>>
>>>> On Tue, Jun 7, 2016 at 6:00 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>>>> > On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>>>> >> Hello,
>>>> >> I have around 100 OSDs in my ceph cluster. A few of the OSDs are filled
>>>> >> with >85% data, while the rest are filled with only ~60-70%.
>>>> >>
>>>> >> Any reason why this uneven OSD filling happened? Do I need to make any
>>>> >> configuration tweaks to fix it? Please advise.
>>>> >>
>>>> >> PS: Ceph version is 0.80.7
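>>>> >>
>>>> >> For reference, per-OSD usage on 0.80.x can be read from the osdstat
>>>> >> section of "ceph pg dump" ("ceph osd df" is not available in this
>>>> >> release), e.g.:
>>>> >>
>>>> >>     ceph pg dump osds    # kb_used / kb_avail per OSD
>>>> >>
>>>> >> (if the "osds" argument is not accepted on this version, the plain
>>>> >> "ceph pg dump" output contains the same osdstat section)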
>>>> >
>>>> > Jewel and the latest hammer point release have an improved
>>>> > reweight-by-utilization (ceph osd test-reweight-by-utilization ... to dry
>>>> > run) to correct this.
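>>>> >
>>>> > For example:
>>>> >
>>>> >     ceph osd test-reweight-by-utilization   # dry run: show the proposed weight changes
>>>> >     ceph osd reweight-by-utilization        # apply them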
>>>> >
>>>> > sage
>>>> >
>
>
>
> --
> Cheers,
> ~Blairo


