Re: [ceph-users] uneven data fill on OSDs

In my cluster:
 351 OSDs of the same size, with 8192 PGs per pool, and 60% RAW space used.
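
For reference, the rough PGs-per-OSD arithmetic and the commands to check it
(a sketch; the pool count, replica count, and pool name below are assumptions,
since they are not stated in this thread):

    # With, say, 2 pools of 8192 PGs each and 3x replication (both assumed):
    #   8192 PGs * 2 pools * 3 replicas / 351 OSDs ~= 140 PG copies per OSD
    # Confirm the per-pool PG count (<pool-name> is a placeholder):
    ceph osd pool get <pool-name> pg_num
    # List the OSDs behind any nearfull/full warnings:
    ceph health detail | grep -i full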

Thanks
Swami


On Tue, Jun 7, 2016 at 7:22 PM, Corentin Bonneton <list@xxxxxxxx> wrote:
> Hello,
> How many PGs do your pools have? At first glance, it looks like they have
> been left too large.
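>
> (A sketch of how to check this; <pool-name> is a placeholder:)
>
>     ceph osd pool get <pool-name> pg_num
>     ceph osd pool get <pool-name> pgp_num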
>
> --
> Regards,
> Corentin BONNETON
>
>
> On 7 Jun 2016, at 15:21, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>
> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>
> OK, understood...
> To clear the nearfull warning, I am reducing the weight of the specific
> OSDs that are filled >85%.
> Is this workaround advisable?
>
>
> Sure.  This is what reweight-by-utilization does for you, but
> automatically.
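>
> (A sketch of both approaches; the OSD id and weight here are illustrative
> values, not taken from the thread:)
>
>     # Manual: lower the override weight of one over-full OSD
>     # (override weights run 0.0-1.0; 1.0 means full weight).
>     ceph osd reweight 12 0.85
>     # Automatic: reweight any OSD whose utilization exceeds 120% of the
>     # cluster average (120 is the default threshold).
>     ceph osd reweight-by-utilization 120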
>
> sage
>
>
> Thanks
> Swami
>
> On Tue, Jun 7, 2016 at 6:37 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>
> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>
> Hi Sage,
>
> Jewel and the latest hammer point release have an improved
> reweight-by-utilization (ceph osd test-reweight-by-utilization ... for a
> dry run) to correct this.
>
>
> Thank you... but we are not planning to upgrade the cluster soon.
> In that case, are there any tunable options that would help, like
> "crush tunables optimal"?
> Or would any other configuration change help?
>
>
> Firefly also has reweight-by-utilization... it's just a bit less friendly
> than the newer versions.  CRUSH tunables don't generally help here unless
> you have lots of OSDs that are down+out.
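>
> (On firefly the invocation looks like this. A sketch: as far as I know,
> firefly has no dry-run variant, so the weight changes apply immediately;
> the argument is the percent-of-average-utilization threshold.)
>
>     ceph osd reweight-by-utilization 120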
>
> Note that firefly is no longer supported.
>
> sage
>
>
>
>
> Thanks
> Swami
>
>
> On Tue, Jun 7, 2016 at 6:00 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>
> On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
>
> Hello,
> I have aorund 100 OSDs in my ceph cluster. In this a few OSDs filled
> with >85% of data and few OSDs filled with ~60%-70% of data.
>
> Any reason why the unevenly OSDs filling happned? do I need to any
> tweaks on configuration to fix the above? Please advise.
>
> PS: The Ceph version is 0.80.7 (firefly).
>
>
> Jewel and the latest hammer point release have an improved
> reweight-by-utilization (ceph osd test-reweight-by-utilization ... for a
> dry run) to correct this.
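>
> (A sketch of the dry-run workflow on jewel or a recent hammer point
> release:)
>
>     # Show the proposed weight changes without applying anything:
>     ceph osd test-reweight-by-utilization
>     # Apply them once they look sane:
>     ceph osd reweight-by-utilization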
>
> sage
>


