Re: un-even data filled on OSDs

Hello,
How many PGs do your pools have? At first glance, they look too large.
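For reference, one quick way to check the per-pool PG counts is something
like the following ("rbd" is only an example pool name):

    ceph osd dump | grep pg_num      # pg_num / pgp_num for every pool
    ceph osd pool get rbd pg_num     # pg_num for a single pool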

--
Regards,
Corentin BONNETON


On 7 June 2016 at 15:21, Sage Weil <sage@xxxxxxxxxxxx> wrote:

On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
OK, understood...
To fix the nearfull warning, I am reducing the weight of the specific OSD
that is more than 85% full.
Is this workaround advisable?

Sure.  This is what reweight-by-utilization does for you, but
automatically.
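For reference, a minimal sketch of both approaches; the OSD id and the
numbers are only illustrative:

    # Manual: lower the override weight of one overfull OSD (range 0.0-1.0).
    ceph osd reweight 12 0.9

    # Automatic: reweight any OSD whose utilization is more than 20% above
    # the cluster average (the argument is a percentage of the average).
    ceph osd reweight-by-utilization 120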

sage


Thanks
Swami

On Tue, Jun 7, 2016 at 6:37 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
Hi Sage,
Jewel and the latest hammer point release have an improved
reweight-by-utilization (ceph osd test-reweight-by-utilization ... for a
dry run) to correct this.

Thank you... but we are not planning to upgrade the cluster soon.
So, in this case, are there any tunable options that would help, like
"crush tunables optimal" or similar?
Or would any other configuration change help?

Firefly also has reweight-by-utilization... it's just a bit less friendly
than the newer versions.  CRUSH tunables don't generally help here unless
you have lots of OSDs that are down+out.
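For reference, on firefly the workflow might look roughly like this; the
threshold 110 (percent of average utilization, must be above 100) is only
illustrative, and since there is no dry-run command it is worth recording
the current weights first:

    ceph osd tree                         # note the current REWEIGHT values
    ceph osd crush show-tunables          # inspect the CRUSH tunables in use
    ceph osd reweight-by-utilization 110  # reweight OSDs >10% above average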

Note that firefly is no longer supported.

sage




Thanks
Swami


On Tue, Jun 7, 2016 at 6:00 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
On Tue, 7 Jun 2016, M Ranga Swami Reddy wrote:
Hello,
I have around 100 OSDs in my Ceph cluster. A few of these OSDs are filled
with >85% of data, and a few OSDs are filled with only ~60%-70%.

Is there any reason why this uneven OSD filling happened? Do I need to make
any configuration tweaks to fix it? Please advise.

PS: The Ceph version is 0.80.7.

Jewel and the latest hammer point release have an improved
reweight-by-utilization (ceph osd test-reweight-by-utilization ... for a
dry run) to correct this.
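For reference, the newer workflow might look like the following; the
arguments (overload threshold as a percent of average utilization, maximum
weight change per OSD, maximum number of OSDs to adjust) are illustrative:

    # Dry run: report which OSDs would be reweighted, changing nothing.
    ceph osd test-reweight-by-utilization 120 0.05 10

    # Apply the same adjustment for real.
    ceph osd reweight-by-utilization 120 0.05 10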

sage





