Re: dealing with the full osd / help reweight


 



On 03/29/2016 11:35 AM, Christian Balzer wrote:

> Hello,
>
> On Tue, 29 Mar 2016 10:32:35 +0200 Jacek Jarosiewicz wrote:


> I very specifically and intentionally wrote "ceph osd crush reweight" in
> my reply above.
> While your current state of affairs is better, it is not permanent ("ceph
> osd reweight" settings are lost if an OSD is set out) and what I outlined
> should have left you with nearly perfect CRUSH weight ratios.
>
> Oh well, since you're already far down that path, continue until the
> respective ratios (aka %USE in the output above) are as close to each
> other as possible.


Yes, yes, I'm aware of that, but when I first read your message we were already down the temporary path and mostly focused on bringing the full OSD back up. We will adjust the crush weights according to disk capacities once we have a healthy cluster.
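
For anyone following the thread, the two commands being contrasted are roughly the following (the OSD name and the weights are just example values, not our actual drives):

   # persistent, stored in the CRUSH map; by convention roughly the drive size in TB
   ceph osd crush reweight osd.3 4.0

   # temporary 0.0-1.0 override, lost again if the OSD is marked out
   ceph osd reweight 3 0.85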


> You might want to disable scrubbing (normal one) as well for the duration.


done.
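
For reference, the flags in question, to be unset again (ceph osd unset ...) once the cluster is healthy:

   ceph osd set noscrub
   ceph osd set nodeep-scrub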


>> I'd like to set the crush weights to correct values (size in TB) - all
>> in one move - but I'm afraid it will result in a lot of data movement.
>
> If your ratios are correct at that time, it will be very little; the heavy
> lifting is mostly what you're doing now.


ok, that's nice.
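
A convenient sanity check before that final move, assuming a release recent enough to have it, is watching the %USE column of:

   ceph osd df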

>> So - assuming all goes well and the cluster will be in HEALTH_OK state
>> within a day or two - what would you recommend doing first - increasing
>> the pgs on the pools with most data (and is it safe to go from a low
>> number like 64 to 1024 in one step, or should we do this step by step -
>> by a factor of two)?
>
> Recent versions of Ceph won't allow you to do large increases anyway
> (doubling at most, I think), so obviously the latter.
> And yes, this will cause MASSIVE data movement, but it will also reduce
> the amount of data moving around (smaller PGs) in the last step.
> I would do this first.


OK, I'll start with the PG increase before the crush reweight, doubling pg_num each time.
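
Per pool that would be something along these lines (pool name and numbers below are placeholders for our actual pools), letting things settle in between:

   ceph osd pool set rbd pg_num 128
   ceph osd pool set rbd pgp_num 128
   # wait for the data movement to finish, then double again
   ceph osd pool set rbd pg_num 256
   ceph osd pool set rbd pgp_num 256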

>> Or should we first adjust crush weights and then increase pgs?
>> When adjusting crush weights should we reset the "reweight" to 1.0 or
>> should it be set to the number of TBs per drive as well?
>
> "Subcommand reweight reweights osd to 0.0 < <weight> < 1.0."
> As I said, you shouldn't have used that; it's a temporary crutch at best.
>
> So as I wrote originally: set nobackfill, adjust all crush weights, set
> all osd reweights to 1, unset nobackfill and enjoy the show.
> Which will be a tiny show: the closer to equal your ratios were before
> that, the smaller it gets (see above).
>
> Christian
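
Translating that into commands, the plan looks roughly like this (the OSD names, the 4.0 weight and the number of OSDs are placeholders for our actual drives):

   ceph osd set nobackfill
   # CRUSH weight = drive size in TB, for every OSD in turn
   ceph osd crush reweight osd.0 4.0
   ceph osd crush reweight osd.1 4.0
   # ... and so on for the remaining OSDs
   # reset the temporary overrides back to 1
   ceph osd reweight 0 1.0
   ceph osd reweight 1 1.0
   # ... and so on for the remaining OSDs
   ceph osd unset nobackfill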


OK, all clear now. Thanks for the help!

J

--
Jacek Jarosiewicz
IT Systems Administrator

----------------------------------------------------------------------------------------
SUPERMEDIA Sp. z o.o., registered office in Warsaw
ul. Senatorska 13/15, 00-075 Warszawa
District Court for the Capital City of Warsaw, XII Commercial Division of the National Court Register,
KRS no. 0000029537; share capital PLN 44,556,000.00
NIP (tax ID): 957-05-49-503
Mailing address: ul. Jubilerska 10, 04-190 Warszawa

----------------------------------------------------------------------------------------
SUPERMEDIA ->   http://www.supermedia.pl
internet access - hosting - colocation - data links - telephony
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



