Re: Automate PGs calculation in Ceph?

> On Nov 18, 2016, at 6:11 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> 
> On Fri, Nov 18, 2016 at 1:48 PM, Sebastien Han <shan@xxxxxxxxxx> wrote:
>> Thanks John, that makes sense: since we miss the user's overall intent,
>> there is no point in doing this within the mons :).
>> It would be nice to have something a bit more dynamic though, where we
>> could decrease PGs based on the topology and the current number of
>> pools... Of course this would trigger data movement, but that is not an
>> issue for Ceph :).
> 
> Yeah, being able to decrease pg_num (i.e. do pg joins on the OSDs)
> would solve a lot of usability problems: currently we have no choice
> but to educate users about PGs on day 1 in order for them to decide
> how many they want in a pool.  If we could auto-guess, and then
> decrease later if we get it wrong, then most users would not need to
> know what a PG is to do normal operations.
> 
> I would envision any auto-decrease in pg_num working in concert with a
> guided pool creation process; as well as recommending how many pgs new
> pools should have, we would recommend any decrease in existing pools,
> and provide information about the expected cost of the associated data
> movement.
> 
> John
> 

This auto-adjusting pg_num logic is a great idea. I would love to see it happen.

Nitin
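
For reference, here is a rough sketch (in Python) of the heuristic the online
PG calculator applies; the function name, the 32-PG floor, and the example
numbers are illustrative, and the target of ~100 PGs per OSD is the commonly
quoted default rather than anything enforced by Ceph:

def suggest_pg_num(num_osds, pool_size, data_percent, target_pgs_per_osd=100):
    # Total PGs the cluster can comfortably host, scaled by this pool's
    # expected share of the data and divided by its replica count.
    raw = (target_pgs_per_osd * num_osds * (data_percent / 100.0)) / pool_size
    # Round up to the next power of two; the small floor for tiny pools
    # is arbitrary here.
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return max(pg_num, 32)

# Example: 40 OSDs, 3x replication, pool expected to hold ~25% of the data
print(suggest_pg_num(40, 3, 25))   # -> 512

As John points out above, the hard part is not this arithmetic but knowing
data_percent up front for every pool the user will eventually create, which is
why a guided, multi-pool creation flow (plus the ability to shrink pg_num
later) matters.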

>> 
>> On Thu, Nov 17, 2016 at 10:02 PM, John Spray <jspray@xxxxxxxxxx> wrote:
>>> On Thu, Nov 17, 2016 at 6:14 PM, Sebastien Han <shan@xxxxxxxxxx> wrote:
>>>> Hey,
>>>> 
>>>> I was wondering: we have the PG calc tool online to decide the right
>>>> value for PGs per pool, and the logic seems simple and well
>>>> understood. It looks like a straightforward calculation; I don't see
>>>> the value changing depending on the use case or anything like that.
>>>> 
>>>> Any reason why this logic is not part of Ceph?
>>>> The mons know everything about the cluster, and since they manage pool
>>>> creation they should be able to make the right decision.
>>>> 
>>>> Am I missing something?
>>> 
>>> I think the reason nobody has done this so far is that there's no
>>> overall (multiple pools at a time) setup interface in Ceph.  To do the
>>> pg calc stuff, you need a user to tell you not just that they want a
>>> pool, but how many pools they want and what they will use them for:
>>> something friendlier than the current "osd pool create" command can
>>> handle.
>>> 
>>> Having mons do any kind of guessing on a "pool at a time" basis
>>> without visibility of the user's overall intent is pretty dangerous,
>>> because users can never fix a bad automatic choice (can't decrease
>>> pg_num).
>>> 
>>> John
>>> 
>>>> Thanks!
>>>> 
>>>> --
>>>> Cheers
>>>> 
>>>> ––––––
>>>> Sébastien Han
>>>> Principal Storage Architect
>>>> 
>>>> "Always give 100%. Unless you're giving blood."
>>>> 
>>>> Mail: seb@xxxxxxxxxx
>>>> Address: 11 bis, rue Roquépine - 75008 Paris
>> 
>> 
>> 
>> --
>> Cheers
>> 
>> ––––––
>> Sébastien Han
>> Principal Storage Architect
>> 
>> "Always give 100%. Unless you're giving blood."
>> 
>> Mail: seb@xxxxxxxxxx
>> Address: 11 bis, rue Roquépine - 75008 Paris
