RE: Increase number of pg in running system

On Wed, 6 Feb 2013, Chen, Xiaoxi wrote:
> But can we change the pg_num of a pool when the pool already contains 
> data? If so, how do we do it?

This functionality is merged, but still a bit experimental.  The 
incantation is

 ceph osd pool set <poolname> pg_num <numpgs> --allow-experimental-feature
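
For example, to split the default rbd pool to 256 PGs (the pool choice 
and target count here are illustrative):

 ceph osd pool set rbd pg_num 256 --allow-experimental-feature

Splitting alone does not move data; you will likely also want to raise 
pgp_num to match afterwards so the new PGs actually rebalance across 
the OSDs:

 ceph osd pool set rbd pgp_num 256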

Please test, but be careful on clusters with real data.

sage


> 
> -----Original Message-----
> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Sage Weil
> Sent: February 6, 2013 9:50
> To: Mandell Degerness
> Cc: ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re: Increase number of pg in running system
> 
> On Tue, 5 Feb 2013, Mandell Degerness wrote:
> > I would like very much to specify pg_num and pgp_num for the default 
> > pools, but they are defaulting to 64 (no OSDs are defined in the 
> > config file).  I have tried using the options indicated by Artem, but 
> > they didn't seem to have any effect on the data and rbd pools which 
> > are created by default.  Is there something I am missing?
> 
> Ah, I see.  Specifying this is awkward.  In [mon] or [global],
> 
>  osd pg bits = N
>  osd pgp bits = N
> 
> where N is the number of bits to shift 1 to the left.  So for 1024 PGs, 
> you'd do 10.  (What it's actually doing is MAX(num_osds, 1) << N.  The 
> default N is 6, and with no OSDs defined in the config that works out to 
> 1 << 6, so you're probably seeing 64 PGs per pool by default.)
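> 
> For example, to end up with 1024 PGs per default pool (a sketch; the 
> value is illustrative):
> 
>  [global]
>    osd pg bits = 10
>    osd pgp bits = 10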
> 
> sage
> 
> 
> > 
> > On Tue, Feb 5, 2013 at 6:40 AM, ArtemGr <artemciy@xxxxxxxxx> wrote:
> > > Martin B Nielsen <martin <at> unity3d.com> writes:
> > >> Hi,
> > >>
> > >> Looking at:
> > >> http://ceph.com/docs/master/rados/operations/pools/
> > >>
> > >> It has this description roughly in the middle:
> > >>
> > >> ---------------
> > >> Important
> > >> Increasing the number of placement groups in a pool after you 
> > >> create the pool is still an experimental feature in Bobtail (v 
> > >> 0.56). We recommend defining a reasonable number of placement 
> > >> groups and maintaining that number until Ceph's placement group 
> > >> splitting and merging functionality matures.
> > >> ---------------
> > >>
> > >> However, I cannot find any references how to do this?
> > >>
> > >> I'm asking since we have a test system with 10TB of data and only 
> > >> the default 8 PGs created.
> > >
> > > Here's how I do it in ceph.conf:
> > >
> > > [osd]
> > >   ; Increase groups number in order to decrease scrub size
> > >   osd pool default pg num = 64
> > >   osd pool default pgp num = 64
> > >
> > >