Re: Increasing the number of PGs

On Mon, 2012-03-26 at 16:15 -0700, Tommi Virtanen wrote:

> >  Can anyone clarify if this is still the case as of v0.43 and/or v0.44,
> > or is it safe to increase pg_num on a pool with live data?
> 
> Sadly, that is still the current state of affairs.

Does the act of adding OSDs itself result in the PG count being
increased?  

This seems to be the behaviour I've just observed on my testing cluster
(running 0.44) using an incantation like:

> for i in `seq 14 27`; do
>   # Allocate the new OSD id in the cluster map
>   ceph osd create $i
>   # Add it to the CRUSH map with weight 1.0 under this host/rack
>   ceph osd crush add $i osd.$i 1.0 host=$hostname rack=$rack pool=default
>   # Initialise the OSD's data directory and generate its key
>   ceph-osd -i $i --mkfs --mkkey
>   # Register the new key with the monitors
>   ceph -i /etc/ceph/keyring.osd.$i auth add osd.$i osd "allow *" mon "allow rwx"
> done
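
For reference, I've been checking the PG counts before and after with
something like the following (a rough check; the grep assumes `ceph osd dump`
includes the per-pool pg_num/pgp_num fields on this version):

  ceph osd dump | grep pg_num    # per-pool pg_num / pgp_num
  ceph -s                        # pgmap line shows the overall PG count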

The number of PGs seems to have roughly doubled, and half of my
OSDs now assert in OSD::get_or_create_pg() with "FAILED assert(role == 0)".
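
(To see which daemons were affected I've just been grepping the OSD logs for
the assertion string, roughly as below; the log path is an assumption based on
the default log location:)

  # List OSD logs containing the assertion failure (path assumed)
  grep -l 'FAILED assert(role == 0)' /var/log/ceph/osd.*.log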

This looks like the same failure mode that others have reported after
PG splitting; as in those cases, `ceph -w` also reports errors of the form:

  [ERR] mkpg 1.1p23 up [5,12] != acting [12]

Have I done something wrong?  Is there some alternate sequence of steps
that avoids / suppresses PG splitting?
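
(The only alternative I can think of is to create pools with their eventual PG
count up front, along the lines of the sketch below, assuming `ceph osd pool
create` accepts a pg_num argument on this version; the pool name and count are
just placeholders:)

  # Hypothetical workaround: size the pool for the final OSD count at creation
  # time, so no subsequent PG split is needed.
  ceph osd pool create mypool 1024    # placeholder name and pg_num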

Cheers,
David
-- 
David McBride <dwm@xxxxxxxxxxxx>
Department of Computing, Imperial College, London
