Re: Increasing the number of PGs

2012/3/27 David McBride <dwm@xxxxxxxxxxxx>:
> Does the act of adding OSDs itself result in the PG count being
> increased?
>
> This seems to be the behaviour I've just observed on my testing cluster
> (running 0.44) using an incantation like:
>
>> for i in `seq 14 27`; do
>> ceph osd create $i
>> ceph osd crush add $i osd.$i 1.0 host=$hostname rack=$rack pool=default
>> ceph-osd -i $i --mkfs --mkkey
>> ceph -i /etc/ceph/keyring.osd.$i auth add osd.$i osd "allow *" mon "allow rwx"
>> done
>
> The number of PGs seems to have roughly doubled, and half of my
> OSDs now assert in OSD::get_or_create_pg() with "FAILED assert(role == 0)".
>
> This failure mode seems similar to what others have reported after
> PG splitting; as in those cases, `ceph -w` also reports errors of the form:
>
>  [ERR] mkpg 1.1p23 up [5,12] != acting [12]
>
> Have I done something wrong?  Is there some alternate sequence of steps
> that avoids / suppresses PG splitting?

As Sage said, the operations you mentioned should not have changed the
number of PGs.
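
If you want to double-check, the PG count for each pool is recorded in
the OSD map. A quick sketch (output formatting may differ a bit between
versions):

  # Show the pg_num / pgp_num recorded for each pool in the OSD map;
  # compare against the values from before the new OSDs were added.
  ceph osd dump | grep pg_num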

However, if you ever changed the number of PGs on a live system (on a
non-empty pool), your cluster may have been left in an inconsistent,
corrupt state, and that could cause all of the symptoms you are seeing.
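
For reference, the only operation that should change an existing pool's
PG count is an explicit resize, along these lines (the pool name and
count here are just examples):

  # Resizing a live, non-empty pool is what exercises the unfinished
  # splitting code and can leave the cluster in the state above.
  ceph osd pool set data pg_num 256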

As far as I know, work on the PG splitting/joining feature is
currently suspended in favor of the leveldb-based key-value store and
improving commit latency. Hopefully Sam can get back to PG splits in
the near future; I see regular demand for it.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

