Re: ceph -w output

Sorry for the delay.  It looks like you hit a corner case in our crush
implementation.  The short version is that this bug got fixed last
week in commit 14f8f00e579083db542568a60cd23d50055c92a3.

The long version is that you have osd.3 and osd.4, but not osd.0,
osd.1, or osd.2.  The pgs stuck in creating are the ones mapped
specifically to osds 0, 1, and 2.  A pg whose id ends in p# (like
pg1.0p0) is supposed to map to osd.# if possible, osd.0 in that
example.  With the above patch, those pgs should remap to available
osds.
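
In the meantime, here is a rough Python sketch for spotting which pgs
are stuck in creating and which osds they currently map to.  It is
only a sketch: it assumes "ceph pg dump --format=json" is available
and that the output carries pg_stats entries with pgid, state and
acting fields, which can vary between versions.

import json
import subprocess

# Sketch only: the field names below ("pg_stats", "pgid", "state",
# "acting") are assumptions about the pg dump JSON and may differ
# between ceph releases.
raw = subprocess.check_output(["ceph", "pg", "dump", "--format=json"])
dump = json.loads(raw.decode())

# pg_stats may sit at the top level or under pg_map depending on version.
stats = dump.get("pg_stats") or dump.get("pg_map", {}).get("pg_stats", [])

for pg in stats:
    if "creating" in pg.get("state", ""):
        print("%s is stuck creating, acting osds: %s"
              % (pg.get("pgid"), pg.get("acting")))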

-Sam

On Thu, Dec 15, 2011 at 1:45 AM, Jens Rehpöhler
<jens.rehpoehler@xxxxxxxx> wrote:
> On 14.12.2011 at 17:43, Tommi Virtanen wrote:
>> On Wed, Dec 14, 2011 at 00:36, Jens Rehpöhler <jens.rehpoehler@xxxxxxxx> wrote:
>>> Attached you will find the output you asked for. Is there any limitation
>>> on the number of pools? We create pools for every customer and store
>>> their VM images in those pools, so we will create a lot of pools over time.
>> Each pool gets its own set of PGs (Placement Groups). An OSD that
>> manages too many PGs will use a lot of RAM. What is "too many" is
>> debatable, and really up to benchmarks, but considering we recommend
>> about 100 PGs/OSD as a starting point, you probably don't want to go
>> two orders of magnitude above that.
> Ok .... that will serve our needs. Only the "creating" question remains.
>
> Any answers to that?
>
> Thanks a lot !
>
> Jens
>
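
On the pool question quoted above, a quick back-of-the-envelope in
Python.  The osd count, per-pool pg_num and replica size below are
made up; only the ~100 PGs/OSD starting point comes from the thread,
and I am assuming each replica of a pg counts toward that per-osd
figure.

osds = 4              # hypothetical number of osds in the cluster
pgs_per_osd = 100     # the ~100 PGs/OSD starting point mentioned above
pg_num_per_pool = 8   # hypothetical pg_num chosen for each customer pool
replicas = 2          # hypothetical replication size; each copy lands on an osd

pg_budget = osds * pgs_per_osd
max_pools = pg_budget // (pg_num_per_pool * replicas)
print("pg budget %d -> roughly %d pools at pg_num=%d, size=%d"
      % (pg_budget, max_pools, pg_num_per_pool, replicas))

With these made-up numbers the budget works out to roughly 25 pools
before the per-osd PG count climbs past that starting point.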

