Re: ceph -w output


 



On Wed, Dec 14, 2011 at 00:36, Jens Rehpöhler <jens.rehpoehler@xxxxxxxx> wrote:
> Attached you will find the output you asked for. Is there any limitation
> on the number of pools? We create a pool for every customer and store
> their VM images in that pool, so we will create a lot of pools over time.

Each pool gets its own set of PGs (Placement Groups). An OSD that
manages too many PGs will use a lot of RAM. What is "too many" is
debatable, and really up to benchmarks, but considering we recommend
about 100 PGs/OSD as a starting point, you probably don't want to go
two orders of magnitude above that.
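The arithmetic behind that warning can be sketched as follows. The numbers here (pool count, PGs per pool, replication factor, OSD count) are purely illustrative, not recommendations; the formula is just the usual back-of-the-envelope estimate.

```python
# Rough estimate of placement-group copies each OSD ends up managing.
# Each pool contributes its own PGs, each PG is stored on `replication`
# OSDs, and the resulting copies spread across all OSDs in the cluster.
def pgs_per_osd(num_pools, pgs_per_pool, replication, num_osds):
    total_pg_copies = num_pools * pgs_per_pool * replication
    return total_pg_copies / num_osds

# Hypothetical example: one pool per customer adds up quickly.
# 500 customer pools x 128 PGs each x 2 replicas, spread over 10 OSDs:
print(pgs_per_osd(500, 128, 2, 10))  # 12800.0 -- far beyond ~100 PGs/OSD
```

With per-customer pools the pool count, not the data volume, becomes the thing to watch: even modest per-pool PG counts multiply out to a per-OSD load well above the suggested starting point.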
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

