Re: Increase number of PG

On Fri, Jul 20, 2012 at 8:31 AM, Sławomir Skowron <szibis@xxxxxxxxx> wrote:
> I know that this feature is disabled, are you planning to enable this
> in near future ??

PG splitting/joining is the next major project for the OSD. It won't
be backported to argonaut, but it will be in the next stable release,
and will probably appear in our regular development release in 2-3
months.

> I have many drives, and my S3 installation uses only a few of them at
> a time, and I need to improve that.
>
> When I use it as RBD, it uses all of them.

Radosgw normally stores most of the data for a single S3-level object
in a single RADOS object, whereas RBD stripes disk images across
objects by default in 4MB chunks. If you have only a few S3 objects,
you will see an uneven distribution. It will get more balanced as you
upload more images. Also, if you use multi-part uploads, each part
goes into a separate RADOS object, so that'll spread the load more
evenly.
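
The effect of that striping difference on placement can be sketched with a
toy simulation. This is not Ceph's real mapping (which hashes object names
with rjenkins and then runs CRUSH); it just uses md5 modulo pg_num to show
why one big unstriped object lands on a single PG while the same data
striped into 4MB chunks spreads out. The object names are made up for
illustration:

```python
import hashlib

def pg_for(obj_name, pg_num):
    # Toy stand-in for Ceph's placement hash: the real mapping uses
    # rjenkins hashing plus CRUSH, not md5 modulo pg_num.
    return int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % pg_num

PG_NUM = 64

# One 40MB S3 object stored by radosgw as a single RADOS object
# occupies exactly one PG (object name is hypothetical):
s3_pgs = {pg_for("bucket_myimage.iso", PG_NUM)}

# The same 40MB as an RBD image is striped into ten 4MB chunks,
# each a separate RADOS object, so the chunks hash to many PGs
# (names loosely modeled on rbd's block-name prefix scheme):
rbd_pgs = {pg_for("rb.0.123.%012x" % i, PG_NUM) for i in range(10)}

print("S3 object touches %d PG(s), RBD image touches %d PG(s)"
      % (len(s3_pgs), len(rbd_pgs)))
```

The same logic explains why multi-part uploads help: each part becomes its
own RADOS object, so a large upload behaves more like the striped case.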

Now, if your problem comes from the rgw pools having too few PGs to
begin with, the distribution will be... lumpy... even with more objects.
Here's another mailing list thread that talks about what you can do
about that: http://article.gmane.org/gmane.comp.file-systems.ceph.devel/8069
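
Since PG splitting isn't available yet, the workaround discussed there
amounts to creating a replacement pool with a higher PG count and moving
the data over. A rough sketch (pool names and the PG count are
illustrative; ".rgw.buckets" is the default rgw data pool, yours may
differ, and radosgw should be stopped before copying):

```shell
# See how many PGs the rgw data pool currently has:
ceph osd pool get .rgw.buckets pg_num

# Create a replacement pool with a more generous PG count:
ceph osd pool create .rgw.buckets.new 1024

# Copy the objects across, then repoint rgw at the new pool:
rados cppool .rgw.buckets .rgw.buckets.new
```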
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html