Re: Increase number of PG

On 21 Jul 2012, at 20:08, Yehuda Sadeh <yehuda@xxxxxxxxxxx> wrote:

> On Sat, Jul 21, 2012 at 10:13 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
>> On Fri, Jul 20, 2012 at 1:15 PM, Tommi Virtanen <tv@xxxxxxxxxxx> wrote:
>>> On Fri, Jul 20, 2012 at 8:31 AM, Sławomir Skowron <szibis@xxxxxxxxx> wrote:
>>>> I know that this feature is disabled; are you planning to enable it
>>>> in the near future?
>>>>
>>>
>>>
>>> PG splitting/joining is the next major project for the OSD. It won't
>>> be backported to argonaut, but it will be in the next stable release,
>>> and will probably appear in our regular development release in 2-3
>>> months.

Ok, so I am waiting for this feature. In the meantime, can I manually
create a new pool with more PGs, move my objects into it, and use it as
the bucket pool in radosgw? How can I tell radosgw to use that pool, or
is that not possible?
At the moment my pool .rgw.buckets has the default 8 PGs, which is far
too few.
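
Something along these lines is what I have in mind, as a rough,
untested sketch (pool name and PG count are just examples, and I am not
sure the radosgw-admin step is the right way to register the pool):

    # create a new pool with a more sensible number of PGs
    ceph osd pool create .rgw.buckets.big 1024
    # copy the existing objects across (rados cppool does a flat copy)
    rados cppool .rgw.buckets .rgw.buckets.big
    # possibly needed so radosgw picks up the new placement pool?
    radosgw-admin pool add --pool=.rgw.buckets.big

Is that roughly the supported path, or is there a better way?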

>>>
>>>> I have many drives, but my S3 installation uses only a few of them
>>>> at a time, and I need to improve that.
>>>>
>>>> When I use it as RBD, it uses all of them.
>>>
>>> Radosgw normally stores most of the data for a single S3-level object
>>> in a single RADOS object, whereas RBD stripes disk images across
>>> objects in 4MB chunks by default. If you have only a few S3 objects,
>>> you will see an uneven distribution. It will get more balanced as you
>>> upload more objects. Also, if you use multi-part uploads, each part
>>> goes into a separate RADOS object, so that'll spread the load more
>>> evenly.
>>>
>>
>> RGW only does this for small objects — I believe its default chunk size is also 4MB.
>
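
(Side note: the 4MB default that Tommi mentions for RBD is easy to see
on a throwaway image; rough sketch, the image name is just an example:

    rbd create testimg --size 1024   # 1 GB test image
    rbd info testimg                 # "order 22" means 4MB objects
    rbd rm testimg
)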

Yes, I have a lot of small objects (500k) in the .rgw.buckets pool,
ranging from bytes to 2-3MB. They never even hit multipart.
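
A rough way to check the average object size, in case it helps anyone
else (the exact output columns may differ between versions):

    rados df | grep rgw.buckets      # total KB vs. object count per pool
    rados -p .rgw.buckets ls | head  # sample a few object names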

> Actually, no. While the infrastructure is there, a regular object
> upload currently does not create more than 2 rados objects: the head
> object, which is capped at 512k, and the tail, which contains the
> rest. As Tommi said, multipart upload chunks depend on the actual
> upload.
> There's no real reason anymore not to stripe, and it's easy enough to
> implement, so it might be something we do soon.
>

That could be useful. But in my case the objects are too small, and if
I understand correctly, my only option is to have more PGs so that new
objects are balanced across more drives.
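
For reference, this is how I check the current PG count (the pool
really is at the default 8); the exact dump format may vary by version:

    ceph osd dump | grep rgw.buckets   # the pool line shows pg_num / pgp_num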

My workload looks like this:

- At most 20% are PUTs, with 99% of objects smaller than 4MB,
- around 80% are GETs and S3 metadata operations.

When the workload hits the worst case (a PUT followed by only a single
GET), every GET misses the cache in NGINX and has to be served from
only a few drives, and that hurts ;)

> Yehuda
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[Index of Archives]     [CEPH Users]     [Ceph Large]     [Information on CEPH]     [Linux BTRFS]     [Linux USB Devel]     [Video for Linux]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]
  Powered by Linux