Re: Increasing time to save RGW objects

Hi,

On 09/02/2016 17:07, Kris Jurka wrote:
>
>
> On 2/8/2016 9:16 AM, Gregory Farnum wrote:
>> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka <jurka@xxxxxxxxxx> wrote:
>>>
>>> I've been testing the performance of ceph by storing objects through
>>> RGW.
>>> This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW
>>> instances.  Initially the storage time was holding reasonably
>>> steady, but it
>>> has started to rise recently as shown in the attached chart.
>>>
>>
>> It's probably a combination of your bucket indices getting larger and
>> your PGs getting split into subfolders on the OSDs. If you keep
>> running tests and things get slower, it's the former; if they speed
>> partway back up again, it's the latter.
>
> Indeed, after running for another day, performance has leveled back
> out, as attached.  So tuning something like filestore_split_multiple
> would have moved around the time of this performance spike, but is
> there a way to eliminate it?  Some way of saying, start with N levels
> of directory structure because I'm going to have a ton of objects?  If
> this test continues, it's just going to hit another, worse spike later
> when it needs to split again.

Actually, if I understand correctly how PG splitting works, the next
spike should be <n> times smaller and spread over <n> times the period
(where <n> is the number of subdirectories created during each split,
which seems to be 15 according to the OSDs' directory layout).

That said, one problem is that by the time you reach the next split you
might have reached <n> times your current object creation rate, in which
case you would see the very same spike again.
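
If you mainly want to push the spike further out and make it less
frequent, one thing to try (just a sketch, I have not benchmarked it on
Hammer myself) is to raise the split threshold in ceph.conf on the OSDs
before loading data, for example:

  [osd]
  # split later: threshold becomes 8 * 40 * 16 = 5120 objects per directory
  filestore split multiple = 8
  # a higher merge threshold keeps directories from being merged back too eagerly
  filestore merge threshold = 40

This trades fewer splits for a later, larger one, so it moves the problem
rather than removing it. I believe more recent releases can also pre-split
directories at pool creation time via an expected object count
(expected_num_objects, used together with a negative filestore merge
threshold), which is closer to the "start with N levels" behaviour Kris is
asking for, but I am not sure that option is available on Hammer.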

Best regards,

Lionel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



