On 2/8/2016 9:16 AM, Gregory Farnum wrote:
> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka <jurka@xxxxxxxxxx> wrote:
>> I've been testing the performance of ceph by storing objects through
>> RGW. This is on Debian with Hammer using 40 magnetic OSDs, 5 mons,
>> and 4 RGW instances. Initially the storage time was holding
>> reasonably steady, but it has started to rise recently as shown in
>> the attached chart.
>
> It's probably a combination of your bucket indices getting larger and
> your PGs getting split into subfolders on the OSDs. If you keep
> running tests and things get slower, it's the first; if they speed
> partway back up again, it's the latter.
Indeed, after running for another day, performance has leveled back out, as shown in the attached chart. So tuning something like filestore_split_multiple would have shifted when this performance spike occurred, but is there a way to eliminate it entirely? Some way of saying "start with N levels of directory structure, because I'm going to store a ton of objects"? If this test continues, it will just hit another, worse spike later when the directories need to split again.
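For anyone following along: as I understand the Hammer-era filestore, a PG subfolder splits once it holds more than roughly filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects (defaults 2 and 10). A quick sketch of that arithmetic; the formula is my reading of the filestore behavior, not an official reference, so verify against your version:

```shell
# Sketch: estimate when a filestore subfolder splits into 16 children,
# assuming the commonly cited formula:
#   split_multiple * abs(merge_threshold) * 16 objects per subfolder.
split_multiple=2      # filestore_split_multiple default (assumption)
merge_threshold=10    # filestore_merge_threshold default (assumption)
threshold=$(( split_multiple * merge_threshold * 16 ))
echo "subfolder splits at ~${threshold} objects"

# Raising filestore_split_multiple postpones the split (and the latency
# spike) but does not remove it:
delayed=$(( 32 * merge_threshold * 16 ))
echo "with split_multiple=32: ~${delayed} objects"
```

With the defaults that works out to about 320 objects per subfolder before a split; a larger multiple just pushes the spike further out, which matches the behavior seen in the test.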
> Other things to check:
> * You can look at your OSD stores and how the object files are
>   divvied up.
Yes, checking the directory structure and timestamps on the OSDs does show that things have been split recently.
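One way to check this: walk a PG's directory tree on an OSD and look at how deep the DIR_* nesting goes, since each split adds a level. The sketch below builds a mock tree so it is self-contained; on a real node the tree would live somewhere like /var/lib/ceph/osd/ceph-<id>/current/<pgid>_head (path is an assumption, check your deployment):

```shell
# Sketch: measure how many times a filestore PG has split by finding
# the deepest DIR_* subfolder under the PG's head directory.
PG_DIR=$(mktemp -d)                    # stand-in for .../current/1.0_head
mkdir -p "$PG_DIR/DIR_A/DIR_3/DIR_F"   # fake a tree that has split 3 times

# -printf '%d' (GNU find) prints each directory's depth relative to the
# starting point; the maximum is the number of split levels.
depth=$(find "$PG_DIR" -type d -printf '%d\n' | sort -n | tail -1)
echo "split depth: $depth"

rm -r "$PG_DIR"
```

Comparing `find ... -newer` output or subfolder mtimes against the spike in the chart would confirm the splits happened at that time.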
Kris Jurka
Attachment:
StorageTimeV2.png
Description: PNG image
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com