On 09/20/2013 05:51 AM, Matt Thompson wrote:
Hi Yehuda,
I did try bumping up pg_num on .rgw, .rgw.buckets,
and .rgw.buckets.index from 8 to 220 prior to writing to the
list, but when I saw no difference in performance I set them back
to 8 (by creating new pools, etc.).
Hi Matt,
You'll want to bump these back up. They may not be hurting now if
there is another bottleneck, but this could easily become the
bottleneck under load.
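For reference, you can raise these on the existing pools rather than
recreating them; a minimal sketch (128 is only an illustrative target,
size it to your OSD count, and do the same for .rgw):

    ceph osd pool set .rgw.buckets pg_num 128
    ceph osd pool set .rgw.buckets pgp_num 128
    ceph osd pool set .rgw.buckets.index pg_num 128
    ceph osd pool set .rgw.buckets.index pgp_num 128

Remember to bump pgp_num along with pg_num, otherwise the new PGs won't
actually be rebalanced across the OSDs.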
One thing we have since noticed is that radosgw is
validating tokens on each request; when we use ceph
authentication instead we see much more promising results from
swift-bench.
Interesting! I haven't looked into this at all. Can you describe
more about your test?
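Just to make sure I'm reading you right: by "ceph authentication" I'm
assuming you mean a native radosgw Swift subuser rather than
keystone-backed auth, i.e. roughly this (uid and names are placeholders):

    radosgw-admin user create --uid=benchuser --display-name="bench user"
    radosgw-admin subuser create --uid=benchuser --subuser=benchuser:swift --access=full
    radosgw-admin key create --subuser=benchuser:swift --key-type=swift --gen-secret

and then pointing swift-bench at the generated swift key instead of a
keystone user.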
Is there a known issue w/ keystone token caching in radosgw?
It's my understanding that 10,000 tokens should be cached by
default; however, this doesn't appear to be the case. I've
explicitly set rgw_keystone_token_cache_size in
/etc/ceph/ceph.conf on my radosgw node, yet radosgw continues
to hit keystone on each request.
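For context, the setting I'm referring to is the one documented for the
gateway's client section, e.g. (URL and token are placeholders, and
10000 is the documented default):

    [client.radosgw.gateway]
        rgw_keystone_url = http://keystone.example.com:35357
        rgw_keystone_admin_token = {admin token}
        rgw_keystone_token_cache_size = 10000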
Additionally, what
does /var/lib/ceph/radosgw/ceph-radosgw.gateway get used for?
I see the docs mention that it needs to be created, yet it
remains unpopulated on my nodes, and in a quick scan of the ceph
code I see no reference to it being used anywhere (though I
may be missing something).
Thanks again for the help!
Other things:
What version of swift-bench are you using? Newer versions let
you write into multiple containers, which may be worth trying to make
sure you aren't getting hung up on container indexes. Another thing
to mention: 0.67.3 has a bug that dramatically slows down
performance, which is fixed in wip-6286. With small objects
especially you may see a dramatic improvement.
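If it helps, I believe the newer swift-bench exposes the multi-container
behaviour through num_containers in the [bench] section; a rough sketch
(endpoint, credentials, and sizes are placeholders, and the option name
may vary by version, so check the sample conf shipped with yours):

    [bench]
    auth = http://radosgw.example.com/auth/v1.0
    user = benchuser:swift
    key = {swift secret key}
    concurrency = 64
    object_size = 4096
    num_objects = 10000
    num_gets = 10000
    num_containers = 20
    delete = yes

and then run it as 'swift-bench ./swift-bench.conf'.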
Otherwise, we are also investigating the effect of our OSD directory
splitting behaviour on RGW performance. With low PG counts and high
object counts, this may be causing performance issues as well.
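If you want to experiment with that yourself, the knobs are the
filestore split/merge thresholds; purely as an illustration (larger
values delay splitting, the defaults are much lower):

    [osd]
        filestore merge threshold = 40
        filestore split multiple = 8

A directory is split once it holds roughly
merge_threshold * 16 * split_multiple objects, so low PG counts
concentrate objects and hit the split point sooner.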
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com