On 06/16/2016 03:54 AM, Mark Nelson wrote:
Hi,
A larger stripe size (to an extent) will generally improve large
sequential read and write performance.
Oops, I should have had my coffee. I missed a sentence here. A larger
stripe size will generally improve large sequential read and write
performance. A smaller stripe size can provide some of the advantages you
mention below, but there's overhead. Ok, fixed; now back to find
coffee. :)
There's overhead though. Smaller stripes
mean more objects, which can slow things down at the filestore level
when PG splits occur, and also potentially means more inodes falling out
of cache, longer syncfs times, etc. On the other hand, if using cache
tiering, smaller objects mean less data to promote, which can be a big
win for small IO.
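
To put the object-count side of that in perspective, here's a quick
back-of-the-envelope sketch (plain Python; the 1 TiB image size is just
a made-up example):

# Rough illustration: how the RADOS object count scales with RBD object size.
def object_count(image_bytes, object_bytes):
    """Approximate number of objects backing a fully written image."""
    return (image_bytes + object_bytes - 1) // object_bytes  # ceiling division

image = 1 * 1024**4  # hypothetical 1 TiB image

for size_mib in (1, 4, 8):
    object_bytes = size_mib * 1024**2
    order = object_bytes.bit_length() - 1  # object size is 2^order bytes
    print("%d MiB objects (order %d): ~%d objects"
          % (size_mib, order, object_count(image, object_bytes)))

Each of those objects ends up as one or more files/inodes on the
filestore OSDs, which is where the PG split, inode cache, and syncfs
costs come from.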
Basically the answer is that there are pluses and minuses, and the exact
behavior will depend on your kernel configuration, hardware, and use
case. I think 4MB has been a fairly good default thus far (might change
with bluestore), but tuning for a specific use case may mean a smaller
or larger size is better.
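
If you want to test that for your workload, the order can be set per
image at creation time. A minimal sketch with the python-rbd bindings
(the pool name, image name, and conf path below are just placeholders):

import rados
import rbd

# Assumes the usual admin conf/keyring are available on this host.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')  # placeholder pool name
    try:
        # order 23 -> 2^23-byte (8 MiB) objects; order 22 is the 4 MiB default.
        rbd.RBD().create(ioctx, 'test-image', 10 * 1024**3, order=23)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

The rbd CLI's create command takes an equivalent order/object-size
option, so you can create one image at each size and compare them with
your actual workload.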
Mark
On 06/16/2016 03:20 AM, Lazuardi Nasution wrote:
Hi,
I'm looking for some pros and cons related to the RBD stripe/chunk size
indicated by the image order number. The default is 4MB (order 22), but
OpenStack uses 8MB (order 23) as its default. If we use a smaller size
(lower order number), isn't there a better chance that image objects are
spread across OSDs and cached in the OSD nodes' RAM? If we use a bigger
size (higher order number), isn't there a better chance that image objects
are cached as contiguous blocks and may have a read-ahead advantage? Please
give your opinion and reasoning.
Best regards,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com