On Thu, Sep 17, 2015 at 6:01 PM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> On Thu, Sep 17, 2015 at 7:55 AM, Corin Langosch
> <corin.langosch@xxxxxxxxxxx> wrote:
>> Hi Greg,
>>
>> On 17.09.2015 at 16:42, Gregory Farnum wrote:
>>> Briefly, if you do a lot of small direct IOs (for instance, a database
>>> journal), then striping lets you send each sequential write to a
>>> separate object. This means they don't pile up behind each other
>>> grabbing write locks and can complete in parallel. Striping them
>>> instead of just having small block-sized objects means the objects are
>>> still of a reasonable size for RADOS.
>>>
>>
>> Sounds good - why not enable it always / by default? Is the only
>> drawback that there's no support in kernel rbd? What's the recommended
>> stripe size for "normal" qemu workloads? 64k?
>
> If you're doing large streaming writes, then having to split them up
> across multiple objects is slower. It's just a knob you can twirl
> depending on the workload of the machine using this disk.
>
>>
>>> I *think* that's just because the features are only filled in if
>>> they're in use (the kernel doesn't/didn't support striping, despite
>>> supporting other v2 image features) and required to understand the
>>> image, but maybe I'm misunderstanding you or forgetting how the RBD
>>> team set things up.
>>
>> That doesn't seem to be the case. When I use librbd directly (for
>> example via ceph-ruby), the feature is immediately visible, just like
>> all other features.
>
> Dunno then, Josh or Jason maybe?

That's just an artifact of how the rbd CLI tool works: it clears the
striping feature bit unless you specify a non-default striping pattern
with --stripe-unit or --stripe-count. (The default striping pattern for
v2 images is the same as for v1 images: stripe_unit equal to the object
size and stripe_count equal to 1.) Setting STRIPINGV2 while leaving
stripe_unit and stripe_count at their defaults doesn't change anything,
so the bit is cleared to keep older clients in play.

Thanks,

                Ilya
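
For reference, a minimal sketch of the striping knobs discussed above,
using the python-rbd bindings rather than the CLI (the pool name "rbd",
the image name, and the 64 KiB x 16 pattern are illustrative
assumptions, not recommendations):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')   # assumed pool name

    # 10 GiB image with 4 MiB objects (order 22); each 64 KiB stripe
    # unit rotates across 16 objects -- a non-default pattern, so the
    # STRIPINGV2 feature bit stays set instead of being cleared.
    rbd.RBD().create(ioctx, 'striped-img', 10 * 1024**3, order=22,
                     old_format=False,
                     features=(rbd.RBD_FEATURE_LAYERING |
                               rbd.RBD_FEATURE_STRIPINGV2),
                     stripe_unit=64 * 1024, stripe_count=16)

    image = rbd.Image(ioctx, 'striped-img')
    try:
        # the bit survives because the pattern is non-default
        assert image.features() & rbd.RBD_FEATURE_STRIPINGV2
        print(image.stripe_unit(), image.stripe_count())   # 65536 16
    finally:
        image.close()

    ioctx.close()
    cluster.shutdown()

The rbd CLI equivalent would be along the lines of "rbd create
--image-format 2 --size 10240 --stripe-unit 65536 --stripe-count 16
rbd/striped-img"; leave off --stripe-unit and --stripe-count and you
get the default pattern with the striping bit cleared, as described
above.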