Thank you guys, this answers my query.
Cheers
Vickey
On Thu, Aug 13, 2015 at 8:02 PM, Bill Sanders <billysanders@xxxxxxxxx> wrote:
I think you're looking for this.
http://ceph.com/docs/master/man/8/rbd/#cmdoption-rbd--order
It's used when you create the RBD images: 1MB is order=20, 512KB is order=19.

Thanks,
Bill Sanders

On Thu, Aug 13, 2015 at 1:31 AM, Vickey Singh <vickey.singh22693@xxxxxxxxx> wrote:

Thanks Nick for your suggestion.

Can you also tell me how I can reduce the RBD block size to 512K or 1M? Do I need to put something in the clients' ceph.conf (what parameter do I need to set)?

Thanks once again,
Vickey

On Wed, Aug 12, 2015 at 4:49 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Dominik Zalewski
> Sent: 12 August 2015 14:40
> To: ceph-users@xxxxxxxx
> Subject: Cache tier best practices
>
> Hi,
>
> I would like to hear from people who use cache tiering in Ceph about best
> practices and things I should avoid.
>
> I remember hearing that it wasn't that stable back then. Has it changed in
> the Hammer release?
It's not so much the stability as the performance. If your working set sits mostly in the cache tier and doesn't tend to change, you might be alright. Otherwise you will find that performance is very poor.
The only tip I can really give is that I have found dropping the RBD block size down to 512KB-1MB helps quite a bit: it makes the cache more effective and minimises the amount of data transferred on each promotion/flush.
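Something like this at image-creation time (pool and image names below are placeholders; --order is the option from the rbd man page):

    # Create a 10GB image (size in MB) with 512KB objects: 2^19 bytes.
    # The default is order=22, i.e. 4MB objects.
    rbd create rbd/myimage --size 10240 --order 19

Note the order is fixed when the image is created, so existing images keep their object size.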
>
> Any tips and tricks are much appreciated!
>
> Thanks
>
> Dominik
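If you don't want to pass --order on every create, there is also a client-side default that newly created images pick up; a sketch for the clients' ceph.conf (worth verifying the option name against your Ceph version):

    [client]
    # Applies only to images created after this is set; 2^19 bytes = 512KB.
    rbd default order = 19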
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com