Turning on rbd cache safely

Hi!

After examining our running OSD configuration through the admin socket,
we noticed that the "rbd_cache" parameter is set to "false". Until that
moment I had supposed that the rbd cache is an entirely client-side
feature, enabled with the "cache=writeback" attribute in the libvirt VM
XML definition. Now I think I was wrong.
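 
For reference, this is roughly the libvirt disk definition I mean (the
pool, image and monitor names are just placeholders):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback'/>
    <source protocol='rbd' name='rbd-pool/vm-disk-1'>
      <host name='mon1' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>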

I plan to do the following steps:

1. Add a group of rbd cache parameters to the master ceph.conf (the size
values are specified in bytes):
 [client]
   rbd cache = true
   rbd cache writethrough until flush = true
   rbd cache size = <256-1024 MB>
   rbd cache max dirty = <0.5-0.75 * rbd_cache_size>

2. Push the config to all nodes:
   ceph-deploy --overwrite-conf config push <nodeNN>

3. Set "noout" to cluster and restart all OSDs one-by-one, than change 
back "noout".

4. Check the config of the running OSDs through the admin socket (see the
sketch after this list).
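 
A minimal sketch of steps 3-4 as shell commands, assuming a sysvinit
setup on Debian Wheezy; the OSD id and socket path are only examples:

  # keep OSDs from being marked out while they are restarted
  ceph osd set noout

  # restart each OSD in turn (adjust for your init system)
  service ceph restart osd.12

  # verify the running config of that OSD through its admin socket
  ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok config show | grep rbd_cache

  # when every OSD has been restarted, clear the flag
  ceph osd unset noout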

So, is that the right and safe way to turn on rbd cache on a running
production cluster?

Do I need to specify the same rbd cache parameters on the client hosts
running VMs with the librbd backend?
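 
My guess for the hypervisor side would be something like the snippet
below, with an admin socket so the librbd settings can be verified there
too (the socket path template is just the commonly used one, not
something specific to our setup):

  [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

  # then, on the hypervisor, check a running guest's librbd config:
  ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok config show | grep rbd_cache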

PS: We use Firefly 0.80.7 on Debian Wheezy x86_64, 5 nodes, 58 OSDs,
journals on SSD.

Thanks for your answers!



Megov Igor
CIO Yuterra



