Re: RBD Cache and rbd-nbd

Hello Jason,

thanks for your response.


On 10.05.2018 at 21:18, Jason Dillaman wrote:

>> If I configure caches as described at
>> http://docs.ceph.com/docs/luminous/rbd/rbd-config-ref/, are there dedicated
>> caches per rbd-nbd/krbd device, or is there only a single cache area?
> The librbd cache is per device, but if you aren't performing direct
> IOs to the device, you would also have the unified Linux pagecache on
> top of all the devices.
XenServer uses the nbd devices directly; as I understand it, they are connected to the virtual machines via blkback (dom-0) and blkfront (dom-U).
In my understanding the pagecache only comes into play when data is accessed through mounted filesystems (VFS usage).
Therefore it would be a good thing to use the rbd cache for rbd-nbd (/dev/nbdX); see the mapping example below.
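
For context, the devices in question are mapped roughly like this (the image name is only an example, the --id matches our xen_test client):

    rbd-nbd map --id xen_test rbd/vm-disk-1
    (prints the device it attaches to, e.g. /dev/nbd0)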
>> How can I identify the rbd cache with the tools provided by the operating
>> system?
> Identify how? You can enable the admin sockets and use "ceph
> --admin-daemon config show" to display the in-use settings.

Ah ok, I discovered that I can gather the configuration settings by executing the following
(xen_test is the identity of the Xen rbd-nbd user):

ceph --id xen_test --admin-daemon /var/run/ceph/ceph-client.xen_test.asok config show | less -p rbd_cache

Sorry, my question was a bit imprecise: I was looking for usage statistics of the rbd cache.
Is there also a way to gather rbd_cache usage statistics as a basis for verifying and optimizing the cache settings?
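Maybe the same admin socket already exposes this; something like the following might list the performance counters, including cache-related ones (I have not verified which counters the cache actually provides):

    ceph --id xen_test --admin-daemon /var/run/ceph/ceph-client.xen_test.asok perf schema
    ceph --id xen_test --admin-daemon /var/run/ceph/ceph-client.xen_test.asok perf dump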

Since an rbd cache is created for every device, I assume the cache is simply part of the rbd-nbd process memory.


>> Can you provide some hints about adequate cache settings for a write
>> intensive environment (70% write, 30% read)?
>> Is it a good idea to specify a huge rbd cache of 1 GB with a max dirty age
>> of 10 seconds?
> The librbd cache is really only useful for sequential read-ahead and
> for small writes (assuming writeback is enabled). Assuming you aren't
> using direct IO, I'd suspect your best performance would be to disable
> the librbd cache and rely on the Linux pagecache to work its magic.
As described above, XenServer uses the nbd devices directly.

Over 70 percent of our typical workload originates from database write operations in the virtual machines.
Therefore collecting write operations in the rbd cache and writing them to Ceph in larger chunks might be beneficial.
A higher limit for "rbd cache max dirty" might be adequate here.
On the other hand, our read workload typically reads huge files sequentially.

Therefore it might be useful to start with a configuration like this:

rbd cache size = 64MB
rbd cache max dirty = 48MB
rbd cache target dirty = 32MB
rbd cache max dirty age = 10
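
In ceph.conf terms this would look roughly as follows (I am assuming a client section for the xen_test user; if the Ceph version in use does not accept unit suffixes, the sizes have to be given in bytes, as below):

    [client.xen_test]
        rbd cache = true
        rbd cache size = 67108864           # 64 MB
        rbd cache max dirty = 50331648      # 48 MB
        rbd cache target dirty = 33554432   # 32 MB
        rbd cache max dirty age = 10        # seconds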

What is librbd's strategy for writing data from the rbd cache to the storage once "rbd cache max dirty = 48MB" is reached?
Is there a reduction in IO operations (merging of IOs) compared to the granularity of the writes issued by my virtual machines?

Additionally, I would not change the readahead settings at the nbd level, so that readahead can be configured at the operating system level of the VMs.

The operating systems in our virtual machines currently use a readahead of 256 sectors (256 * 512 = 128 KB).
From my point of view it would be good for sequential reads of big files to increase the readahead to a higher value.
We haven't changed the default rbd object size of 4 MB; nevertheless it might be a good idea to increase
the readahead to 1024 sectors (= 512 KB) to reduce the number of read requests for sequential reads by a factor of 4.
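
Inside the guests this would amount to something like the following (the device name is only an example for a Xen guest disk):

    blockdev --getra /dev/xvda        # current readahead in 512-byte sectors, e.g. 256
    blockdev --setra 1024 /dev/xvda   # increase to 1024 sectors = 512 KB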

What do you think about this?

Regards
Marc

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
