Re: RBD Cache and rbd-nbd

On Thu, May 10, 2018 at 12:03 PM, Marc Schöchlin <ms@xxxxxxxxxx> wrote:
> Hello list,
>
> I map ~30 RBDs per XenServer host using rbd-nbd to run virtual machines
> on these devices.
>
> I have the following questions:
>
> Is it possible to use the rbd cache with rbd-nbd? I assume that it is, but
> the documentation does not make a clear statement about this.
> (http://docs.ceph.com/docs/luminous/rbd/rbd-config-ref/)

It's on by default: rbd-nbd is a librbd client, and the librbd cache is
enabled in the default configuration.
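
For reference, the relevant knobs live in the [client] section of ceph.conf.
A minimal sketch (the values shown are, as far as I recall, the Luminous
defaults, so you only need to set them if you want to change them):

    [client]
        rbd cache = true                           # librbd cache, enabled by default
        rbd cache writethrough until flush = true  # stay writethrough until the guest issues a flush
        rbd cache size = 33554432                  # 32 MB cache per image (default)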

> If I configure caches as described at
> http://docs.ceph.com/docs/luminous/rbd/rbd-config-ref/, are there dedicated
> caches per rbd-nbd/krbd device, or is there only a single cache area?

The librbd cache is per device, but if you aren't performing direct
IOs to the device, you would also have the unified Linux pagecache on
top of all the devices.
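
If you want to see the difference on a mapped device, here is a quick
read-only illustration (the device name is just an example, pick an idle one):

    # reads go through the Linux pagecache; a second run is served from RAM
    dd if=/dev/nbd0 of=/dev/null bs=4M count=256

    # O_DIRECT bypasses the pagecache, so only librbd's per-device cache applies
    dd if=/dev/nbd0 of=/dev/null bs=4M count=256 iflag=direct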

> How can I identify the rbd cache with the tools provided by the operating
> system?

Identify how? You can enable the admin socket and use "ceph
--admin-daemon <path-to-socket> config show" to display the in-use settings.
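
For example (the socket path below is only a placeholder; the actual .asok
file name created on your host will differ per process):

    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

    # on the client host, against the socket of the rbd-nbd process in question:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump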

> Can you provide some hints about adequate cache settings for a
> write-intensive environment (70% write, 30% read)?
> Is it a good idea to specify a huge rbd cache of 1 GB with a max dirty age
> of 10 seconds?

The librbd cache is really only useful for sequential read-ahead and
for small writes (assuming writeback is enabled). If you aren't using
direct IO, I suspect you'd get the best performance by disabling the
librbd cache and relying on the Linux pagecache to work its magic.
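
If you go that route, the client-side settings would look roughly like this
(illustrative values only, not a tested recommendation for your workload):

    [client]
        rbd cache = false    # let the Linux pagecache do the caching

    # ... or, if you keep the librbd cache and want the large writeback window you described:
    [client]
        rbd cache = true
        rbd cache size = 1073741824        # 1 GB
        rbd cache max dirty = 805306368    # must stay below the cache size
        rbd cache max dirty age = 10       # seconds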

>
> Regards
> Marc
>
> Our system:
>
> Luminous/12.2.5
> Ubuntu 16.04
> 5 OSD Nodes (24*8 TB HDD OSDs, 48*1 TB SSD OSDs, Bluestore, 6 GB cache
> size per OSD, 192 GB RAM, 56 HT CPUs)
> 3 Mons (64 GB RAM, 200GB SSD, 4 visible CPUs)
> 2 * 10 GBIT, SFP+, bonded xmit_hash_policy layer3+4
>
>



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



