Hi Wido and Sage,

I am also interested in adding persistent caching to RBD. Are there any plans scheduled for this topic? I'd like to get involved and help.

Best regards,
Chendi

-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Sage Weil
Sent: Wednesday, July 03, 2013 4:29 AM
To: Wido den Hollander
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: librbd read caching

Hi Wido!

On Tue, 2 Jul 2013, Wido den Hollander wrote:
> Something in the back of my mind keeps saying that there were plans to
> implement read caching in librbd, but I haven't been able to find any
> reference to that.
>
> In the tracker, however, I wasn't able to find anything, so it has to be me.
>
> Any plans or ideas on this? A simple idea was that librbd could use an
> mmap where it caches RADOS objects and purges them on a write.
>
> This cache could then live on an SSD on a hypervisor, where it could
> speed up read operations for virtual machines in OpenStack or CloudStack
> environments. Something like FS-Cache for librbd.

This has come up in a few different conversations over the last few weeks. This would be a good thing to discuss in a blueprint for Emperor.

In my mind there are two different use-case classes that we want to cover:

- persistent caching (e.g., a local SSD)
- a shared per-host cache (e.g., many qemu instances with a shared cache of the clone base images)

My hope is that we can build a simple caching interface that plugs into ObjectCacher (the per-process in-memory cache) with multiple implementations (shared memory, files on disk) to kill both birds with one stone. The trick will be cache invalidation. Even if we somewhat punt on that, though, a simple approach would be usable for the read-only gold-image cache (say, in-memory) case.

sage
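
For illustration only, here is a rough C++ sketch of the kind of pluggable backend interface Sage describes above: one set of calls with an in-memory implementation and a file-per-object (local SSD) implementation behind it. The names (CacheBackend, read/write/invalidate) are assumptions made up for this sketch; this is not the actual ObjectCacher or librbd API.

// Hypothetical sketch of a pluggable read-cache backend; not real Ceph code.
#include <cstdio>
#include <fstream>
#include <iterator>
#include <map>
#include <string>
#include <vector>

// Minimal interface a cache layer could call into.
struct CacheBackend {
  virtual ~CacheBackend() = default;
  // Return true and fill 'data' on a hit for the given object name.
  virtual bool read(const std::string& oid, std::vector<char>& data) = 0;
  // Store object data after it has been fetched from RADOS.
  virtual void write(const std::string& oid, const std::vector<char>& data) = 0;
  // Drop a cached object, e.g. when the image is written to.
  virtual void invalidate(const std::string& oid) = 0;
};

// Per-process in-memory implementation (akin to the existing RAM cache).
class MemoryBackend : public CacheBackend {
  std::map<std::string, std::vector<char>> objects_;
public:
  bool read(const std::string& oid, std::vector<char>& data) override {
    auto it = objects_.find(oid);
    if (it == objects_.end())
      return false;
    data = it->second;
    return true;
  }
  void write(const std::string& oid, const std::vector<char>& data) override {
    objects_[oid] = data;
  }
  void invalidate(const std::string& oid) override {
    objects_.erase(oid);
  }
};

// File-per-object implementation, e.g. a directory on a local SSD.
// A real implementation would sanitize or hash the object name before
// using it as a file name.
class FileBackend : public CacheBackend {
  std::string dir_;
  std::string path(const std::string& oid) const { return dir_ + "/" + oid; }
public:
  explicit FileBackend(std::string dir) : dir_(std::move(dir)) {}
  bool read(const std::string& oid, std::vector<char>& data) override {
    std::ifstream in(path(oid), std::ios::binary);
    if (!in)
      return false;
    data.assign(std::istreambuf_iterator<char>(in),
                std::istreambuf_iterator<char>());
    return true;
  }
  void write(const std::string& oid, const std::vector<char>& data) override {
    std::ofstream out(path(oid), std::ios::binary | std::ios::trunc);
    out.write(data.data(), static_cast<std::streamsize>(data.size()));
  }
  void invalidate(const std::string& oid) override {
    std::remove(path(oid).c_str());
  }
};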
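
Building on the hypothetical CacheBackend sketch above, here is how a read-through path with Wido's "purge on write" behavior might look. The fetch_from_rados() helper and the object name are placeholders, not real librados calls.

// Hypothetical usage of the CacheBackend sketch above; not real librbd code.
#include <iostream>

// Placeholder standing in for an actual RADOS object read.
std::vector<char> fetch_from_rados(const std::string& oid) {
  (void)oid;
  return {'d', 'a', 't', 'a'};
}

// Read-through: serve from the local cache when possible, otherwise fetch
// from the cluster and populate the cache.
std::vector<char> cached_read(CacheBackend& cache, const std::string& oid) {
  std::vector<char> data;
  if (cache.read(oid, data))
    return data;                 // local hit: no round trip to the cluster
  data = fetch_from_rados(oid);  // miss: fetch, then cache for next time
  cache.write(oid, data);
  return data;
}

// Purge-on-write: invalidate the cached copy whenever the object is written,
// so stale data is never served from the local cache.
void cached_write(CacheBackend& cache, const std::string& oid,
                  const std::vector<char>& data) {
  cache.invalidate(oid);
  (void)data;                    // the actual write to RADOS would go here
}

int main() {
  MemoryBackend cache;           // or FileBackend("/path/on/local/ssd")
  auto obj = cached_read(cache, "example-object");  // placeholder object name
  std::cout << "read " << obj.size() << " bytes\n";
  return 0;
}

As Sage notes, the hard part is invalidation once the cache is shared between processes or survives restarts; the simple purge-on-write above only covers the single-writer, single-process case.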