Re: Local SSD cache for ceph on each compute node.

Thanks.

Reassuring, but I could do with something today :)

-----Original Message-----
From: Jason Dillaman [mailto:dillaman@xxxxxxxxxx] 
Sent: 16 March 2016 01:25
To: Daniel Niasoff <daniel@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Local SSD cache for ceph on each compute node.

The good news is that such a feature is in the early stages of design [1]. Hopefully it will land in the Kraken release timeframe.

[1] http://tracker.ceph.com/projects/ceph/wiki/Rbd_-_ordered_crash-consistent_write-back_caching_extension

-- 

Jason Dillaman 


----- Original Message -----
> From: "Daniel Niasoff" <daniel@xxxxxxxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Sent: Tuesday, March 15, 2016 8:47:04 PM
> Subject:  Local SSD cache for ceph on each compute node.
> 
> Hi,
> 
> Let me start. Ceph is amazing, no it really is!
> 
> But a hypervisor reading and writing all of its data over the network will 
> add some latency to reads and writes.
> 
> So the hypervisor could do with a local cache, possibly SSD or even NVMe.
> 
> I spent a while looking into this, but it seems strange that so few 
> people see the value of it.
> 
> Basically, the cache would be used in two ways:
> 
> a) cache hot data
> b) writeback cache for ceph writes
> 
> There is the RBD cache, but that isn't disk-based, and on a hypervisor 
> memory is at a premium.
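> 
> (For reference, the memory-backed cache I mean is the librbd client cache, 
> tuned in ceph.conf on the hypervisor, something like this minimal sketch; 
> the sizes are illustrative, not recommendations:
> 
>     [client]
>     rbd cache = true
>     rbd cache size = 268435456                   # 256 MB of hypervisor RAM per client
>     rbd cache max dirty = 201326592              # writeback allowed up to 192 MB dirty
>     rbd cache target dirty = 134217728           # start flushing at 128 MB dirty
>     rbd cache writethrough until flush = true    # stay writethrough until the guest flushes
> 
> Useful, but it all comes out of hypervisor RAM, which is exactly the 
> problem.)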
> 
> A simple solution would be to put a journal on each compute node and 
> get each hypervisor to use its own journal. Would this work?
> 
> Something like this:
> http://sebastien-han.fr/images/ceph-cache-pool-compute-design.png
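> 
> If I read that diagram right, it is basically a Ceph cache tier whose SSD 
> OSDs live on the compute nodes. As a rough sketch of the plumbing, assuming 
> a base pool called "rbd" and an SSD-backed pool called "ssd-cache" (names 
> and sizes are illustrative, and the CRUSH rule that actually pins 
> ssd-cache to the local SSDs is left out):
> 
>     ceph osd tier add rbd ssd-cache
>     ceph osd tier cache-mode ssd-cache writeback
>     ceph osd tier set-overlay rbd ssd-cache
>     ceph osd pool set ssd-cache hit_set_type bloom
>     ceph osd pool set ssd-cache target_max_bytes 1099511627776   # 1 TB cap, illustrative
> 
> Whether it makes sense to run those cache OSDs on every hypervisor is 
> exactly what I am asking.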
> 
> Can this be achieved?
> 
> A better explanation of what I am trying to achieve is here:
> 
> http://opennebula.org/cached-ssd-storage-infrastructure-for-vms/
> 
> This talk, if it gets voted in, looks interesting:
> https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/6827
> 
> Can anyone help?
> 
> Thanks
> 
> Daniel
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


