Re: Local SSD cache for ceph on each compute node.

I am using OpenStack, so I need this to be fully automated and applied to all my VMs.

If I could do what you mention at the hypervisor level, that would be much easier.

The options you mention seem aimed at very specific use cases and need to be configured on a per-VM basis, whereas I am looking for a general "ceph on steroids" approach that covers all my VMs without ongoing maintenance.

Thanks again :)

-----Original Message-----
From: Jason Dillaman [mailto:dillaman@xxxxxxxxxx] 
Sent: 16 March 2016 01:42
To: Daniel Niasoff <daniel@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Local SSD cache for ceph on each compute node.

Indeed, well understood.

As a shorter term workaround, if you have control over the VMs, you could always just slice out an LVM volume from local SSD/NVMe and pass it through to the guest.  Within the guest, use dm-cache (or similar) to add a cache front-end to your RBD volume.  Others have also reported improvements by using the QEMU x-data-plane option and RAIDing several RBD images together within the VM.
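A sketch of the in-guest side of that workaround, using LVM's dm-cache integration (lvmcache). The device names, VG name, and sizes here are assumptions for illustration; adjust them to your own setup, and note that writeback mode would be faster but risks data loss if the local SSD dies:

```shell
# Inside the guest: /dev/vdb is the RBD-backed volume, /dev/vdc is the
# LVM slice of local SSD/NVMe passed through from the hypervisor.
pvcreate /dev/vdb /dev/vdc
vgcreate vg_data /dev/vdb /dev/vdc

# Origin LV on the RBD-backed PV, cache pool on the SSD PV
lvcreate -n data -l 100%PVS vg_data /dev/vdb
lvcreate --type cache-pool -n ssd_cache -l 90%PVS vg_data /dev/vdc

# Attach the cache pool to the origin LV; writethrough keeps every
# write on the RBD volume, so losing the SSD loses no data
lvconvert --type cache --cachepool vg_data/ssd_cache \
          --cachemode writethrough vg_data/data
```

After this, the guest filesystem goes on /dev/vg_data/data and hot reads are served from the local SSD.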

-- 

Jason Dillaman 


----- Original Message -----
> From: "Daniel Niasoff" <daniel@xxxxxxxxxxxxxx>
> To: "Jason Dillaman" <dillaman@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Sent: Tuesday, March 15, 2016 9:32:50 PM
> Subject: RE:  Local SSD cache for ceph on each compute node.
> 
> Thanks.
> 
> Reassuring, but I could do with something today :)
> 
> -----Original Message-----
> From: Jason Dillaman [mailto:dillaman@xxxxxxxxxx]
> Sent: 16 March 2016 01:25
> To: Daniel Niasoff <daniel@xxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Local SSD cache for ceph on each compute node.
> 
> The good news is such a feature is in the early stage of design [1].
> Hopefully this is a feature that will land in the Kraken release timeframe.
> 
> [1]
> http://tracker.ceph.com/projects/ceph/wiki/Rbd_-_ordered_crash-consistent_write-back_caching_extension
> 
> --
> 
> Jason Dillaman
> 
> 
> ----- Original Message -----
> > From: "Daniel Niasoff" <daniel@xxxxxxxxxxxxxx>
> > To: ceph-users@xxxxxxxxxxxxxx
> > Sent: Tuesday, March 15, 2016 8:47:04 PM
> > Subject:  Local SSD cache for ceph on each compute node.
> > 
> > Hi,
> > 
> > Let me start. Ceph is amazing, no it really is!
> > 
> > But a hypervisor reading and writing all its data over the network
> > will add some latency to reads and writes.
> > 
> > So the hypervisor could do with a local cache, possibly SSD or even NVMe.
> > 
> > I spent a while looking into this, but it seems really strange that
> > so few people see the value of it.
> > 
> > Basically the cache would be used in two ways:
> > 
> > a) cache hot data
> > b) writeback cache for ceph writes
> > 
> > There is the RBD cache, but that is memory based rather than disk
> > based, and on a hypervisor memory is at a premium.
> > 
> > A simple solution would be to put a journal on each compute node and
> > have each hypervisor use its own journal. Would this work?
> > 
> > Something like this
> > http://sebastien-han.fr/images/ceph-cache-pool-compute-design.png
> > 
> > Can this be achieved?
> > 
> > A better explanation of what I am trying to achieve is here
> > 
> > http://opennebula.org/cached-ssd-storage-infrastructure-for-vms/
> > 
> > This talk, if it gets voted in, looks interesting -
> > https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/6827
> > 
> > Can anyone help?
> > 
> > Thanks
> > 
> > Daniel
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > 
> 