Re: Local SSD cache for ceph on each compute node.


 



>Indeed, well understood.
>
>As a shorter term workaround, if you have control over the VMs, you could always just slice out an LVM volume from local SSD/NVMe and pass it through to the guest.  Within the guest, use dm-cache (or similar) to add a cache front-end to your RBD volume.  

If you do this, you need to set up the cache as a read cache only (writethrough).
Caching writes would be dangerous: a hypervisor failure would mean losing the cache, which pretty much guarantees inconsistent data on the Ceph volume.
Live migration also becomes problematic compared to running everything from Ceph, since you would need to migrate the local storage as well.
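For reference, a writethrough read cache can be set up inside the guest with lvmcache. This is a minimal sketch, assuming /dev/vdb is the RBD-backed disk and /dev/vdc is the local SSD slice passed through from the hypervisor (device names and the VG/LV names are hypothetical); newer LVM (2.03+) supports --cachevol, older releases use --cachepool instead:

```shell
# Put both the RBD-backed disk and the local SSD slice in one VG
vgcreate vg_cached /dev/vdb /dev/vdc

# Origin LV on the RBD-backed disk, cache LV on the local SSD
lvcreate -n data  -l 100%PVS vg_cached /dev/vdb
lvcreate -n cache -l 100%PVS vg_cached /dev/vdc

# Attach the cache in writethrough mode: every write is persisted to
# the Ceph-backed origin before completion, so losing the SSD (or the
# hypervisor) cannot leave the RBD volume inconsistent
lvconvert --type cache --cachevol cache --cachemode writethrough vg_cached/data
```

With --cachemode writethrough the SSD only accelerates reads, which is exactly the constraint described above; writeback mode would cache writes and reintroduce the consistency risk.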

The question is whether adding more RAM (i.e. more read cache via the page cache) would not be more convenient and cheaper in the end,
considering the time required to set up and maintain the extra caching layer on each VM (unless you work for free ;-).
Also, reads from Ceph are pretty fast compared to the biggest bottleneck: (small) sync writes.
So it is debatable how much performance you would gain, except for some use cases with lots of reads on very large data sets that are also very latency sensitive.

Cheers,
Robert van Leeuwen

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


