Re: RBD+LVM -> iSCSI -> VMWare

We have over 150 VMs running in VMware, and we also have 2 PB of Ceph backing our filesystem. With our VMware storage aging and no longer providing the IOPS we need, we are considering (and hoping) to move that workload to Ceph. Ultimately, yes, we will move to KVM, but in the short term we probably need to stay on VMware.

On Dec 9, 2017 6:26 PM, "Donny Davis" <donny@xxxxxxxxxxxxxx> wrote:
Just curious, but why not use a hypervisor with native RBD support? Are there VMware-specific features you rely on?

On Fri, Dec 8, 2017 at 4:08 PM Brady Deetz <bdeetz@xxxxxxxxx> wrote:
I'm testing RBD as backing storage for VMware datastores. Currently I'm testing krbd+LVM exported through a tgt iSCSI target hosted on a hypervisor.
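
Roughly, the export on the gateway looks something like this with tgt (the IQN, target ID, and LV path below are illustrative placeholders, not my actual values):

    # create the iSCSI target (IQN and tid are examples)
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2017-12.com.example:vmware-ds1

    # attach the LV that sits on top of the mapped RBD as LUN 1
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        -b /dev/vg_ds1/lv_ds1

    # open initiator access (would be locked down to the local ESXi host in practice)
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL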

My Ceph cluster is HDD-backed.

To help with write latency, I added an SSD to my hypervisor and made it a writeback cache for the RBD via LVM (lvmcache). So far I've managed to smooth out my 4K write latency and have some pleasing results.
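
For anyone curious, the setup is along these lines (the pool/image name, VG/LV names, and SSD device are placeholders for illustration):

    # map the RBD image with the kernel client (pool/image are examples)
    rbd map rbd/vmware-ds1                      # shows up as /dev/rbd0

    # put the mapped RBD and the local SSD into one volume group
    pvcreate /dev/rbd0 /dev/sdb
    vgcreate vg_ds1 /dev/rbd0 /dev/sdb

    # carve the datastore LV from the RBD-backed PV only
    lvcreate -l 100%PVS -n lv_ds1 vg_ds1 /dev/rbd0

    # build a cache pool on the SSD and attach it in writeback mode
    lvcreate --type cache-pool -l 90%PVS -n lv_cache vg_ds1 /dev/sdb
    lvconvert --type cache --cachemode writeback \
        --cachepool vg_ds1/lv_cache vg_ds1/lv_ds1

The obvious caveat with writeback is that dirty blocks live only on that one local SSD until they are flushed back to the RBD, which is part of why I'm asking about other people's experience.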

Architecturally, my current plan is to deploy an iSCSI gateway on each hypervisor, with each gateway serving that hypervisor's own datastore.
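
On the ESXi side that would just be the software iSCSI adapter pointed at its local gateway; something like the following sketch, where the adapter name and gateway address are placeholders:

    # enable the software iSCSI initiator on the host
    esxcli iscsi software set --enabled=true

    # point dynamic discovery at this hypervisor's local gateway
    esxcli iscsi adapter discovery sendtarget add \
        --adapter=vmhba64 --address=192.168.10.5:3260

    # rescan so the new LUN shows up for datastore creation
    esxcli storage core adapter rescan --adapter=vmhba64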

Does anybody have any experience with this kind of configuration, especially with regard to LVM writeback caching combined with RBD?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
