Re: design guidance

> > Early usage will be CephFS, exported via NFS and mounted on ESXi 5.5
> > and 6.0 hosts (migrating from a VMware environment), later to
> > transition to qemu/kvm/libvirt using native RBD mapping. I tested
> > iSCSI using LIO and saw much worse performance with the first
> > cluster, so it seems this may be the better way, but I'm open to
> > other suggestions.
> >
> I've never seen any ultimate solution to providing HA iSCSI on top of Ceph,
> though other people here have made significant efforts.

In our tests our best results were with SCST, partly because it provided proper ALUA support at the time.  I ended up developing my own pacemaker cluster resource agents to manage the SCST orchestration and ALUA failover.  In our model a pacemaker cluster sits in front of Ceph as an RBD client and presents storage out to VMware (NFS) and to Solaris and Hyper-V (iSCSI LUNs).  We are also using CephFS over NFS, but performance has been poor, even when using it just for VMware templates.  We are on an earlier Jewel release, so it's possible later versions improve CephFS for that workload, but I have not had time to test it.
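For anyone heading down the same path, the gist of those custom resource agents is simple: pacemaker calls them with start/stop/monitor actions and they map or unmap the RBD image backing the exported LUN (the real agents also flip the SCST ALUA state on failover, which I've left out here).  Below is a minimal, illustrative sketch in Python, not our production code; the pool/image/user defaults and the OCF_RESKEY_* parameter names are made up for the example, and a real agent would also need meta-data output, validation and timeout handling.

  #!/usr/bin/env python
  # Illustrative OCF-style resource agent sketch: maps/unmaps an RBD image
  # so a higher-level target (SCST, NFS server, etc.) can export it.
  # Not production code; parameter names/defaults are examples only.

  import os
  import subprocess
  import sys

  OCF_SUCCESS = 0        # standard OCF exit codes
  OCF_ERR_GENERIC = 1
  OCF_NOT_RUNNING = 7

  # In a real agent these come from OCF_RESKEY_* environment variables
  # set by pacemaker; the defaults here are purely illustrative.
  POOL = os.environ.get("OCF_RESKEY_pool", "rbd")
  IMAGE = os.environ.get("OCF_RESKEY_image", "lun0")
  USER = os.environ.get("OCF_RESKEY_user", "admin")
  DEV = "/dev/rbd/%s/%s" % (POOL, IMAGE)   # udev-created symlink

  def is_mapped():
      return os.path.exists(DEV)

  def start():
      if is_mapped():
          return OCF_SUCCESS
      # A real agent would also wait for the /dev/rbd symlink to appear.
      rc = subprocess.call(["rbd", "map", "%s/%s" % (POOL, IMAGE),
                            "--id", USER])
      return OCF_SUCCESS if rc == 0 else OCF_ERR_GENERIC

  def stop():
      if not is_mapped():
          return OCF_SUCCESS
      rc = subprocess.call(["rbd", "unmap", DEV])
      return OCF_SUCCESS if rc == 0 else OCF_ERR_GENERIC

  def monitor():
      return OCF_SUCCESS if is_mapped() else OCF_NOT_RUNNING

  if __name__ == "__main__":
      action = sys.argv[1] if len(sys.argv) > 1 else "monitor"
      sys.exit({"start": start, "stop": stop,
                "monitor": monitor}.get(action, monitor)())

A resource like this would typically be grouped with the iSCSI target resources and a floating IP (e.g. ocf:heartbeat:IPaddr2) so that the device, the target and the client-facing address all move together on failover.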

We have been running a small production/POC environment on that setup for over 18 months, and in the last 6 months have gone live with a much larger deployment based on the same model.  It's not without its issues, but most of those come down to a lack of test resources to shake out some of the client compatibility and failover shortfalls we have.
