Hi Alex,

Just out of curiosity, what kind of backstore are you using within Storcium: vdisk_fileio or vdisk_blockio? I see your agents can handle both:
http://www.spinics.net/lists/ceph-users/msg27817.html
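(For context, the two SCST handlers differ mainly in how they reach the backing store: vdisk_blockio issues direct block I/O against a block device, e.g. a kernel-mapped /dev/rbd* device, while vdisk_fileio goes through a file on a mounted filesystem and hence the page cache. A rough, illustrative scst.conf sketch, with hypothetical device and target names, might look like this:)

    HANDLER vdisk_fileio {
            DEVICE disk_file01 {
                    # file on a filesystem on top of a mapped RBD image;
                    # I/O goes through the page cache
                    filename /srv/scst/disk_file01.img
            }
    }

    HANDLER vdisk_blockio {
            DEVICE disk_blk01 {
                    # direct block I/O against the kernel RBD device
                    filename /dev/rbd/rbd/disk_blk01
            }
    }

    TARGET_DRIVER iscsi {
            enabled 1
            TARGET iqn.2016-10.example.com:storcium.tgt01 {
                    enabled 1
                    LUN 0 disk_blk01
            }
    }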
Regards,
Frédéric.

On 06/10/2016 at 16:01, Alex Gorbachev wrote:
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry <pmcgarry@xxxxxxxxxx> wrote:

Hey guys,

Starting to buckle down a bit in looking at how we can better set up Ceph for VMWare integration, but I need a little info/help from you folks. If you are currently using Ceph+VMWare, or are exploring the option, I'd like some simple info from you:

1) Company
2) Current deployment size
3) Expected deployment growth
4) Integration method (or desired method), e.g. iSCSI, native, etc.

Just casting the net so we know who is interested and might want to help us shape and/or test things in the future if we can make it better.

Thanks.

Hi Patrick,

We have Storcium certified with VMWare, and we use it ourselves:

- Ceph Hammer, latest release
- SCST with redundant, Pacemaker-based delivery front ends (our agents are published on GitHub)
- EnhanceIO for read caching at the delivery layer
- NFS v3, iSCSI and FC delivery

The deployment we run ourselves is 700 TB raw. Challenges are as others have described, but HA and multi-host access work fine courtesy of SCST. Write amplification is a challenge on spinning disks. Happy to share more.

Alex

--
Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
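(For readers following along, here is a rough sketch of the manual steps behind the iSCSI delivery path Alex describes above. Pool, image and target names are hypothetical, and in Storcium the published Pacemaker agents drive this rather than hand-run commands:)

    # create and map an RBD image on the delivery front end
    # (Hammer-era rbd sizes are given in MB, so this is ~100 GB)
    rbd create vmware-pool/datastore01 --size 102400
    rbd map vmware-pool/datastore01
    # /dev/rbd/vmware-pool/datastore01 can then be exported by SCST as a
    # vdisk device (see the scst.conf sketch earlier in the thread) and
    # the configuration applied with:
    scstadmin -config /etc/scst.conf
    # Pacemaker fails the target over between front ends for HA; EnhanceIO
    # can be layered on the mapped device for SSD read caching at this layer.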