Re: Ceph + VMWare

On Tuesday, October 18, 2016, Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote:

Hi Alex,

Just out of curiosity, what kind of backstore are you using within Storcium? vdisk_fileio or vdisk_blockio?

I see your agents can handle both: http://www.spinics.net/lists/ceph-users/msg27817.html

Hi Frédéric,

We use both, and NFS as well, which has been performing quite well.  vdisk_fileio is a bit dangerous in write cache mode, since acknowledged writes can sit in the page cache on the target node and be lost if that node fails.  Also, for some reason, a 16 MB RBD object size does better with VMWare.
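
For what it's worth, the larger object size is set at image creation time; roughly like the sketch below (pool name, image name, and size are just placeholders):

    # 16 MB objects = order 24 (2^24 bytes); --size is in MB on Hammer
    rbd create vmware_pool/lun01 --size 2097152 --image-format 2 --order 24
    # confirm the resulting object size
    rbd info vmware_pool/lun01

If I remember correctly, newer rbd releases also accept --object-size directly.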

Storcium gives you a choice of backstore for each LUN.  The challenge has been figuring out the optimal configuration for highly varied use cases.  I see better results with NVMe journals and write-combining HBAs, e.g. Areca.
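
To illustrate the per-LUN choice, a rough /etc/scst.conf sketch is below.  The device paths, names, and IQN are made up, and attribute support can vary with the SCST version, so treat it as an outline rather than a working config:

    HANDLER vdisk_blockio {
            DEVICE lun_blockio {
                    filename /dev/rbd/vmware_pool/lun01
            }
    }

    HANDLER vdisk_fileio {
            DEVICE lun_fileio {
                    filename /mnt/rbd/lun02.img
                    nv_cache 0        # keep the volatile write cache off
                    write_through 1   # safer, at some cost in write latency
            }
    }

    TARGET_DRIVER iscsi {
            enabled 1
            TARGET iqn.2016-10.example.com:storcium {
                    enabled 1
                    LUN 0 lun_blockio
                    LUN 1 lun_fileio
            }
    }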

Regards,
Alex

Regards,

Frédéric.


On 06/10/2016 at 16:01, Alex Gorbachev wrote:
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry <pmcgarry@xxxxxxxxxx> wrote:
Hey guys,

Starting to buckle down a bit and look at how we can better set up
Ceph for VMWare integration, but I need a little info/help from you
folks.

If you currently are using Ceph+VMWare, or are exploring the option,
I'd like some simple info from you:

1) Company
2) Current deployment size
3) Expected deployment growth
4) Integration method (or desired method), e.g. iSCSI, native, etc.

Just casting the net so we know who is interested and might want to
help us shape and/or test things in the future if we can make it
better. Thanks.

Hi Patrick,

We have Storcium certified with VMWare, and we use it ourselves:

Latest Ceph Hammer release

Redundant SCST delivery front ends managed by Pacemaker; our resource
agents are published on GitHub

EnhanceIO for read caching at the delivery layer (rough setup sketch below)

NFS v3, iSCSI, and FC delivery
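
For the read cache, the EnhanceIO setup looks roughly like this.  Device and cache names are examples, and the exact eio_cli subcommands and flags may differ between builds, so check the tool's help output:

    # read-only SSD cache in front of an RBD-backed device
    eio_cli create -d /dev/rbd/vmware_pool/lun01 -s /dev/nvme0n1p1 \
            -p lru -m ro -c eio_lun01
    # inspect the cache
    eio_cli info -c eio_lun01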

The deployment we run ourselves is 700 TB raw.

Challenges are as others have described, but HA and multi-host access
work fine courtesy of SCST.  Write amplification is a challenge on
spinning disks.

Happy to share more.

Alex

--

Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com
@scuttlemonkey || @ceph



--
Alex Gorbachev
Storcium

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
