Just a note that we use SUSE for our Ceph/VMware system.
There are the general Ceph docs for VMware/iSCSI, and there are the SUSE docs.
They differ.
I'll describe what we've set up and have been running for three years.
We had a bunch of pools, but consolidated them down to just one. This allows for more PGs per pool.
Within the pool we have 4 RBD images (you can have more or fewer), each mapped to a VMware datastore.
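For reference, a single-pool layout like that can be set up roughly like this. This is only a sketch: the pool name, PG count, and image names/sizes are placeholders, and on SUSE you may be driving this through DeepSea/ceph-salt rather than the raw CLI:

```shell
# One replicated pool for all VMware-backing RBD images.
# 256 PGs is a placeholder -- size it for your OSD count.
ceph osd pool create vmware 256 256 replicated
ceph osd pool application enable vmware rbd

# Four RBD images, one per VMware datastore (sizes are examples)
for i in 1 2 3 4; do
    rbd create vmware/datastore$i --size 10T
done
```

Each image is then exported as a LUN through the iSCSI gateways and formatted as a VMFS datastore on the ESXi side.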
We have two iSCSI gateways and will add a third.
For now we have 3 VMware hosts,
two 40 Gb switches, 2 iSCSI gateways, and 4 LUNs = 16 paths (each LUN has 4 paths).
We can take a switch or a gateway offline; everything is multipathed, so the datastores stay up.
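The path arithmetic above is just multiplicative: every LUN is reachable through every gateway over every switch fabric. A quick sketch (plain Python, function names made up):

```python
def paths_per_lun(gateways: int, fabrics: int) -> int:
    """Each LUN gets one path per (gateway, switch fabric) pair."""
    return gateways * fabrics

def total_paths(luns: int, gateways: int, fabrics: int) -> int:
    """Total iSCSI paths the ESXi hosts see across all LUNs."""
    return luns * paths_per_lun(gateways, fabrics)

# 2 gateways x 2 switches = 4 paths per LUN; 4 LUNs = 16 paths total
print(paths_per_lun(gateways=2, fabrics=2))            # 4
print(total_paths(luns=4, gateways=2, fabrics=2))      # 16
```

This is why adding a third gateway bumps each LUN from 4 paths to 6, and the total from 16 to 24.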
We have about 200 VMs, ranging from 12 GB to 20 TB each.
>>> Philip Brown <pbrown@xxxxxxxxxx> 12/5/2019 11:24 AM >>>
Okay then.. how DO you load balance across ceph iscsi gateways?
You said "check the docs", but as far as I can tell, that info isn't in there.
Or at least not in the logical place, such as the iscsi gateway setup pages, under
https://docs.ceph.com/docs/master/rbd/iscsi-targets/

----- Original Message -----
From: "Paul Emmerich" <paul.emmerich@xxxxxxxx>
To: "Philip Brown" <pbrown@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, December 5, 2019 11:08:23 AM
Subject: Re: best pool usage for vmware backing

No, you obviously don't need multiple pools for load balancing.

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com