Re: best pool usage for vmware backing

Seen from vSphere, a tcmu-runner based iGW behaves as a generic iSCSI
target, with dataflow on 1 AO (Active-Optimized) path and n ANO
(Active-Non-Optimized) paths acting as "hot-standby" for each LUN.

Load-balancing is therefore limited to being between images, if you
use a tcmu-runner based deployment.
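
If you want to spread load across the gateways, the practical
approach is thus to split capacity into several images/LUNs so the AO
assignments get distributed. A minimal sketch with gwcli (pool, image
names and sizes are placeholders, not a recommendation):

  # on one of the gateways
  gwcli
  /> cd /disks
  /disks> create pool=rbd image=vmware_ds01 size=4096G
  /disks> create pool=rbd image=vmware_ds02 size=4096G

Each image gets its AO gateway assigned at creation time (see below),
so a handful of images ends up spread across the gateways.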

Each LUN will have one AO path and n non-optimized paths.
Under normal operation, all data for a given LUN goes through the AO
path, with the rest acting as "hot-standby" in case the gateway
holding the AO assignment for that LUN goes down.
LUNs are, AFAIK, pinned to a given iGW as AO when the image is
created; the iGW to pin an image to is chosen by comparing the total
number of images already pinned to each gateway and picking the one
with the fewest.
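
You can verify which path ended up as AO for a given LUN from the
ESXi side, roughly like this (the naa identifier is a placeholder),
and IIRC "gwcli ls" on a gateway shows each disk's current owner:

  esxcli storage nmp path list -d naa.6001405xxxxxxxxxxxxxxxx
  # per path, look at "Group State:"; one path should report
  # "active" (the AO), the rest "active unoptimized" (the ANOs)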

When you design the iGW deployment, you should also bear in mind
that:
#1 There is a limit on the supported number of iGWs if you use a
commercially supported deployment of Ceph.
#2 There is a performance limit (IOPS and throughput). If you expose
SSD-based pools, the bottleneck will likely be the LUN itself on the
iGW, assuming an LACP-based network setup.

For vSphere datastore design, my general recommendation is to make
your datastores largish (assuming VMFS6 on >=6.5u3), but not so large
that you can't pull one out (put the datastore into maintenance mode,
in a datastore cluster). This gives flexibility for unforeseen issues
and VMFS upgrades alike.
Also ensure that your SATP, PSP and recovery timeout are set in
accordance with the docs, as sketched below.
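
From memory, the relevant ESXi-side settings are roughly the
following; check the current docs for the exact values before
applying anything (vmhba64 is just an example adapter name):

  # claim rule so TCMU-backed LUNs get ALUA handling and the MRU
  # path selection policy
  esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V LIO-ORG \
      -M TCMU-device -c tpgs_on -P VMW_PSP_MRU -e "ceph iSCSI ALUA"
  # lower the iSCSI recovery timeout so path failover kicks in
  # before the guests start to suffer
  esxcli iscsi adapter param set -A vmhba64 -k RecoveryTimeout -v 25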

Regarding Ceph pools, I generally make one for each device class that
provides RBD, but this depends on your requirements for resilience
etc. at the Ceph level.
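
A minimal sketch of what that looks like (rule/pool names and PG
counts are examples only):

  # one CRUSH rule per device class, then a pool per rule
  ceph osd crush rule create-replicated rbd-ssd default host ssd
  ceph osd crush rule create-replicated rbd-hdd default host hdd
  ceph osd pool create rbd-ssd 128 128 replicated rbd-ssd
  ceph osd pool create rbd-hdd 128 128 replicated rbd-hdd
  ceph osd pool application enable rbd-ssd rbd
  ceph osd pool application enable rbd-hdd rbd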

Remember that _general_ recommendations seldom fit any specific
deployment well.


Regards
Heðin Ejdesgaard
+298 77 11 10

On Thu, 2019-12-05 at 11:24 -0800, Philip Brown wrote:
> Okay then.. how DO you load balance across ceph iscsi gateways?
> You said "check the docs", but as far as I can tell, that info isnt
> in there.
> Or at least not in the logical place, such as the iscsi gateway setup
> pages, under 
> https://docs.ceph.com/docs/master/rbd/iscsi-targets/
> 
> 
> ----- Original Message -----
> From: "Paul Emmerich" <paul.emmerich@xxxxxxxx>
> To: "Philip Brown" <pbrown@xxxxxxxxxx>
> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Thursday, December 5, 2019 11:08:23 AM
> Subject: Re:  best pool usage for vmware backing
> 
> No, you obviously don't need multiple pools for load balancing.