Re: best pool usage for vmware backing

No, you obviously don't need multiple pools for load balancing.
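
With one pool, the balancing comes from having multiple RBD images rather than multiple pools: ceph-iscsi makes exactly one gateway the active (owning) path for each exported image, so creating at least one image per gateway spreads the active LUNs across the gateways. A rough sketch in gwcli, with made-up pool and image names (exact syntax can differ between ceph-iscsi versions):

    /> cd /disks
    /disks> create pool=rbd image=vmware-lun-01 size=500G
    /disks> create pool=rbd image=vmware-lun-02 size=500G

With two gateways and two or more images in the same pool, each gateway ends up actively serving some of the LUNs.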
--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Dec 5, 2019 at 6:46 PM Philip Brown <pbrown@xxxxxxxxxx> wrote:
Hmm...

I reread the docs in and around
https://docs.ceph.com/docs/master/rbd/iscsi-targets/

and it mentions iSCSI multipathing through multiple Ceph storage gateways... but it doesn't seem to say anything about needing multiple POOLS.

When you wrote,
" 1 pool per storage class (e.g., SSD and HDD), at least one RBD per
gateway per pool for load balancing (failover-only load balancing
policy)."


you seemed to imply that we need to set up multiple pools to get "load balancing", but some information is missing.

Let me see if I can infer the missing details from your original post.

Perhaps you are suggesting that we use storage pools to emulate the old dual-controller hardware RAID array best practice of assigning half the LUNs to one controller and half to the other, for "load balancing".
Except in this case, we would tell half our VMs to use pool1 and half our VMs to use pool2, and then somehow (?) assign a preference for pool1 to use Ceph gateway1, and for pool2 to use Ceph gateway2.

Is that what you are saying?

It would make sense...
except that I don't see anything in the docs that says how to make such an association between a pool and a preferred iSCSI gateway.

----- Original Message -----
From: "Paul Emmerich" <paul.emmerich@xxxxxxxx>
To: "Philip Brown" <pbrown@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, December 5, 2019 8:16:09 AM
Subject: Re: best pool usage for vmware backing

ceph-iscsi doesn't support round-robin multipathing, so you need at least
one LUN per gateway to utilize all of them.
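
In practice that just means exporting several RBD images as separate LUNs and letting ceph-iscsi spread image ownership across the gateways; as far as I understand, VMware then sees one active/optimized path per LUN, on different gateways for different LUNs. A rough gwcli sketch with hypothetical target and initiator IQNs (syntax may vary slightly by version):

    /> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts
    .../hosts> create iqn.1998-01.com.vmware:esxi-host1
    .../esxi-host1> disk add rbd/vmware-lun-01
    .../esxi-host1> disk add rbd/vmware-lun-02

Each disk added to the initiator shows up as its own LUN on the ESXi side.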

Please see https://docs.ceph.com for basics about RBDs and pools.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Dec 5, 2019 at 5:04 PM Philip Brown <pbrown@xxxxxxxxxx> wrote:

> Interesting.
> I thought when you defined a pool, and then defined an RBD within that
> pool.. that any auto-replication stayed within that pool?
> So what kind of "load balancing" do you mean?
> I'm confused.
>
>
>
>
> ----- Original Message -----
> From: "Paul Emmerich" <paul.emmerich@xxxxxxxx>
> To: "Philip Brown" <pbrown@xxxxxxxxxx>
> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Wednesday, December 4, 2019 12:05:47 PM
> Subject: Re: best pool usage for vmware backing
>
> 1 pool per storage class (e.g., SSD and HDD), at least one RBD per
> gateway per pool for load balancing (failover-only load balancing
> policy).
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Wed, Dec 4, 2019 at 8:51 PM Philip Brown <pbrown@xxxxxxxxxx> wrote:
> >
> > Let's say that you had roughly 60 OSDs that you wanted to use to provide
> storage for VMware, through RBDs served over iSCSI.
> >
> > Target VM types are completely mixed. Web front ends, app tier.. a few
> databases.. and the kitchen sink.
> > Estimated number of VMs: 50-200
> >
> > How would people recommend the storage be divided up?
> >
> > The big questions are:
> >
> > * 1 pool, or multiple, and why
> >
> > * many RBDs, few RBDs, or a single RBD per pool? Why?
> >
> >
> >
> >
> >
> >
> >
> > --
> > Philip Brown| Sr. Linux System Administrator | Medata, Inc.
> > 5 Peters Canyon Rd Suite 250
> > Irvine CA 92606
> > Office 714.918.1310| Fax 714.918.1325
> > pbrown@xxxxxxxxxx| www.medata.com
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
