Re: best pool usage for vmware backing

ceph-iscsi doesn't support round-robin multi-pathing, so you need at least one LUN per gateway to utilize all of your gateways.

Please see https://docs.ceph.com for basics about RBDs and pools.
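
For illustration only, here's a minimal sketch of the "one image per gateway" layout using the rados/rbd Python bindings. The pool name, gateway names and image size are made-up placeholders, and the images would still have to be exported as LUNs through ceph-iscsi (gwcli or the dashboard):

    import rados
    import rbd

    GATEWAYS = ["igw1", "igw2"]        # hypothetical gateway hostnames
    POOL = "vmware"                    # hypothetical pool name
    SIZE = 4 * 1024 ** 4               # 4 TiB per image, example only

    # connect with the local ceph.conf / default keyring
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        try:
            for gw in GATEWAYS:
                # one image per gateway; each image becomes its own LUN,
                # with its active/optimized path on that gateway
                rbd.RBD().create(ioctx, f"vmware-{gw}", SIZE)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()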

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Dec 5, 2019 at 5:04 PM Philip Brown <pbrown@xxxxxxxxxx> wrote:
Interesting.
I thought that when you define a pool, and then define an RBD within that pool, any auto-replication stays within that pool?
So what kind of "load balancing" do you mean?
I'm confused.




----- Original Message -----
From: "Paul Emmerich" <paul.emmerich@xxxxxxxx>
To: "Philip Brown" <pbrown@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, December 4, 2019 12:05:47 PM
Subject: Re: best pool usage for vmware backing

One pool per storage class (e.g., SSD and HDD), and at least one RBD per
gateway per pool for load balancing (with a failover-only load-balancing
policy).
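
As a rough sketch of that layout (the pool and gateway names below are examples, and assigning a device-class CRUSH rule to each pool and the ceph-iscsi export itself are not shown), something like this with the rados/rbd Python bindings:

    import rados
    import rbd

    POOLS = ["vmware-ssd", "vmware-hdd"]   # one pool per storage class (example names)
    GATEWAYS = ["igw1", "igw2", "igw3"]    # hypothetical iSCSI gateway names
    SIZE = 2 * 1024 ** 4                   # 2 TiB per image, example only

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        for pool in POOLS:
            if not cluster.pool_exists(pool):
                cluster.create_pool(pool)
            ioctx = cluster.open_ioctx(pool)
            try:
                existing = rbd.RBD().list(ioctx)
                for gw in GATEWAYS:
                    image = f"{pool}-{gw}"
                    if image not in existing:
                        # one image per gateway per pool; each image is one LUN
                        rbd.RBD().create(ioctx, image, SIZE)
            finally:
                ioctx.close()
    finally:
        cluster.shutdown()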

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Dec 4, 2019 at 8:51 PM Philip Brown <pbrown@xxxxxxxxxx> wrote:
>
> Let's say that you had roughly 60 OSDs that you wanted to use to provide storage for VMware, through RBDs served over iSCSI.
>
> Target VM types are completely mixed: web front ends, app tier, a few databases, and the kitchen sink.
> Estimated number of VMs: 50-200
>
> How would people recommend the storage be divided up?
>
> The big questions are:
>
> * 1 pool, or multiple, and why?
>
> * Many RBDs, few RBDs, or a single RBD per pool? Why?
>
> --
> Philip Brown | Sr. Linux System Administrator | Medata, Inc.
> 5 Peters Canyon Rd Suite 250
> Irvine CA 92606
> Office 714.918.1310 | Fax 714.918.1325
> pbrown@xxxxxxxxxx | www.medata.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
