> I don't recall. It might be that I started with pg_num=2.
> Trying to get an even distribution of PGs across my 2 OSDs now. I tried different numbers (keeping pgp_num the same as pg_num :) but I keep getting one OSD with more PGs than the other. Since this is just for learning/testing I would like to somehow make the distribution even. What's the easiest/quickest way to accomplish that? (if possible)
Distribution is entirely pseudorandom, so there's no command to force-balance it or anything like that. If you set it to 200 PGs with pgp_num 200 I doubt you'll notice a big difference in the distribution though -- especially since with only two OSDs they'll both get a copy of all the data anyway!
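If you want to try it, something like this should do it (untested sketch; substitute your pool's name for <pool>, and note that pg_num has to be raised before pgp_num, since pgp_num can't exceed pg_num):

    ceph osd pool set <pool> pg_num 200
    ceph osd pool set <pool> pgp_num 200

    # then eyeball which OSDs each PG maps to
    ceph pg dump pgs_brief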
> Also, is there a command to show the space used on each OSD by each pool? I found how to see space used by pool, or by OSD, but no easy way to combine the two views.
Nope, sorry. That kind of detail is a bit counter to the point of using Ceph!
-Greg
> Have a nice day,
> Dani
> Date: Wed, 28 Aug 2013 13:28:29 -0700
> Subject: Re: [ceph-users] Reading from replica
> From: greg@xxxxxxxxxxx
> To: daniel_pol@xxxxxxxxxxx
> CC: ceph-users@xxxxxxxxxxxxxx
>
> On Wed, Aug 28, 2013 at 1:22 PM, daniel pol <daniel_pol@xxxxxxxxxxx> wrote:
> > Sorry, my bad. Only my second post and forgot the "reply all"
> >
> > Thanks for the info. I'm looking at the impact of pg number on performance.
> > Just trying to learn more about how Ceph works.
> > I didn't set pgp_num. It came by default with 2 in my case.
>
> Did you start the pool with 2 PGs? If not, that's...odd. You can
> update it with "ceph osd pool set" (see
> http://ceph.com/docs/master/rados/operations/control/).
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
--
Software Engineer #42 @ http://inktank.com | http://ceph.com