Re: Reading from replica

I don't recall. It might be that I started with PG=2.
Trying to get an even distribution of PGs across my 2 OSDs now. I've tried different numbers (keeping pgp_num the same as pg_num :) but I keep getting one OSD with more PGs than the other. Since this is just for learning/testing I'd like to make the distribution even somehow. What's the easiest/quickest way to accomplish that (if it's possible)?
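For reference, this is the sort of thing I've been running to change the numbers (the pool name "data" here is just an example):

    # bump the PG count, then keep pgp_num in sync with pg_num
    ceph osd pool set data pg_num 128
    ceph osd pool set data pgp_num 128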
 
Also, is there a command to show the space used on each OSD by each pool? I found commands that show space by pool, or by OSD, but no easy way to combine the two views.
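In case it helps, these are the two views I've found so far:

    rados df            # usage broken down by pool
    ceph pg dump osds   # per-OSD stats, including space used

but neither one breaks down the per-pool usage by OSD.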
 
Have a nice day,
Dani
 
> Date: Wed, 28 Aug 2013 13:28:29 -0700
> Subject: Re: Reading from replica
> From: greg@xxxxxxxxxxx
> To: daniel_pol@xxxxxxxxxxx
> CC: ceph-users@xxxxxxxxxxxxxx
>
> On Wed, Aug 28, 2013 at 1:22 PM, daniel pol <daniel_pol@xxxxxxxxxxx> wrote:
> > Sorry, my bad. Only my second post and forgot the "reply all"
> >
> > Thanks for the info. I'm looking at the impact of pg number on performance.
> > Just trying to learn more about how Ceph works.
> > I didn't set pgp_num. It came by default with 2 in my case.
>
> Did you start the pool with 2 PGs? If not, that's...odd. You can
> update it with "ceph osd pool set" (see
> http://ceph.com/docs/master/rados/operations/control/).
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
