Re: Heavy speed difference between rbd and custom pool

On Tue, 19 Jun 2012, Stefan Priebe wrote:
> On 19.06.2012 at 17:42, Sage Weil <sage@xxxxxxxxxxx> wrote:
> >> 
> >> But this number 2176 of PGs were set while doing mkcephfs - how is it
> >> calculated?
> > 
> >    num_pgs = num_osds << osd_pg_bits
> > 
> > which is configurable via --osd-pg-bits N or ceph.conf (at mkcephfs time).  
> > The default is 6.
>
> What happens if I add more osds later?

Currently, nothing.  The existing PGs are spread out among a larger number 
of OSDs.  This is partly why the default shoots a bit high.
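For illustration, here is a minimal sketch of the sizing formula quoted
above (num_pgs = num_osds << osd_pg_bits, with osd_pg_bits defaulting to
6, i.e. 64 PGs per OSD); the function name is just for this example:

```python
# Sketch of the default PG-count calculation done at mkcephfs time.
# osd_pg_bits defaults to 6, so each OSD gets 2**6 = 64 PGs.
def default_num_pgs(num_osds, osd_pg_bits=6):
    return num_osds << osd_pg_bits

# The 2176 PGs mentioned earlier in the thread correspond to a
# cluster created with 34 OSDs: 34 << 6 == 34 * 64 == 2176.
print(default_num_pgs(34))
```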

One of the upcoming items on the todo list is to finish PG 
splitting/merging, which will allow a pool to be resharded into more or 
fewer PGs so that the data distribution can be adjusted as the cluster 
grows or shrinks.

sage

