Re: PG distribution scattered

On Thu, 10 Oct 2013 03:57:17 -0700 (PDT), Sage Weil wrote:
> […]
I suspect there are a few things going on.

First, the new 'hashpspool' pool flag is not the default (yet), but it makes
new pools hash independently so they don't line up on top of old pools and
amplify any imbalance. The ability to add the flag to an existing pool hasn't
been merged yet, but new pools will get it if you put

	osd pool default flag hashpspool = true

in your [mon] section and restart the mons.
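
To double-check that a newly created pool actually picked the flag up,
something like this should work (the pool name 'testpool' and the PG count
are just placeholders):

	ceph osd pool create testpool 128
	ceph osd dump | grep testpool

The matching pool line in the dump output should list 'hashpspool' among its
flags.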

There is also a command, 'reweight-by-utilization', that will make minor
adjustments to the (post-CRUSH) weights to correct for the inevitable
statistical variation.  Try running

	ceph osd reweight-by-utilization 110

and it will adjust the weight of any OSD that is more than 10% above the mean
utilization.

Also note that the utilization numbers will be a bit noisy until there are a
lot of objects in the system; the reweight is based on bytes used, not on
PGs, so don't run it until you have written a fair bit of data to Ceph.
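
A minimal way to try this out and see what it changed (110 is the same
example threshold as above):

	ceph osd tree                         # note the REWEIGHT column
	ceph osd reweight-by-utilization 110
	ceph osd tree                         # overfull OSDs should now show a reweight below 1

If the result isn't what you wanted, an individual override can be reset
with 'ceph osd reweight <osd-id> 1'.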

sage


Thank you for that quick answer!

The way I read it, the 'hashpspool' flag compensates for the problem if you have multiple pools. This will probably help in many cases, but not for my setup: I only have one large pool, so it doesn't really solve anything for me. Reweighting the OSDs seems to be more of a workaround than a fix, and I wouldn't want to rely on it in a production environment…


Coming back to the distribution of the PGs:
What distribution is to be expected from rjenkins? I see up to 123 PGs per OSD, where the average is 100 PGs per OSD. Can I use different parameters when creating the pool to get better results? Is there a different algorithm, other than rjenkins, that can be considered stable?
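
For a rough idea of what even a perfectly uniform placement would give, here
is a small balls-into-bins sketch in Python (illustrative only, not the
rjenkins code path, and the OSD count is a made-up example):

	# Drop PGs onto OSDs uniformly at random and report the spread.
	# Stand-in for an ideal hash; the numbers below are hypothetical.
	import random
	from collections import Counter

	osds = 72        # hypothetical cluster size
	avg_pgs = 100    # the average reported above

	counts = Counter(random.randrange(osds) for _ in range(osds * avg_pgs))
	print("min %d, max %d" % (min(counts.values()), max(counts.values())))

With these numbers the maximum typically comes out around 120-130, so a peak
of 123 over a mean of 100 is roughly what plain statistical variation
predicts even for an ideal hash.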

Thank you very much for your help!

Niklas



