Fun probably useless QMC PG distribution simulation

Hi All,

In my spare time I've started playing around with an idea I've been kicking around since the Inktank days: I wanted to see what would happen if I used a quasi-Monte Carlo method, like a Halton sequence, for distributing PGs.

The current toy code is here:

https://github.com/markhpc/pghalton

So the good news is that, as expected, the distribution quality is fantastic, even at low PG counts. Remapping is inexpensive so long as the bucket count stays near what was specified in the original mapping, but every bucket removal (or reinsertion) increases the remapping cost by 1/&lt;bucket count&gt;. I.e., if 70 of 100 OSDs are up (30 out) and 1 comes back up, you see ~30% data movement; in fact it's the same cost as if all 30 came back up. Adding new buckets is also going to be difficult, probably requiring a doubling of the bucket count and then marking some buckets out to avoid remapping the entire sequence.
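To make the idea concrete, here's a minimal Python sketch (not the pghalton code itself; the probe rule for out buckets and the index stride are my own guesses) showing the two properties above: near-uniform distribution at a modest PG count, and a removal cost of roughly 1/&lt;bucket count&gt;.

```python
from collections import Counter

def halton(i, base=2):
    """i-th point (i >= 1) of the Halton sequence in the given base, in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

N_OSDS, N_PGS = 100, 10000

def place(pg, up):
    """Map a PG into [0, 1) and pick the bucket owning that equal-width slice.

    If the slice's bucket is out, keep drawing further (hypothetical) probe
    points from the sequence until an up bucket is hit.
    """
    idx = pg + 1
    while True:
        bucket = int(halton(idx) * N_OSDS)
        if up[bucket]:
            return bucket
        idx += N_PGS  # jump to an unrelated sequence element (an assumed stride)

up = [True] * N_OSDS
before = [place(pg, up) for pg in range(N_PGS)]

# Distribution quality: every bucket gets close to N_PGS / N_OSDS = 100 PGs.
counts = Counter(before)
print(min(counts.values()), max(counts.values()))

# Removal cost: marking one bucket out only moves that bucket's ~1/N share,
# since every other PG's first probe is unchanged.
up[37] = False
after = [place(pg, up) for pg in range(N_PGS)]
moved = sum(a != b for a, b in zip(before, after))
print(moved / N_PGS)  # roughly 1 / N_OSDS
```

The reinsertion side of the cost (and its accumulation as more buckets go out) depends on the probe rule, so this sketch only demonstrates the removal direction.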

I think it would be fairly easy to re-partition the space in this approach to allow for arbitrary weighting, and you could probably do something vaguely CRUSH-like with hierarchical placement. The data movement problem is the big issue. I suspect you could do some kind of fancy tree structure to reduce the remapping cost, but I don't think it would ever be as good as CRUSH.
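The weighted re-partitioning part really is a few lines: instead of equal-width slices, cut [0, 1) into intervals proportional to each bucket's weight and bisect into them. This is a hedged sketch of that repartitioning only (no hierarchy, no out handling), with the Halton helper repeated so it runs standalone.

```python
import bisect
from collections import Counter
from itertools import accumulate

def halton(i, base=2):
    """i-th point (i >= 1) of the Halton sequence in the given base, in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def weighted_place(pg, weights):
    """Map a PG into [0, 1) and pick the bucket whose weighted interval it lands in."""
    total = float(sum(weights))
    cuts = list(accumulate(w / total for w in weights))  # interval right edges
    return bisect.bisect_right(cuts, halton(pg + 1))

weights = [1, 2, 1]  # bucket 1 should receive about half of the PGs
counts = Counter(weighted_place(pg, weights) for pg in range(4000))
print(counts)  # roughly {0: 1000, 1: 2000, 2: 1000}
```

The low discrepancy of the sequence means the per-bucket counts track the weights closely even at this small PG count, which is the same property that makes the unweighted distribution so even.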

Anyway, I thought people might be interested in playing with it, and maybe it will get someone's noodle going to think up other exotic ideas. :)

Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


