jj's "improved" ceph balancer

Hi!

I've been working on this for quite some time now and I think it's ready for some broader testing and feedback.

https://github.com/TheJJ/ceph-balancer

It's an alternative, standalone balancer implementation that optimizes for equal OSD storage utilization and balanced PG placement across all pools.

It doesn't change your cluster in any way; it just prints the commands you can run to apply the proposed PG movements.
Please play around with it :)
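
For illustration, the printed commands are plain `ceph osd pg-upmap-items` invocations, one per PG movement, roughly like this (the PG and OSD ids here are made up):

    ceph osd pg-upmap-items 4.2a 231 107
    ceph osd pg-upmap-items 4.3f 19 42

Each from/to OSD id pair tells the monitors to place that PG on the "to" OSD instead of the CRUSH-chosen "from" OSD, without modifying the CRUSH map itself.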

Quickstart example: generate 10 PG movements on the hdd crushclass and print them to stdout:

    ./placementoptimizer.py -v balance --max-pg-moves 10 --only-crushclass hdd | tee /tmp/balance-upmaps
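
Since the file then contains plain ceph commands (as sketched above), applying the movements after reviewing them is just:

    bash /tmp/balance-upmaps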

When there are remapped PGs (e.g. after applying the upmaps above), you can inspect the progress with:

    ./placementoptimizer.py showremapped
    ./placementoptimizer.py showremapped --by-osd
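
To follow the progress continuously, it combines nicely with standard tools, for example watch:

    watch -n 30 ./placementoptimizer.py showremapped --by-osd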

And you can get a nice pool and OSD usage overview:

    ./placementoptimizer.py show --osds --per-pool-count --sort-utilization


Of course there are many more features and optimizations to be added,
but it has already served us very well: it reclaimed terabytes of previously unavailable storage in situations where the `mgr balancer` could no longer optimize.

What do you think?

Cheers
  -- Jonas


