Re: Advice on balancing data across OSDs

Hi Tim,

Ah, it didn't sink in for me at first how many pools there were here.
I think you might be hitting the issue that the author of
https://github.com/TheJJ/ceph-balancer ran into, and thus their
balancer might help in this case.
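
If you want to give it a go, this is roughly how it's run, going from
memory of the project's README (so treat the exact flags, e.g.
--max-pg-moves, as approximate and check the repo first):

    git clone https://github.com/TheJJ/ceph-balancer
    cd ceph-balancer
    # inspect per-pool/per-OSD utilization as the tool sees it
    ./placementoptimizer.py show
    # generate a bounded set of movements, review, then apply
    ./placementoptimizer.py balance --max-pg-moves 10 | tee /tmp/balance-upmaps
    bash /tmp/balance-upmaps

It emits plain "ceph osd pg-upmap-items ..." commands, so you can
review the plan before applying anything. You'd also want to turn the
built-in balancer off first ("ceph balancer off") so the two don't
fight over the same PGs.

On the inherited-settings question below: a quick "ceph config dump |
grep -i balancer" should show anything that's been set away from the
defaults.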

Josh

On Mon, Oct 24, 2022 at 8:37 AM Tim Bishop <tim-lists@xxxxxxxxxxx> wrote:
>
> Hi Josh,
>
> On Mon, Oct 24, 2022 at 07:20:46AM -0600, Josh Baergen wrote:
> > > I've included the osd df output below, along with pool and crush rules.
> >
> > Looking at these, the balancer module should be taking care of this
> > imbalance automatically. What does "ceph balancer status" say?
>
> # ceph balancer status
> {
>     "active": true,
>     "last_optimize_duration": "0:00:00.038795",
>     "last_optimize_started": "Mon Oct 24 15:35:43 2022",
>     "mode": "upmap",
>     "optimize_result": "Optimization plan created successfully",
>     "plans": []
> }
>
> Looks healthy?
>
> This cluster is on pacific but has been upgraded through numerous
> previous releases, so it's possible that some settings were inherited
> and don't match the defaults of a freshly installed cluster.
>
> Tim.
>