Re: Very uneven OSD utilization

I would suggest enabling the upmap balancer if you haven't already
done so; it should help even the data out. Even if it does no better
than some manual rebalancing scheme, it will at least work quietly in
the background, some 8 PGs at a time, so it doesn't impact client
traffic.
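
If you haven't used it before, this is roughly the sequence on a
Nautilus cluster (a sketch from memory; double-check against the docs
for your exact release):

  ceph osd set-require-min-compat-client luminous  # upmap needs luminous+ clients
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status   # confirm the mode is set and it is active

If the default moves too much data at once, the mgr option
target_max_misplaced_ratio caps what fraction of PGs may be misplaced
at any one time.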

It looks very weird to have such an uneven distribution even with
lots of PGs (which was my first guess =)
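
To dig into it, compare the PGS column against %USE per OSD, and
(assuming the balancer mgr module is enabled) let it score the
current layout:

  ceph osd df tree     # per-OSD %USE and PG count side by side
  ceph balancer eval   # prints a cluster score; lower is more even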

Den tis 25 maj 2021 kl 03:47 skrev Sergei Genchev <sgenchev@xxxxxxxxx>:
>
> Hello,
> I am running a nautilus cluster with 5 OSD nodes/90 disks that is
> exclusively used for S3. My disks are identical, but utilization
> ranges from 9% to 82%, and I am starting to get backfill_toofull
> errors even though I have only used 150TB out of 650TB of data.
>  - Other than manually crush reweighting OSDs, is there any other
> option for me ?
>  - what would cause this uneven distribution? Is there some
> documentation on how to track down what's going on?
> output of 'ceph osd df" is at https://pastebin.com/17HWFR12
>  Thank you!
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


