Re: OSDs are not utilized evenly

Hi,

looks like the TheJJ balancer solved the issue!

Thx!


On 11/9/22 13:35, Denis Polom wrote:
Hi Stefan,

thank you for the help. It looks very interesting, and the command you sent gives much better insight into this. I am still wondering why some OSDs are primary for more PGs than others; I was thinking that the balancer and CRUSH should take care of that.

I will try the balancer you sent a link to and will post the result, but this will take more time since I first have to test it on a non-production Ceph cluster.

Thx!


On 11/9/22 08:20, Stefan Kooman wrote:
On 11/1/22 13:45, Denis Polom wrote:
Hi

On my Ceph cluster running the latest Pacific release I observed that same-size OSDs are utilized differently, even though the balancer is running and reports the status as perfectly balanced.


That might well be true: the balancer can report perfect balance while the primary PGs are still not evenly distributed. You can check that with ceph pg dump; the per-OSD summary at the end of its output shows how many PGs each OSD is primary for. To get a per-pool breakdown you can run this (source: unknown, but it works :-)):

"ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
 /^PG_STAT/ { col=1; while($col!="UP") {col++}; col++ }
 /^[0-9a-f]+\.[0-9a-f]+/ { match($0,/^[0-9a-f]+/); pool=substr($0, RSTART, RLENGTH); poollist[pool]=0;  up=$col; i=0; RSTART=0; RLENGTH=0; delete osds; while(match(up,/[0-9]+/)>0) { osds[++i]=substr(up,RSTART,RLENGTH); up = substr(up, RSTART+RLENGTH) }
 for(i in osds) {array[osds[i],pool]++; osdlist[osds[i]];}
}
END {
 printf("\n");
 printf("pool :\t"); for (i in poollist) printf("%s\t",i); printf("| SUM \n");
 for (i in poollist) printf("--------"); printf("----------------\n");
 for (i in osdlist) { printf("osd.%i\t", i); sum=0;
   for (j in poollist) { printf("%i\t", array[i,j]); sum+=array[i,j]; sumpool[j]+=array[i,j] }; printf("| %i\n",sum) }
 for (i in poollist) printf("--------"); printf("----------------\n");
 printf("SUM :\t"); for (i in poollist) printf("%s\t",sumpool[i]); printf("|\n");
}'"

On 11/15/2022 at 14:35 UTC there is a talk about this: "New workload balancer in Ceph" (Ceph Virtual 2022).

The balancer made by Jonas Jelten works very well for us (though it does not balance primary PGs): https://github.com/TheJJ/ceph-balancer. It outperforms the built-in balancer module by far and converges faster. This is true up to and including the Octopus release.
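
A rough sketch of a typical invocation, assuming the script name and flags from the project's README (verify against the current README before running anything on production):

# Sketch based on the TheJJ/ceph-balancer README; script name and flags are assumptions.
ceph balancer off                          # keep the built-in mgr balancer from competing
./placementoptimizer.py -v balance --max-pg-moves 10 | tee /tmp/balance-upmaps
bash /tmp/balance-upmaps                   # apply the generated pg-upmap-items commands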

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
