Re: MDS Performance and PG/PGP value

On 10/7/22 16:50, Yoann Moulin wrote:


By the way, since I set PG=256 I get far fewer SLOW requests than before. I still have some, but the impact on my users has been reduced a lot.

# zgrep -c -E 'WRN.*(SLOW_OPS|SLOW_REQUEST|MDS_SLOW_METADATA_IO)' floki.log.4.gz floki.log.3.gz floki.log.2.gz floki.log.1.gz floki.log
floki.log.4.gz:6883
floki.log.3.gz:11794
floki.log.2.gz:3391
floki.log.1.gz:1180
floki.log:122

If I have the opportunity, I will try to run some benchmarks with multiple PG values on the cephfs_metadata pool.
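
For reference, changing the PG count on a pool is a single command per setting. A minimal sketch, assuming the metadata pool is literally named cephfs_metadata (on Nautilus and later, pgp_num follows pg_num automatically; older releases need it set explicitly):

ceph osd pool set cephfs_metadata pg_num 256
ceph osd pool set cephfs_metadata pgp_num 256   # only needed on older releases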

Two more things I want to add:

- After PG splitting / rebalancing: do an OSD compaction of all your OSDs to optimize their RocksDB. Really important: ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$osd_id compact must be run while the OSD is not running (a sketch of the whole round follows below).
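
A minimal sketch of that compaction round, assuming a package-based deployment with systemd units named ceph-osd@<id> (the OSD ids in the loop are hypothetical; run it on each OSD host for its local OSDs):

ceph osd set noout                      # don't rebalance while OSDs are down
for osd_id in 0 1 2; do                 # hypothetical ids of this host's OSDs
    systemctl stop ceph-osd@"$osd_id"
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-"$osd_id" compact
    systemctl start ceph-osd@"$osd_id"
done
ceph osd unset noout

Compacting one OSD at a time keeps enough replicas up while each one is stopped.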

- How is the distribution of your CephFS primary PGs? You can check it with this AWK magic (not mine btw, but it's in our Ceph cheatsheet):


ceph pg dump | awk '
BEGIN { IGNORECASE = 1 }
# Header row: find the field index of the UP column (the extra increment
# compensates for the data rows having one more field than the header,
# due to the space in the timestamp columns).
/^PG_STAT/ { col = 1; while ($col != "UP") col++; col++ }
# PG rows (pgid = <pool>.<seq>): record the pool id and count every OSD
# listed in the UP set.
/^[0-9a-f]+\.[0-9a-f]+/ {
  match($0, /^[0-9a-f]+/); pool = substr($0, RSTART, RLENGTH); poollist[pool] = 0
  up = $col; i = 0; RSTART = 0; RLENGTH = 0; delete osds
  while (match(up, /[0-9]+/) > 0) {
    osds[++i] = substr(up, RSTART, RLENGTH)
    up = substr(up, RSTART + RLENGTH)
  }
  for (i in osds) { array[osds[i], pool]++; osdlist[osds[i]] }
}
# Print a per-OSD x per-pool matrix of PG counts with row/column sums.
END {
  printf("\n")
  printf("pool :\t"); for (i in poollist) printf("%s\t", i); printf("| SUM \n")
  for (i in poollist) printf("--------"); printf("----------------\n")
  for (i in osdlist) {
    printf("osd.%i\t", i); sum = 0
    for (j in poollist) { printf("%i\t", array[i, j]); sum += array[i, j]; sumpool[j] += array[i, j] }
    printf("| %i\n", sum)
  }
  for (i in poollist) printf("--------"); printf("----------------\n")
  printf("SUM :\t"); for (i in poollist) printf("%s\t", sumpool[i]); printf("|\n")
}'

If some OSDs carry more primary PGs than others, they can become a bottleneck at times.
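
Note that the matrix above is built from the whole UP set, i.e. all replicas. For just the primaries, a minimal variation (assuming your ceph pg dump pgs_brief output starts with a PG_STAT header line and has an UP_PRIMARY column; check the header on your release):

ceph pg dump pgs_brief 2>/dev/null | awk '
/^PG_STAT/ { for (c = 1; c <= NF; c++) if ($c == "UP_PRIMARY") col = c }  # locate column in header
/^[0-9a-f]+\.[0-9a-f]+/ { count[$col]++ }                                 # one primary per PG
END { for (o in count) printf("osd.%s\t%d\n", o, count[o]) }' | sort -n -k2

If the primaries turn out skewed, ceph osd primary-affinity osd.<id> <0.0-1.0> is one knob to shift primary duty away from hot OSDs.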

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


