PG Calculation query

Hello, 

We are seeing a performance difference when running rados bench on a 5-node cluster with pg_num 4096 vs 8192.

As per the PG calculation, our specification is below; a rough sketch of the calculator arithmetic follows the table:

Size   OSD#   %Data   Target PGs per OSD   PG count
5      340    100     100                  8192
5      340    100     50                   4096
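
These PG counts come from the usual PG calculator arithmetic: (Target PGs per OSD x OSD# x %Data) / pool size, taken up to the next power of two (pool size is 5 here because of the EC 4+1 profile). A rough shell sketch of that arithmetic, purely illustrative:

osds=340; size=5; pct=100
for target in 50 100; do
    raw=$(( target * osds * pct / 100 / size ))                # pgcalc raw value
    pg=1
    while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done      # round up to a power of two
    echo "target=${target}/OSD -> raw=${raw} -> pg_num=${pg}"
done

This prints 3400 -> 4096 and 6800 -> 8192, matching the two rows of the table.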


We got better performance with 4096 PGs than with 8192.

With PG count - 4096 -->>
====================

Filesize (bytes)          256000      512000      1024000     2048000     4096000     12288000
Write bandwidth (MB/s)    1448.38     2503.98     3941.42     5354.7      5333.9      5271.16
Read bandwidth (MB/s)     2924.83     3417.9      4236.65     4469.4      4602.65     4584.6
Write avg latency (s)     0.088355    0.102214    0.129855    0.191155    0.377685    1.13953
Write max latency (s)     0.280164    0.485391    1.15953     13.5175     27.9876     86.3103
Read avg latency (s)      0.0437188   0.0747644   0.120604    0.228535    0.436566    1.30415
Read max latency (s)      1.13067     3.21548     2.99734     4.08429     9.0224      16.6047
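
Each column above is a separate rados bench run with the op/block size (-b) set to the listed value; the invocations were along these lines (pool name, runtime and thread count below are placeholders rather than our exact settings):

# rados bench -p <pool> 300 write -b 4096000 -t 16 --no-cleanup
# rados bench -p <pool> 300 seq -t 16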

Average IOPS..

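# the pipeline below just averages the figure printed immediately before "op/s" on every matching line of the capture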
#grep "op/s" cephio_0%.txt | awk 'NF { print $(NF - 1) }'| awk '{ total += $0 } END { print total/NR }'

7517.49  -->>



With PG count - 8192 -->>
====================

Filesize (bytes)          256000      512000      1024000     2048000     4096000     12288000
Write bandwidth (MB/s)    534.749     1020.49     1864.58     3100.92     4717.23     5251.76
Read bandwidth (MB/s)     1615.56     2764.25     4061.55     4265.39     4229.38     4042.18
Write avg latency (s)     0.239263    0.250769    0.27448     0.328981    0.427056    1.14352
Write max latency (s)     9.21752     10.3353     10.8132     11.2135     12.5497     44.8133
Read avg latency (s)      0.0791822   0.0925167   0.12583     0.239571    0.475198    1.47916
Read max latency (s)      2.01021     2.29139     3.60456     3.8435      7.43755     37.6106


#grep "op/s" cephio_0%.txt | awk 'NF { print $(NF - 1) }'| awk '{ total += $0 } END { print total/NR }'
4970.26


With 4096 PG - Average IOPS - 7517
With 8192 PG - Average IOPS - 4970


With 8192 PGs, performance for smaller object sizes is badly affected. We are not planning to add any nodes in the future, which is why we generally select 'Target PGs per OSD' as 100 rather than 200/300.

We would appreciate comments on what PG count best suits a cluster of this size, and more generally on how to choose an appropriate PG count.

ENV:-

Kraken - 11.2.0 - bluestore EC 4+1
RHEL 7.3
3.10.0-514.10.2.el7.x86_64
5 nodes x 68 OSDs each = 340 OSDs
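
(EC 4+1 means k=4, m=1, which is why pool size is 5 in the PG calculation above; the pool was created with a profile roughly equivalent to the below, names and PG numbers being placeholders only:)

# ceph osd erasure-code-profile set ec-4-1 k=4 m=1
# ceph osd pool create bench-pool <pg_num> <pg_num> erasure ec-4-1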

Thanks

