Re: CEPH Cluster performance review


 



Hello Mosharaf,

There is an automated service available that will criticize your cluster:

https://analyzer.clyso.com/#/analyzer
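
I don't recall offhand exactly which report it expects as input (the page will
tell you), but if it wants a full cluster report, the usual way to collect one,
together with the other outputs that are handy for this kind of review, is
roughly:

# full machine-readable cluster report (JSON)
ceph report > cluster-report.json

# human-readable snapshots that are also worth attaching to any review
ceph -s
ceph df detail
ceph osd df tree
ceph osd pool ls detail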

On Sun, Nov 12, 2023 at 12:03 PM Mosharaf Hossain <
mosharaf.hossain@xxxxxxxxxxxxxx> wrote:

> Hello Community
>
> Currently, I operate a CEPH Cluster utilizing Ceph Octopus version 15.2.7,
> installed through Ansible. The challenge I'm encountering is that, during
> scrubbing, OSD latency spikes to 300-600 ms, resulting in sluggish
> performance for all VMs.
> Additionally, some OSDs fail during the scrubbing process. In such
> instances, promptly halting the scrubbing resolves the issue.
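
In the meantime, rather than disabling (deep) scrubbing entirely, you could try
throttling it. These are standard OSD scrub options; the values below are only
illustrative starting points, not a recommendation tuned to your hardware:

# limit concurrent scrubs per OSD and add a pause between scrub chunks
ceph config set osd osd_max_scrubs 1
ceph config set osd osd_scrub_sleep 0.1

# optionally confine scrubbing to an off-peak window, e.g. 22:00-06:00
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6

# re-enable deep scrubs once the throttling is in place
ceph osd unset nodeep-scrub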
>
> *Summary of the CEPH cluster*:
> CEPH version    : 15.2.7 (Octopus)
> Nodes           : 12 (6 SSD nodes + 6 HDD nodes)
> Networking      : all nodes are connected through a 10G bonded link,
>                   i.e. 10G x 2 = 20 Gbit per node
> OSDs            : 64 SSD + 42 HDD = 106 total
> Pools (PG size) : one-ssd         256 PGs  active+clean
>                   one-hdd         512 PGs  active+clean
>                   cloudstack.hdd  512 PGs  active+clean
>
> I intend to increase the PG count (pg_num) of the "one-ssd" pool. Please
> suggest an appropriate PG number and the best approach to increase it
> without causing any slowdown or service disruption to the VMs.
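
On the PG question: with 64 SSD OSDs and (I'm assuming) 3x replication, the
usual rule of thumb of roughly 100 PGs per OSD puts the SSD device class at
about 64 * 100 / 3 ≈ 2133 PGs in total, i.e. 1024 or 2048 for one-ssd if it is
the only large SSD pool. Please verify with the PG calculator or the autoscaler
before committing to a number. Since Nautilus, raising pg_num is carried out
gradually by the manager, and you can cap how much data is in motion at once,
for example:

# optional: limit how much data may be misplaced at a time (default 0.05 = 5%)
ceph config set mgr target_max_misplaced_ratio 0.05

# see what the autoscaler would recommend
ceph osd pool autoscale-status

# raise the PG count; the mgr steps pg_num/pgp_num up gradually
ceph osd pool set one-ssd pg_num 1024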
>
> Your expertise and guidance on this matter would be highly valuable, and
> I'm eager to benefit from the collective knowledge of the Ceph community.
>
> Thank you in advance for your time and assistance. I look forward to
> hearing from you.
>
> *CEPH Health status:*
> root@mon1:~# ceph -s
>   cluster:
>     id:     f8096ec7-51db-4557-85e6-57d7fdfe9423
>     health: HEALTH_WARN
>             nodeep-scrub flag(s) set
>             656 pgs not deep-scrubbed in time
>
>   services:
>     mon:     3 daemons, quorum ceph2,mon1,ceph6 (age 4d)
>     mgr:     ceph4(active, since 2w), standbys: mon1, ceph3, ceph6, ceph1
>     mds:     cephfs:1 {0=ceph8=up:active} 1 up:standby
>     osd:     107 osds: 105 up (since 3h), 105 in (since 3d)
>              flags nodeep-scrub
>     rgw:     4 daemons active (ceph10.rgw0, ceph7.rgw0, ceph9.rgw0, mon1.rgw0)
>     rgw-nfs: 2 daemons active (ceph7, ceph9)
>
>   task status:
>
>   data:
>     pools:   13 pools, 2057 pgs
>     objects: 9.40M objects, 35 TiB
>     usage:   106 TiB used, 154 TiB / 259 TiB avail
>     pgs:     2057 active+clean
>
>   io:
>     client:   14 MiB/s rd, 30 MiB/s wr, 1.50k op/s rd, 1.53k op/s wr
>
>
> root@ceph1:~# ceph df
> --- RAW STORAGE ---
> CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
> hdd    151 TiB   78 TiB   72 TiB    73 TiB      48.04
> ssd    110 TiB   78 TiB   32 TiB    32 TiB      29.42
> TOTAL  261 TiB  156 TiB  104 TiB   105 TiB      40.19
>
> --- POOLS ---
> POOL                        ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
> cephfs_data                  1   64  3.8 KiB        0   11 KiB      0     23 TiB
> cephfs_metadata              2    8  228 MiB       79  685 MiB      0     23 TiB
> .rgw.root                    3   32  6.0 KiB        8  1.5 MiB      0     23 TiB
> default.rgw.control          4   32      0 B        8      0 B      0     23 TiB
> default.rgw.meta             5   32   12 KiB       48  7.5 MiB      0     23 TiB
> default.rgw.log              6   32  4.8 KiB      207  6.0 MiB      0     23 TiB
> default.rgw.buckets.index    7   32  410 MiB       15  1.2 GiB      0     23 TiB
> default.rgw.buckets.data     8  512  4.6 TiB    1.29M   14 TiB  16.59     23 TiB
> default.rgw.buckets.non-ec   9   32  1.0 MiB      676  130 MiB      0     23 TiB
> one-hdd                     10  512  9.2 TiB    2.45M   28 TiB  28.69     23 TiB
> device_health_metrics       11    1  9.5 MiB      113   28 MiB      0     23 TiB
> one-ssd                     12  256   11 TiB    2.88M   32 TiB  31.37     23 TiB
> cloudstack.hdd              15  512   10 TiB    2.72M   31 TiB  30.94     23 TiB
>
>
>
> Regards
> Mosharaf Hossain
> Manager, Product Development
> IT Division
>
> Bangladesh Export Import Company Ltd.
>
> Level-8, SAM Tower, Plot #4, Road #22, Gulshan-1, Dhaka-1212,Bangladesh
>
> Tel: +880 9609 000 999, +880 2 5881 5559, Ext: 14191, Fax: +880 2 9895757
>
> Cell: +8801787680828, Email: mosharaf.hossain@xxxxxxxxxxxxxx, Web:
> www.bol-online.com
>


-- 
Alexander E. Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



