Re: CEPH zero iops after upgrade to Reef and manual read balancer

Hello Laura,
I have created the tracker issue; you can find it at
https://tracker.ceph.com/issues/62836
Please find the OSD map from the cluster below.


root@cph2n1:/# ceph -s
  cluster:
    id:     e5f5ec6e-0b1b-11ec-adc5-35a84c0db1fb
    health: HEALTH_WARN
            65 pgs not deep-scrubbed in time

  services:
    mon: 5 daemons, quorum cph2n1,cph2n2,cph2n3,cph2n5,cph2n4 (age 17h)
    mgr: cph2n6.kijcmg(active, since 17h), standbys: cph2n5.czgzpy, cph2n2.dqlmbn, cph2n4.gppcct, cph2n1.ezsddw
    osd: 56 osds: 56 up (since 8h), 56 in (since 8d)

  data:
    pools:   12 pools, 1913 pgs
    objects: 5.04M objects, 19 TiB
    usage:   57 TiB used, 98 TiB / 154 TiB avail
    pgs:     1913 active+clean

  io:
    client:   13 KiB/s rd, 2.8 MiB/s wr, 33 op/s rd, 44 op/s wr



root@cph2n1:/# ceph osd tree
 ID   CLASS     WEIGHT     TYPE NAME        STATUS  REWEIGHT  PRI-AFF
 -1            154.42514  root default
 -3             24.59732      host cph2n1
  6   ent_ssd    3.49309          osd.6        up   1.00000  1.00000
  7   ent_ssd    3.49309          osd.7        up   1.00000  1.00000
  8   ent_ssd    3.49309          osd.8        up   1.00000  1.00000
  9   ent_ssd    3.49309          osd.9        up   1.00000  1.00000
 32   ent_ssd    3.49309          osd.32       up   1.00000  1.00000
 37   ent_ssd    3.49309          osd.37       up   1.00000  1.00000
  0  nvme_ssd    1.81940          osd.0        up   1.00000  1.00000
  1  nvme_ssd    1.81940          osd.1        up   1.00000  1.00000
 -5             24.59732      host cph2n2
 10   ent_ssd    3.49309          osd.10       up   1.00000  1.00000
 11   ent_ssd    3.49309          osd.11       up   1.00000  1.00000
 12   ent_ssd    3.49309          osd.12       up   1.00000  1.00000
 13   ent_ssd    3.49309          osd.13       up   1.00000  1.00000
 31   ent_ssd    3.49309          osd.31       up   1.00000  1.00000
 35   ent_ssd    3.49309          osd.35       up   1.00000  1.00000
  2  nvme_ssd    1.81940          osd.2        up   1.00000  1.00000
  3  nvme_ssd    1.81940          osd.3        up   1.00000  1.00000
 -7             24.59732      host cph2n3
 14   ent_ssd    3.49309          osd.14       up   1.00000  1.00000
 15   ent_ssd    3.49309          osd.15       up   1.00000  1.00000
 16   ent_ssd    3.49309          osd.16       up   1.00000  1.00000
 17   ent_ssd    3.49309          osd.17       up   1.00000  1.00000
 30   ent_ssd    3.49309          osd.30       up   1.00000  1.00000
 36   ent_ssd    3.49309          osd.36       up   1.00000  1.00000
  4  nvme_ssd    1.81940          osd.4        up   1.00000  1.00000
  5  nvme_ssd    1.81940          osd.5        up   1.00000  1.00000
-21             28.09041      host cph2n4
 22   ent_ssd    3.49309          osd.22       up   1.00000  1.00000
 24   ent_ssd    3.49309          osd.24       up   1.00000  1.00000
 26   ent_ssd    3.49309          osd.26       up   1.00000  1.00000
 29   ent_ssd    3.49309          osd.29       up   1.00000  1.00000
 33   ent_ssd    3.49309          osd.33       up   1.00000  1.00000
 39   ent_ssd    3.49309          osd.39       up   1.00000  1.00000
 41   ent_ssd    3.49309          osd.41       up   1.00000  1.00000
 20  nvme_ssd    1.81940          osd.20       up   1.00000  1.00000
 21  nvme_ssd    1.81940          osd.21       up   1.00000  1.00000
-17             28.09041      host cph2n5
 23   ent_ssd    3.49309          osd.23       up   1.00000  1.00000
 25   ent_ssd    3.49309          osd.25       up   1.00000  1.00000
 27   ent_ssd    3.49309          osd.27       up   1.00000  1.00000
 28   ent_ssd    3.49309          osd.28       up   1.00000  1.00000
 34   ent_ssd    3.49309          osd.34       up   1.00000  1.00000
 38   ent_ssd    3.49309          osd.38       up   1.00000  1.00000
 40   ent_ssd    3.49309          osd.40       up   1.00000  1.00000
 18  nvme_ssd    1.81940          osd.18       up   1.00000  1.00000
 19  nvme_ssd    1.81940          osd.19       up   1.00000  1.00000
-25             12.22618      host cph2n6
 42   ent_ssd    1.74660          osd.42       up   1.00000  1.00000
 43   ent_ssd    1.74660          osd.43       up   1.00000  1.00000
 44   ent_ssd    1.74660          osd.44       up   1.00000  1.00000
 45   ent_ssd    1.74660          osd.45       up   1.00000  1.00000
 49   ent_ssd    1.74660          osd.49       up   1.00000  1.00000
 50   ent_ssd    1.74660          osd.50       up   1.00000  1.00000
 51   ent_ssd    1.74660          osd.51       up   1.00000  1.00000
-29             12.22618      host cph2n7
 52   ent_ssd    1.74660          osd.52       up   1.00000  1.00000
 53   ent_ssd    1.74660          osd.53       up   1.00000  1.00000
 54   ent_ssd    1.74660          osd.54       up   1.00000  1.00000
 59   ent_ssd    1.74660          osd.59       up   1.00000  1.00000
 60   ent_ssd    1.74660          osd.60       up   1.00000  1.00000
 61   ent_ssd    1.74660          osd.61       up   1.00000  1.00000
 62   ent_ssd    1.74660          osd.62       up   1.00000  1.00000
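In case it helps, the read balancer was applied following the standard Reef
workflow, roughly as sketched below. This is only a sketch: <pool> and <pgid>
are placeholders, not the actual names from this cluster, and the revert
commands are included for completeness.

    # Sketch of the Reef read-balancer workflow (<pool> is a placeholder):
    ceph osd set-require-min-compat-client reef       # pg-upmap-primary requires Reef-capable clients
    ceph osd getmap -o om                             # grab the current osdmap
    osdmaptool om --read out.txt --read-pool <pool>   # compute primary re-mappings for the pool
    source out.txt                                    # runs the generated "ceph osd pg-upmap-primary" commands

    # To back the changes out, mappings can be listed and removed one by one:
    ceph osd dump | grep pg_upmap_primary             # list active primary mappings
    ceph osd rm-pg-upmap-primary <pgid>               # remove the mapping for one PG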



Regards
Mosharaf Hossain
Manager, Product Development
IT Division

Bangladesh Export Import Company Ltd.

Level-8, SAM Tower, Plot #4, Road #22, Gulshan-1, Dhaka-1212,Bangladesh

Tel: +880 9609 000 999, +880 2 5881 5559, Ext: 14191, Fax: +880 2 9895757

Cell: +8801787680828, Email: mosharaf.hossain@xxxxxxxxxxxxxx, Web:
www.bol-online.com



On Thu, Sep 14, 2023 at 4:38 AM Laura Flores <lflores@xxxxxxxxxx> wrote:

> Link the tracker on this list if you have it. You can create one under the
> RADOS project: https://tracker.ceph.com/projects/rados
>
> Thanks,
> Laura
>
> On Wed, Sep 13, 2023 at 4:35 PM Laura Flores <lflores@xxxxxxxxxx> wrote:
>
>> Hi Mosharaf,
>>
>> Can you please create a tracker issue and attach a copy of your osdmap?
>> Also, please include any other output that characterizes the slowdown in
>> client I/O operations you're noticing in your cluster. I can take a look
>> once I have that information.
>>
>> Thanks,
>> Laura
>>
>> On Wed, Sep 13, 2023 at 5:23 AM Mosharaf Hossain <
>> mosharaf.hossain@xxxxxxxxxxxxxx> wrote:
>>
>>> Hello Folks
>>> We've recently performed an upgrade on our Cephadm cluster, transitioning
>>> from Ceph Quincy to Reef. However, following the manual implementation of
>>> a read balancer in the Reef cluster, we've experienced a significant
>>> slowdown in client I/O operations within the Ceph cluster, affecting both
>>> client bandwidth and overall cluster performance.
>>>
>>> This slowdown has made all virtual machines in the cluster unresponsive,
>>> despite the fact that the cluster exclusively utilizes SSD storage.
>>>
>>> Kindly guide us to move forward.
>>>
>>>
>>>
>>> Regards
>>> Mosharaf Hossain
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>>
>>>
>>
>> --
>>
>> Laura Flores
>>
>> She/Her/Hers
>>
>> Software Engineer, Ceph Storage <https://ceph.io>
>>
>> Chicago, IL
>>
>> lflores@xxxxxxx | lflores@xxxxxxxxxx <lflores@xxxxxxxxxx>
>> M: +17087388804
>>
>>
>>
>
> --
>
> Laura Flores
>
> She/Her/Hers
>
> Software Engineer, Ceph Storage <https://ceph.io>
>
> Chicago, IL
>
> lflores@xxxxxxx | lflores@xxxxxxxxxx <lflores@xxxxxxxxxx>
> M: +17087388804
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



