Re: Adding cache tier to an existing objectstore cluster possible?

These are the processes shown in iotop on one node. I think it's compacting, but it is always like this and never finishes.


  59936 be/4 ceph        0.00 B/s   10.08 M/s  0.00 % 53.07 % ceph-osd -f --cluster ceph --id 46 --setuser ceph --setgroup ceph [bstore_kv_sync]
  66097 be/4 ceph        0.00 B/s    6.96 M/s  0.00 % 43.11 % ceph-osd -f --cluster ceph --id 48 --setuser ceph --setgroup ceph [bstore_kv_sync]
  63145 be/4 ceph        0.00 B/s    5.82 M/s  0.00 % 40.49 % ceph-osd -f --cluster ceph --id 47 --setuser ceph --setgroup ceph [bstore_kv_sync]
  51150 be/4 ceph        0.00 B/s    3.21 M/s  0.00 % 10.50 % ceph-osd -f --cluster ceph --id 43 --setuser ceph --setgroup ceph [bstore_kv_sync]
  53909 be/4 ceph        0.00 B/s    2.91 M/s  0.00 %  9.98 % ceph-osd -f --cluster ceph --id 44 --setuser ceph --setgroup ceph [bstore_kv_sync]
  57066 be/4 ceph        0.00 B/s    2.18 M/s  0.00 %  8.66 % ceph-osd -f --cluster ceph --id 45 --setuser ceph --setgroup ceph [bstore_kv_sync]
  36672 be/4 ceph        0.00 B/s    2.68 M/s  0.00 %  7.82 % ceph-osd -f --cluster ceph --id 42 --setuser ceph --setgroup ceph [bstore_kv_sync]
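One way to confirm whether these [bstore_kv_sync] threads correspond to ongoing RocksDB compaction could be the OSD admin socket; a sketch (osd.46 is just an example id taken from the iotop output above, run on the node hosting that OSD):

```shell
# Dump performance counters for one OSD and look at the RocksDB
# compaction-related counters
ceph daemon osd.46 perf dump | grep -A2 -i compact

# Optionally trigger a manual compaction to see whether it ever completes
# (this can briefly impact client I/O on that OSD)
ceph tell osd.46 compact
```

If the compaction counters keep climbing without the backlog draining, that would support the "never finishes" observation.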

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Stefan Kooman <stefan@xxxxxx> 
Sent: Monday, September 20, 2021 2:13 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>; ceph-users <ceph-users@xxxxxxx>
Subject: Re:  Adding cache tier to an existing objectstore cluster possible?


On 9/20/21 06:15, Szabo, Istvan (Agoda) wrote:
> Hi,
>
> I'm running out of ideas as to why my WAL+DB NVMes are always maxed out, so I'm thinking I might have missed the cache tiering in front of my 4:2 EC pool. Is it possible to add it later?

Maybe I missed a post where you talked about WAL+DB being maxed out.
What Ceph version do you use? Maybe you suffer from issue #52244, which is fixed in Pacific 16.2.6 with PR [1].

Gr. Stefan

[1]: https://github.com/ceph/ceph/pull/42773
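To answer the version question, the running release of every daemon can be checked cluster-wide; a minimal sketch (osd.46 used only as an example id):

```shell
# Summarize which Ceph release each daemon type is running
ceph versions

# Or ask a single daemon directly
ceph tell osd.46 version
```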
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



