Re: PG autoscaler taking too long

Hi Arihant,
Did you delete a lot of snapshots?
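
If so, the snaptrim work from those deletions is most likely what is slowing
everything down. You can confirm how many PGs are still trimming or queued
(a quick check; ceph pg ls accepts PG states as a filter, including on
Octopus):

  ceph pg ls snaptrim
  ceph pg ls snaptrim_wait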

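If trimming itself is the bottleneck, the usual knobs are the snaptrim sleep
and the number of concurrent trims per OSD. A sketch with example values
only; please verify the defaults on your Octopus build before changing
anything:

  # seconds to sleep between trim ops; raise this if client IO suffers
  ceph config set osd osd_snap_trim_sleep 0.5
  # concurrent trimming PGs per OSD (default 2); raise to trim faster on SSDs
  ceph config set osd osd_pg_max_concurrent_snap_trims 4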

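About the 4y figure: it is only an extrapolation of the current merge rate,
and pg_num decreases are deliberately throttled so that only a small
fraction of objects is misplaced at any one time. You can raise that
fraction, or take over manually. A sketch, where <pool-name> stands for the
name of pool 8 from your progress output:

  # let pg_num changes misplace more objects at once (default 0.05 = 5%)
  ceph config set mgr target_max_misplaced_ratio 0.10
  # or pin pg_num yourself and stop the autoscaler for that pool
  ceph osd pool set <pool-name> pg_autoscale_mode off
  ceph osd pool set <pool-name> pg_num 512

Either way, the estimate should drop sharply once the snaptrim backlog
clears and backfill is no longer competing with it.
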
Joachim

www.clyso.com

Hohenzollernstr. 27, 80801 Munich

Utting a. A. | Commercial register: Augsburg | HRB: 25866 | VAT ID: DE2754306

AJ_ sunny <jains8550@xxxxxxxxx> wrote on Thu, 24 Oct 2024, 05:34:

> Hello team,
>
> Any update on this?
> Why is the autoscaler taking so long?
>
> Thanks
> Arihant Jain
>
> On Wed, 23 Oct 2024, 7:41 pm AJ_ sunny <jains8550@xxxxxxxxx> wrote:
>
> > Hello team,
> >
> > I have a small Ceph cluster in production with 6 nodes and 53 SSDs of
> > 7 TB each.
> >
> > Version: Octopus
> >
> > In the last two days we cleared out ~100 TB of data (out of 370 TB
> > total).
> >
> > After that, a bunch of PGs went into active+clean+snaptrim and
> > snaptrim_wait states, and then the autoscaler started decreasing one
> > pool's placement group count from 2048 to 512.
> > The autoscaler is showing a remaining time of 4y, which is far too
> > long. Can you guys help me figure out how to fix this?
> >
> > root@nvme1:~# ceph -s
> >   cluster:
> >     id:     c30f5720-ca5c-11ec-b19c-f9781f61e8ec
> >     health: HEALTH_WARN
> >             noscrub,nodeep-scrub flag(s) set
> >             34 pgs not deep-scrubbed in time
> >
> >   services:
> >     mon: 3 daemons, quorum nvme3,nvme2,nvme1 (age 14M)
> >     mgr: nvme2.ttfore(active, since 13M), standbys: nvme3.zhnsuf
> >     osd: 53 osds: 53 up (since 2h), 53 in (since 11d); 61 remapped pgs
> >          flags noscrub,nodeep-scrub
> >
> >   task status:
> >
> >   data:
> >     pools:   7 pools, 2784 pgs
> >     objects: 26.37M objects, 69 TiB
> >     usage:   177 TiB used, 194 TiB / 370 TiB avail
> >     pgs:     1011732/79097316 objects misplaced (1.279%)
> >              2723 active+clean
> >              58   active+remapped+backfill_wait
> >              3    active+remapped+backfilling
> >
> >   io:
> >     client:   78 KiB/s rd, 3.8 MiB/s wr, 238 op/s rd, 314 op/s wr
> >     recovery: 17 MiB/s, 6 objects/s
> >
> >   progress:
> >     PG autoscaler decreasing pool 8 PGs from 2048 to 512 (1h)
> >       [............................] (remaining: 4y)
> >
> > root@nvme1:~#
> >
> >
> > Thanks
> > Arihant Jain
> >
> >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx