Re: forceful remap PGs

I just moved one PG away from the OSD, but the disk space does not get
freed. Do I need to do something to clean obsolete objects off the OSD?
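
In case it is useful to others: a single PG can be pinned to a different
OSD with the upmap mechanism. A minimal sketch, assuming a hypothetical
PG 11.2f should move from osd.105 to osd.33 (the PG and OSD ids are made
up, and upmap requires all clients to speak Luminous or newer):

    # pin PG 11.2f to the emptier OSD
    ceph osd pg-upmap-items 11.2f 105 33

    # the source OSD deletes its copy only after backfill finishes,
    # and the space is reclaimed asynchronously
    ceph pg 11.2f query | grep state
    ceph osd df | grep '^105'

    # if space still is not released, triggering a compaction on the
    # OSD may help
    ceph tell osd.105 compact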

On Tue, Mar 30, 2021 at 11:47 AM Boris Behrens <bb@xxxxxxxxx> wrote:

> Hi,
> I have a couple of OSDs that are currently receiving a lot of data and
> are approaching a 95% fill rate.
>
> I would like to forcefully remap some PGs (around 100 GB each) to emptier
> OSDs and drop them from the full ones. I know this would leave objects
> degraded, but I am not sure how long the cluster will stay in a state
> where it can still allocate objects.
>
> OSD.105 grew from around 85% to 92% in the last 4 hours.
>
> This is the current state:
>   cluster:
>     id:     dca79fff-ffd0-58f4-1cff-82a2feea05f4
>     health: HEALTH_WARN
>             noscrub,nodeep-scrub flag(s) set
>             9 backfillfull osd(s)
>             19 nearfull osd(s)
>             37 pool(s) backfillfull
>             BlueFS spillover detected on 1 OSD(s)
>             13 large omap objects
>             Low space hindering backfill (add storage if this doesn't
> resolve itself): 248 pgs backfill_toofull
>             Degraded data redundancy: 18115/362288820 objects degraded
> (0.005%), 1 pg degraded, 1 pg undersized
>
>   services:
>     mon: 3 daemons, quorum ceph-s3-mon1,ceph-s3-mon2,ceph-s3-mon3 (age 6d)
>     mgr: ceph-mgr2(active, since 6d), standbys: ceph-mgr3, ceph-mgr1
>     mds:  3 up:standby
>     osd: 110 osds: 110 up (since 4d), 110 in (since 6d); 324 remapped pgs
>          flags noscrub,nodeep-scrub
>     rgw: 4 daemons active (admin, eu-central-1, eu-msg-1, eu-secure-1)
>
>   task status:
>
>   data:
>     pools:   37 pools, 4032 pgs
>     objects: 120.76M objects, 197 TiB
>     usage:   620 TiB used, 176 TiB / 795 TiB avail
>     pgs:     18115/362288820 objects degraded (0.005%)
>              47144186/362288820 objects misplaced (13.013%)
>              3708 active+clean
>              241  active+remapped+backfill_wait+backfill_toofull
>              63   active+remapped+backfill_wait
>              11   active+remapped+backfilling
>              6    active+remapped+backfill_toofull
>              1    active+remapped+backfilling+forced_backfill
>              1    active+remapped+forced_backfill+backfill_toofull
>              1    active+undersized+degraded+remapped+backfilling
>
>   io:
>     client:   23 MiB/s rd, 252 MiB/s wr, 347 op/s rd, 381 op/s wr
>     recovery: 194 MiB/s, 112 objects/s
> ---
> ID  CLASS WEIGHT    REWEIGHT SIZE    RAW USE DATA    OMAP     META     AVAIL    %USE  VAR  PGS STATUS TYPE NAME
>  -1       795.42548        - 795 TiB 620 TiB 582 TiB   82 GiB 1.4 TiB  176 TiB 77.90 1.00   -        root default
>  84   hdd   7.52150  1.00000 7.5 TiB 6.8 TiB 6.5 TiB  158 MiB  15 GiB  764 GiB 90.07 1.16 121     up         osd.84
>  79   hdd   3.63689  1.00000 3.6 TiB 3.3 TiB 367 GiB  1.9 GiB     0 B  367 GiB 90.15 1.16  64     up         osd.79
>  70   hdd   7.27739  1.00000 7.3 TiB 6.6 TiB 6.5 TiB  268 MiB  15 GiB  730 GiB 90.20 1.16 121     up         osd.70
>  82   hdd   3.63689  1.00000 3.6 TiB 3.3 TiB 364 GiB  1.1 GiB     0 B  364 GiB 90.23 1.16  59     up         osd.82
>  89   hdd   7.52150  1.00000 7.5 TiB 6.8 TiB 6.6 TiB  395 MiB  16 GiB  735 GiB 90.45 1.16 126     up         osd.89
>  90   hdd   7.52150  1.00000 7.5 TiB 6.8 TiB 6.6 TiB  338 MiB  15 GiB  723 GiB 90.62 1.16 112     up         osd.90
>  33   hdd   3.73630  1.00000 3.7 TiB 3.4 TiB 3.3 TiB  382 MiB 8.6 GiB  358 GiB 90.64 1.16  66     up         osd.33
>  66   hdd   7.27739  0.95000 7.3 TiB 6.7 TiB 6.7 TiB  313 MiB  16 GiB  605 GiB 91.88 1.18 122     up         osd.66
>  46   hdd   7.27739  1.00000 7.3 TiB 6.7 TiB 6.7 TiB  312 MiB  16 GiB  601 GiB 91.93 1.18 119     up         osd.46
> 105   hdd   3.63869  0.89999 3.6 TiB 3.4 TiB 3.4 TiB  206 MiB 8.1 GiB  281 GiB 92.45 1.19  58     up         osd.105
>
> --
> The self-help group "UTF-8 problems" will, as an exception, meet in the
> large hall this time.
>
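
As an aside: with 248 PGs in backfill_toofull, a temporary bump of the
backfillfull threshold is sometimes used to let backfill drain the full
OSDs. A sketch only, not a recommendation; the 0.92 value is an
assumption and sits close to the 0.95 full ratio, so watch 'ceph osd df'
closely while it is in effect:

    # show the current ratios
    ceph osd dump | grep ratio

    # allow backfill onto OSDs that are up to 92% full (default 90%)
    ceph osd set-backfillfull-ratio 0.92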


-- 
The self-help group "UTF-8 problems" will, as an exception, meet in the
large hall this time.