Re: Health warning - POOL_TARGET_SIZE_BYTES_OVERCOMMITED

OK, I hit the send button a little too soon...

But in datacenter failover mode the replication size changes to 2, which is why I believe the RATE should be 2 instead of 4, or the Raw Capacity should be doubled.
Am I wrong, or should someone make a choice here?
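For illustration, a rough back-of-the-envelope check of the numbers in the autoscale-status output quoted below. This is only a sketch that assumes the autoscaler derives RATIO roughly as SIZE * RATE / RAW CAPACITY; the variable names are mine, not Ceph code:

----------------------------------------------------------------------------
# Values for cephfs.application-acc.data from "ceph osd pool autoscale-status"
size_gib = 1244          # SIZE column
rate = 4.0               # RATE column (replica count 4 in stretch mode)
raw_capacity_gib = 4499  # RAW CAPACITY column (roughly one datacenter)

# Reproduces the reported RATIO of 1.1062, i.e. the pool looks overcommitted
print(size_gib * rate / raw_capacity_gib)        # ~1.106

# With either the failover replica count (2) or the raw capacity doubled,
# the same pool would sit comfortably below 1
print(size_gib * 2.0 / raw_capacity_gib)         # ~0.55
print(size_gib * rate / (raw_capacity_gib * 2))  # ~0.55
----------------------------------------------------------------------------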
________________________________
From: Sake Paulusma <sake1989@xxxxxxxxxxx>
Sent: Monday, February 13, 2023 6:52:45 PM
To: Gregory Farnum <gfarnum@xxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re:  Health warning - POOL_TARGET_SIZE_BYTES_OVERCOMMITED

Hey Greg,

I've been analyzing this issue, and it isn't strange that the reported cluster size is half of the total raw size (or rather, the size of the smaller of the two datacenters): you shouldn't write more data to the cluster than the smallest datacenter can hold.
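Rough arithmetic from the "ceph df" output quoted below (just a quick sketch, not anything Ceph literally runs):

----------------------------------------------------------------------------
# "ceph df" reports 8.8 TiB raw for the whole stretch cluster, while
# "ceph osd pool autoscale-status" reports a RAW CAPACITY of 4499G
total_raw_gib = 8.8 * 1024   # ~9011 GiB across both datacenters
print(total_raw_gib / 2)     # ~4506 GiB, close to the 4499G shown:
                             # one datacenter's worth of raw space
----------------------------------------------------------------------------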
Second, when in datacenter failover mode, the cluster size

________________________________
From: Gregory Farnum <gfarnum@xxxxxxxxxx>
Sent: Monday, February 13, 2023 5:32:18 PM
To: Sake Paulusma <sake1989@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re:  Health warning - POOL_TARGET_SIZE_BYTES_OVERCOMMITED

On Mon, Feb 13, 2023 at 4:16 AM Sake Paulusma <sake1989@xxxxxxxxxxx> wrote:
>
> Hello,
>
> I configured a stretched cluster across two datacenters. It's working fine, except this weekend the Raw Capacity exceeded 50% and the error POOL_TARGET_SIZE_BYTES_OVERCOMMITTED showed up.
>
> The command "ceph df" is showing the correct cluster size, but "ceph osd pool autoscale-status" is showing half of the total Raw Capacity.
>
> What could be wrong?

There's a bug with the statistics handling of pools in stretch mode,
and others like them. :(
https://tracker.ceph.com/issues/56650

-Greg


>
>
>
> ----------------------------------------------------------------------------
> [ceph: root@aqsoel11445 /]# ceph status
>   cluster:
>     id:     adbe7bb6-5h6d-11ed-8511-004449ede0c
>     health: HEALTH_WARN
>             1 MDSs report oversized cache
>             1 subtrees have overcommitted pool target_size_bytes
>
>   services:
>     mon: 5 daemons, quorum host1,host2,host3,host4,host5 (age 4w)
>     mgr: aqsoel11445.nqamuz(active, since 5w), standbys: host1.wujgas
>     mds: 2/2 daemons up, 2 standby
>     osd: 12 osds: 12 up (since 5w), 12 in (since 9w)
>
>   data:
>     volumes: 2/2 healthy
>     pools:   5 pools, 193 pgs
>     objects: 17.31M objects, 1.2 TiB
>     usage:   5.0 TiB used, 3.8 TiB / 8.8 TiB avail
>     pgs:     192 active+clean
>              1   active+clean+scrubbing
> ----------------------------------------------------------------------------
>
> ----------------------------------------------------------------------------
> [ceph: root@aqsoel11445 /]# ceph df
> --- RAW STORAGE ---
> CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
> ssd    8.8 TiB  3.8 TiB  5.0 TiB   5.0 TiB      56.83
> TOTAL  8.8 TiB  3.8 TiB  5.0 TiB   5.0 TiB      56.83
>
> --- POOLS ---
> POOL                         ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
> .mgr                          1    1  449 KiB        2  1.8 MiB      0    320 GiB
> cephfs.application-tst.meta   2   16  540 MiB   18.79k  2.1 GiB   0.16    320 GiB
> cephfs.application-tst.data   3   32  4.4 GiB    8.01k   17 GiB   1.33    320 GiB
> cephfs.application-acc.meta   4   16   11 GiB    3.54M   45 GiB   3.37    320 GiB
> cephfs.application-acc.data   5  128  1.2 TiB   13.74M  4.8 TiB  79.46    320 GiB
> ----------------------------------------------------------------------------
>
> ----------------------------------------------------------------------------
> [ceph: root@aqsoel11445 /]# ceph osd pool autoscale-status
> POOL                           SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
> .mgr                         448.5k                4.0         4499G  0.0000                                  1.0       1              on         False
> cephfs.application-tst.meta  539.8M                4.0         4499G  0.0005                                  4.0      16              on         False
> cephfs.application-tst.data   4488M       51200M   4.0         4499G  0.0444                                  1.0      32              on         False
> cephfs.application-acc.meta  11430M                4.0         4499G  0.0099                                  4.0      16              on         False
> cephfs.application-acc.data   1244G                4.0         4499G  1.1062        1.0000           0.9556   1.0     128              on         False
> ----------------------------------------------------------------------------
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


