Re: After adding new OSDs, Pool Max Avail did not change.

Hi there,

Could you post the output of "ceph osd df tree"? I strongly suspect
this is the result of an imbalance, and that output is the easiest
way to confirm it. It would also show whether the new disks have
taken on PGs.
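
If it helps, something along these lines will list the SSD OSDs
sorted by utilization so any outlier stands out. This is only a
sketch: the "class" filter and the JSON field names are what I'd
expect from a Nautilus "ceph osd df", so please double-check them
against your version, and the second command needs jq installed.

ceph osd df tree class ssd

# or, sorted by utilization (highest first) from the JSON output:
ceph osd df -f json | jq -r '
  .nodes[]
  | select(.device_class == "ssd")
  | "\(.utilization)\t\(.name)\tpgs=\(.pgs)"' | sort -nr

If one or two OSDs sit well above the rest in %USE, that is very
likely your answer.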

Josh

On Tue, Aug 31, 2021 at 10:50 AM mhnx <morphinwithyou@xxxxxxxxx> wrote:
>
> I'm using Nautilus 14.2.16
>
> I had 20 SSD OSDs in my cluster and I added 10 more (each SSD = 960 GB).
> The SIZE increased to *(26 TiB)* as expected, but the replicated (size 3)
> pool's MAX AVAIL didn't change *(3.5 TiB)*.
> I've increased pg_num and the PG rebalance has finished.
>
> Do I need any special treatment to expand the pool's MAX AVAIL?
>
> CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
> hdd       2.7 PiB     1.0 PiB     1.6 PiB     1.6 PiB      61.12
> ssd       *26 TiB*    18 TiB      2.8 TiB     8.7 TiB      33.11
> TOTAL     2.7 PiB     1.1 PiB     1.6 PiB     1.7 PiB      60.85
>
> POOLS:
>     POOL                       ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
>     xxx.rgw.buckets.index      54     128     541 GiB     435.69k     541 GiB     4.82      *3.5 TiB*
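
On the MAX AVAIL question in your mail: as far as I understand it, for
a replicated pool Ceph projects MAX AVAIL from the OSD that would fill
up first relative to its share of the CRUSH weight, scales that up to
the whole subtree, and divides by the replica count, so one over-full
(or under-weighted) SSD caps the figure for the entire pool even after
you add capacity. Below is a rough back-of-the-envelope version of
that projection; it is my approximation rather than the exact PGMap
formula, it ignores reweight and the full ratios, and it assumes the
pool's rule spans all SSD OSDs:

ceph osd df -f json | jq -r '
  # keep only the SSD OSDs with a non-zero weight
  # (assumes the pool rule targets class "ssd")
  [.nodes[] | select(.device_class == "ssd" and .crush_weight > 0)] as $ssd
  # the raw space the pool can use is limited by the OSD with the least
  # free space relative to its share of the total CRUSH weight
  | ($ssd | map(.crush_weight) | add) as $total
  | ($ssd | map(.kb_avail / (.crush_weight / $total)) | min) as $limit
  # divide the raw limit by the replica count (3) and convert KiB to TiB
  | "projected MAX AVAIL (size 3): \($limit / 3 / 1024 / 1024 / 1024) TiB"'

If that lands near the 3.5 TiB you are seeing, imbalance is the
culprit, and the balancer module (or reweighting the fullest OSDs)
should get the space back.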
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


