Re: Free space in ec-pool should I worry?

Yeah, I'm just following the autoscaler for now; it suggested 128 PGs. I might enable the balancer later, but I'm a bit wary of it because of the negative feedback I've seen.
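
If I do turn it on, my understanding is the sequence is roughly this (a minimal sketch with the standard ceph CLI; upmap mode assumed, which requires all clients to be Luminous or newer):

    # See whether the balancer is on and what plan it has
    ceph balancer status
    # upmap mode moves individual PGs via explicit mappings
    ceph balancer mode upmap
    # Enable automatic balancing
    ceph balancer on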

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

On 2021. Nov 1., at 19:29, Josh Baergen <jbaergen@xxxxxxxxxxxxxxxx> wrote:


To expand on the comments below, "max avail" takes into account usage
imbalance between OSDs. There's a pretty significant imbalance in this
cluster and Ceph assumes that the imbalance will continue, and thus
indicates that there's not much room left in the pool. Rebalancing
that pool will make a big difference in terms of top-OSD fullness and
the "max avail" metric.

Josh

On Mon, Nov 1, 2021 at 12:25 PM Alexander Closs <acloss@xxxxxxxxxxxxx> wrote:

Max available = the free space actually usable right now, given current per-OSD usage; it does not include already-used space.

-Alex
MIT CSAIL

On 11/1/21, 2:18 PM, "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx> wrote:

   It says max available: 115 TB while current use is 104 TB. What I don't understand is where the max available figure comes from, because the pool has no object count or size limit set:

   quotas for pool 'sin.rgw.buckets.data':
     max objects: N/A
     max bytes  : N/A
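
    For reference, that quota output is what the standard quota query prints:

      ceph osd pool get-quota sin.rgw.buckets.data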

   Istvan Szabo
   Senior Infrastructure Engineer
   ---------------------------------------------------
   Agoda Services Co., Ltd.
    e: istvan.szabo@xxxxxxxxx
   ---------------------------------------------------

   On 2021. Nov 1., at 18:48, Etienne Menguy <etienne.menguy@xxxxxxxx> wrote:

    POOL                  ID  PGS  STORED   (DATA)   (OMAP)  OBJECTS  USED     (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
    sin.rgw.buckets.data  24  128  104 TiB  104 TiB  0 B     1.30G    156 TiB  156 TiB  0 B     47.51  115 TiB    N/A            N/A          1.30G  0 B         0 B
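
    A sanity check on those numbers: USED/STORED = 156 TiB / 104 TiB = 1.5x raw overhead, consistent with an EC profile along the lines of k=4, m=2, where (k+m)/k = 1.5 (I'm inferring; the profile isn't shown here). And with MAX AVAIL at 115 TiB, %USED works out to 104 / (104 + 115) ≈ 47.5, matching the 47.51 shown, i.e. the pool could take roughly 115 TiB more user data at the current OSD distribution.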


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



