Re: MAX AVAIL goes up when I reboot an OSD node

I have seen this when one OSD on the node being rebooted is using more
space than the others. As far as I know, MAX AVAIL for the pool is
derived from the fullest OSD.

On Sun, Jun 14, 2020 at 4:29 PM KervyN <bb@xxxxxxxxx> wrote:
>
> Does anyone have any ideas on this?
>
> The mgr nodes are separate, pg_autoscaler is not active (I don't know what the impact would be on a 1 PB cluster), and it also happens when I turn off an OSD service on any node.
>
> It's the latest Ceph Nautilus.
>
> Cheers
>  - Boris
>
> > On 28.05.2020 at 23:42, Boris Behrens <bb@xxxxxxxxx> wrote:
> >
> > Dear people on this mailing list,
> >
> > I've got the "problem" that our MAX AVAIL value increases by about
> > 5-10 TB when I reboot a whole OSD node. After the reboot the value
> > goes back to normal.
> >
> > I would love to know WHY.
> >
> > Under normal circumstances I would ignore this behavior, but because
> > I am very new to Ceph I would like to understand why things like
> > this happen.
> > What I have read is that this value is calculated from the most-filled OSD.
> >
> > I set noout and norebalance while the node is offline, and I unset
> > both flags after the reboot.
> >
> > We are currently on nautilus.
> >
> > Cheers and thanks in advance
> > Boris
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



