Re: Does the number of PGs affect the total usable size of a pool?

Thanks for the feedback!

Yes, HEALTH_OK is there.

The OSD status shows all of them as "exists,up".

The interesting part is that "ceph df" shows the correct values in the "RAW
STORAGE" section. However, for the SSD pool I have, it still shows the
previous value as the maximum usable space.
I had 384 TiB of RAW space before. The SSD pool is a replicated pool with
size 3; therefore, I had about 128 TiB of possible usable space for the
pool. Now that I have added a new node, I would expect 480 TiB of RAW
space, which is what I see in the RAW STORAGE section, but the usable
space for the pool has not changed; I would expect it to grow to about
160 TiB. I know that these limits will never be reached, as we cap each
OSD at 85%-90%.
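
For reference, here is the checklist I am walking through while debugging
this. As far as I understand, the MAX AVAIL column of "ceph df" is an
estimate derived from the fullest OSD the pool's CRUSH rule can reach,
divided by the replica count, so it will not grow if the rule cannot
actually select the new OSDs. The pool name "ssd-pool" below is just a
placeholder:

    ceph osd df tree                    # per-OSD usage and CRUSH weights; new OSDs should show a nonzero weight
    ceph osd crush rule dump            # confirm the pool's rule targets the root/device class the new node joined
    ceph osd pool get ssd-pool pg_num   # the pool's PG count; too few PGs skews balance, it does not cap raw capacity
    ceph osd pool autoscale-status      # if the autoscaler is enabled, see whether it recommends a higher pg_num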

On Thu, Feb 13, 2025 at 8:41 AM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

> I still think they are not part of the cluster somehow. "ceph osd status"
> most likely shows they are not used. When you add just one OSD, you should
> see something in your cluster capacity and some rebalancing. Is the Ceph
> status HEALTH_OK?
>
>
> > Thanks for the prompt reply.
> >
> > Yes, it does. All of them are up, with the correct class that is used by
> > the CRUSH algorithm.
> >
> > On Thu, Feb 13, 2025 at 7:47 AM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
> >
> >
> >       > Hello guys,
> >       > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
> >       > single pool that consumes all OSDs of all nodes. After adding
> >       > another host, I noticed that no extra space was added. Can this
> >       > be a result of the number of PGs I am using?
> >       >
> >       > I mean, when adding more hosts/OSDs, should I always consider
> >       > increasing the number of PGs of a pool?
> >       >
> >
> >       ceph osd tree
> >
> >       shows all up and with correct weight?
> >
>
>
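
P.S. For completeness, this is the kind of "ceph osd tree" excerpt I am
checking against; the IDs and weights below are illustrative only, not
taken from my cluster. New OSDs that show a weight of 0, or that sit
outside the root/device class the pool's CRUSH rule uses, would explain
exactly this symptom:

    ID  CLASS  WEIGHT     TYPE NAME        STATUS  REWEIGHT  PRI-AFF
    -1         480.00000  root default
    -9          96.00000      host node5
    96   ssd     4.00000          osd.96       up   1.00000  1.00000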
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



