Re: Does the number of PGs affect the total usable size of a pool?


 



Den tors 13 feb. 2025 kl 12:54 skrev Work Ceph
<work.ceph.user.mailing@xxxxxxxxx>:
> Thanks for the feedback!
> Yes, HEALTH_OK is there.
> The OSD status shows all of them as "exists,up".
>
> The interesting part is that "ceph df" shows the correct values in the "RAW
> STORAGE" section. However, for the SSD pool I have, it still shows the
> previous value as the maximum usable size.
> I had 384 TiB of RAW space before. The SSD pool is a replicated pool
> with a replica size of 3, so I had about 128 TiB of possible usable space
> for the pool. Now that I added a new node, I would expect 480 TiB of RAW
> space, which is what I see in the RAW STORAGE section, but the usable
> space for the pool has not changed. I would expect the usable space to
> grow to about 160 TiB. I know these limits will never be reached, as we
> have limits at 85%-90% for each OSD.
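The arithmetic above can be sketched as follows; this is a simplification that assumes an even data distribution (Ceph actually derives a pool's MAX AVAIL from the fullest OSD under the pool's CRUSH root, so a skewed distribution yields a lower number):

```python
# Rough relation between raw capacity and usable capacity of a
# replicated pool. This ignores distribution skew and overhead.

def usable_capacity(raw_tib: float, replica_size: int) -> float:
    """Approximate usable capacity of a replicated pool, in TiB."""
    return raw_tib / replica_size

print(usable_capacity(384, 3))  # ~128 TiB before the new node
print(usable_capacity(480, 3))  # ~160 TiB expected afterwards
```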

Have all PGs moved yet? If not, you have to wait until the old
OSDs have moved PGs over to the newly added ones.
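A few standard Ceph commands can show whether that migration is still in
progress (these need to be run against the cluster, e.g. from a monitor node):

```shell
# Watch recovery/backfill progress after adding the new node.
ceph -s               # look for "recovery"/"backfill" in the status output
ceph pg stat          # how many PGs are not yet active+clean
ceph osd df tree      # per-OSD usage; the new OSDs should fill up over time
ceph balancer status  # whether the balancer is still redistributing data
```

Once all PGs are active+clean and the new OSDs carry their share of the data,
the pool's MAX AVAIL in "ceph df" should reflect the added capacity.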

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


