Re: Ceph not showing full capacity

Hi Janne,

I agree with you; what I was trying to say is that a disk (OSD) with more
PGs will fill up more quickly.

But my question is: even though the raw disk space is 262 TB, the dashboard
shows a maximum of only 132 TB of storage for the 2-replica pool, and when
the pool is mounted via CephFS it shows 62 TB. I can understand that,
because of the replicas, the dashboard shows half of the raw space.

Why isn't the entire raw disk space shown as available space? Does the
number of PGs per pool play any vital role in the available space that is
reported?

On Mon, Oct 26, 2020 at 12:37 PM Janne Johansson <icepic.dz@xxxxxxxxx>
wrote:

>
>
> On Sun, Oct 25, 2020 at 15:18, Amudhan P <amudhan83@xxxxxxxxx> wrote:
>
>> Hi,
>>
>> For my quick understanding: how are PGs responsible for allocating
>> space to a pool?
>>
>
> An object's name decides which PG (from the list of PGs in the pool) it
> will end up in, so if you have very few PGs, the hashed/pseudorandom
> placement will be unbalanced at times. As an example, if you have only
> 8 PGs and write 9 large objects, then at least one (but probably two or
> three) PGs will receive two or more of those 9 objects, and some will
> receive none, purely by statistics. If you have 100 PGs, the chance of
> one PG getting two of those nine objects is much smaller. Overall, with
> all pools accounted for, one should aim for something like 100 PGs per
> OSD, but you also need to count the replication factor for each pool:
> if you have replication = 3 and a pool gets 128 PGs, it will place
> 3*128 PGs out on various OSDs according to the CRUSH rules.
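>
> As a rough illustration of that statistical point, here is a hypothetical
> Python sketch (a plain hash over made-up object names, not Ceph's actual
> CRUSH/PG mapping):
>
>     import hashlib
>     from collections import Counter
>
>     def pg_for(name, pg_num):
>         # Hash the object name and map it onto one of pg_num PGs.
>         h = int(hashlib.md5(name.encode()).hexdigest(), 16)
>         return h % pg_num
>
>     for pg_num in (8, 100):
>         counts = Counter(pg_for("object-%d" % i, pg_num) for i in range(9))
>         print(pg_num, "PGs ->", sorted(counts.values(), reverse=True))
>
>     # With only 8 PGs, some PG almost always ends up with two or more of
>     # the 9 objects; with 100 PGs such collisions are much rarer, so the
>     # data spreads more evenly.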
>
> PGs don't have a size, but grow as needed, and since the next object to
> be written can end up anywhere (depending on the hashed result), ceph df
> must always tell you the worst case when listing how much data this pool
> has "left". It will always be the OSD with the least space left that
> limits the pool.
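>
> As a simplified, hypothetical model of that "worst case" idea (not Ceph's
> exact MAX AVAIL calculation; the OSD count and free-space figures below
> are made up only to mirror the numbers in this thread):
>
>     def pool_max_avail(osd_free_bytes, replica):
>         # The pool can only grow until its most-constrained OSD fills up,
>         # so project the pool's remaining space from the OSD with the
>         # least free space, then divide by the replica count.
>         usable = min(osd_free_bytes) * len(osd_free_bytes)
>         return usable / replica
>
>     # e.g. 24 evenly-empty OSDs with ~10.9 TB free each and replica = 2
>     # gives roughly half the raw space: ~131 TB usable out of ~262 TB raw.
>     print(pool_max_avail([10.9e12] * 24, replica=2) / 1e12, "TB")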
>
>
>> My understanding is that PGs basically help with object placement; when
>> the number of PGs per OSD is high, there is a high possibility that one
>> PG gets a lot more data than other PGs.
>
>
> This statement seems incorrect to me.
>
>
>> In that situation, we can balance between the OSDs.
>> But I can't understand the logic of how it restricts space for a pool?
>>
>
>
> --
> May the most significant bit of your life be positive.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



