Re: Ceph not showing full capacity

On Sun, 25 Oct 2020 at 15:18, Amudhan P <amudhan83@xxxxxxxxx> wrote:

> Hi,
>
> Just for my quick understanding: how are PGs responsible for space
> allocation to a pool?
>

An object's name decides which PG (from the list of PGs in the pool) it
will end up on, so if you have very few PGs, the hashed/pseudorandom
placement will be unbalanced at times. As an example, if you have only 8
PGs and write 9 large objects, then at least one (but probably two or
three) PGs will receive two or more of those 9, and some will receive
none, on pure statistics. If you have 100 PGs, the chance of one PG
getting two of those nine objects is much smaller. Overall, with all
pools accounted for, one should aim for something like 100 PGs per OSD,
but you also need to count the replication factor for each pool: if you
have replication = 3 and a pool gets 128 PGs, it will place 3*128 PGs
out on the various OSDs according to the crush rules.
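
To make the imbalance concrete, here is a minimal simulation sketch in
plain Python (md5 is just a stand-in for ceph's internal hashing, and
the object names and PG counts are made up for illustration):

    # Hash object names into PGs and count how the 9 objects land.
    import hashlib
    from collections import Counter

    def pg_for(name, pg_num):
        # Stand-in pseudorandom hash; ceph really uses rjenkins plus a
        # stable modulo, but any uniform hash shows the same effect.
        return int(hashlib.md5(name.encode()).hexdigest(), 16) % pg_num

    for pg_num in (8, 100):
        counts = Counter(pg_for("obj-%d" % i, pg_num) for i in range(9))
        print(pg_num, "PGs, objects per PG:",
              sorted(counts.values(), reverse=True))

With 8 PGs you will typically see some PG holding two or three of the
nine objects; with 100 PGs a collision among only nine objects is much
less likely.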

PGs don't have a size, but grow as needed, and since the next object to
be written can end up anywhere (depending on the hashed result), ceph df
must always tell you the worst case when listing how much data this pool
has "left". It will always be the OSD with the least space left that
limits the pool.
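
As a rough sketch of that bookkeeping (this is not the exact formula
ceph df uses, just the idea that the fullest OSD bounds the whole pool;
the OSD sizes and replication factor below are hypothetical):

    # If data spreads evenly over the OSDs, the OSD with the least
    # free space limits how much more the pool can accept.
    osd_free_bytes = [400e9, 350e9, 120e9, 380e9]  # hypothetical OSDs
    replication = 3

    tightest = min(osd_free_bytes)
    max_avail = tightest * len(osd_free_bytes) / replication
    print("pool MAX AVAIL ~ %.0f GB" % (max_avail / 1e9))

Here the OSD with 120 GB free caps the pool at roughly 160 GB usable,
even though the cluster as a whole has 1.25 TB free.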


> My understanding is that PGs basically help with object placement;
> when the number of PGs per OSD is high, there is a high possibility
> that one PG gets a lot more data than other PGs.


This statement seems incorrect to me.


> In this situation, we can use the balancer
> between OSDs.
> But I can't understand the logic of how it restricts space to a pool?
>


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



