Re: Ceph not showing full capacity

Hi,

Just for my quick understanding: how are PGs responsible for the space
allocated to a pool?

My understanding is that PGs basically help with object placement: when the
number of PGs on an OSD is high, there is a good chance that it gets a lot
more data than the other OSDs. In that situation we can use the balancer to
even things out between OSDs.

But I can't understand the logic of how that restricts the space available
to a pool.


On Sun, Oct 25, 2020 at 5:55 PM 胡 玮文 <huww98@xxxxxxxxxxx> wrote:

> Hi,
>
> In Ceph, when you create an object, it cannot simply go to any OSD that has
> room. An object is mapped to a placement group using a hash algorithm, and
> placement groups are in turn mapped to OSDs. See [1] for details. So, if any
> of your OSDs goes full, write operations can no longer be guaranteed to
> succeed. Once you correct the imbalance, you should see more available space.
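>
> A minimal sketch of that mapping in Python (this is not Ceph's real code:
> Ceph uses rjenkins hashing and CRUSH, and the helper names below are made
> up for illustration), showing why a full OSD cannot simply be skipped:
>
> import hashlib
>
> PG_NUM = 289       # placement groups in this pool
> NUM_OSDS = 48      # OSDs in the cluster
>
> def object_to_pg(name: str) -> int:
>     # Ceph hashes the object name into one of pg_num placement groups.
>     return int(hashlib.md5(name.encode()).hexdigest(), 16) % PG_NUM
>
> def pg_to_osds(pg_id: int, replicas: int = 3) -> list:
>     # Stand-in for CRUSH: deterministically pick 'replicas' OSDs for a PG.
>     return [(pg_id * 7 + i * 13) % NUM_OSDS for i in range(replicas)]
>
> pg = object_to_pg("my-object")
> print("object 'my-object' -> PG", pg, "-> OSDs", pg_to_osds(pg))
> # The target OSDs are fixed by the mapping, so the object cannot be
> # redirected to an emptier OSD when one of them is (near) full.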
>
> Also, you only have 289 placement groups, which I think is far too few for
> your 48 OSDs [2]. If you have more placement groups, the imbalance will be
> far less severe (a rough sizing sketch follows after the links).
>
> [1]: https://docs.ceph.com/en/latest/architecture/#mapping-pgs-to-osds
> [2]: https://docs.ceph.com/en/latest/rados/operations/placement-groups/
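>
> Regarding the PG count, a rough sizing sketch (assuming the usual rule of
> thumb of about 100 PGs per OSD and a 3-replica pool; both numbers are
> assumptions here, see [2] for proper guidance):
>
> import math
>
> osds = 48
> target_pgs_per_osd = 100   # rule-of-thumb target (assumption)
> replica_size = 3           # assuming a 3-replica pool
>
> raw = osds * target_pgs_per_osd / replica_size   # 1600.0
> pg_num = 2 ** math.ceil(math.log2(raw))          # round up to a power of two
> print(pg_num)                                    # 2048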
>
> > On 2020-10-25, at 18:24, Amudhan P <amudhan83@xxxxxxxxx> wrote:
> >
> > Hi Stefan,
> >
> > I have started the balancer, but what I don't understand is that there is
> > enough free space on the other disks.
> >
> > Why is that space not shown as available?
> > How do I reclaim the free space?
> >
> >> On Sun 25 Oct, 2020, 2:27 PM Stefan Kooman, <stefan@xxxxxx> wrote:
> >>> On 2020-10-25 05:33, Amudhan P wrote:
> >>> Yes, there is an imbalance in the PGs assigned to the OSDs.
> >>> `ceph osd df` output snip
> >>> ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
> >>>  0    hdd  5.45799   1.00000  5.5 TiB  3.6 TiB  3.6 TiB  9.7 MiB  4.6 GiB  1.9 TiB  65.94  1.31    13      up
> >>>  1    hdd  5.45799   1.00000  5.5 TiB  1.0 TiB  1.0 TiB  4.4 MiB  1.3 GiB  4.4 TiB  18.87  0.38     9      up
> >>>  2    hdd  5.45799   1.00000  5.5 TiB  1.5 TiB  1.5 TiB  4.0 MiB  1.9 GiB  3.9 TiB  28.30  0.56    10      up
> >>>  3    hdd  5.45799   1.00000  5.5 TiB  2.1 TiB  2.1 TiB  7.7 MiB  2.7 GiB  3.4 TiB  37.70  0.75    12      up
> >>>  4    hdd  5.45799   1.00000  5.5 TiB  4.1 TiB  4.1 TiB  5.8 MiB  5.2 GiB  1.3 TiB  75.27  1.50    20      up
> >>>  5    hdd  5.45799   1.00000  5.5 TiB  5.1 TiB  5.1 TiB  5.9 MiB  6.7 GiB  317 GiB  94.32  1.88    18      up
> >>>  6    hdd  5.45799   1.00000  5.5 TiB  1.5 TiB  1.5 TiB  5.2 MiB  2.0 GiB  3.9 TiB  28.32  0.56     9      up
> >>> MIN/MAX VAR: 0.19/1.88  STDDEV: 22.13
> >> ceph balancer mode upmap
> >> ceph balancer on
> >> The balancer should start balancing, and this should result in way more
> >> space being available. Good to know: the available space that ceph df
> >> reports is based on the disk that is most full.
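> >>
> >> As a back-of-the-envelope illustration of that last point (numbers taken
> >> from the ceph osd df snippet above; this ignores full_ratio and CRUSH
> >> weights, so treat it as a sketch, not the exact formula ceph df uses):
> >>
> >> osd_size_tib = 5.458          # raw size of each OSD
> >> fullest_use = 0.9432          # OSD 5 is 94.32% used
> >> mean_use = 0.9432 / 1.88      # VAR is %USE / mean, so the mean is ~50%
> >>
> >> print("headroom on the fullest OSD: %.2f TiB" % (osd_size_tib * (1 - fullest_use)))
> >> print("headroom at the cluster mean: %.2f TiB" % (osd_size_tib * (1 - mean_use)))
> >> # The reported available space tracks the first (small) number until the
> >> # balancer evens the placement groups out.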
> >>
> >> There is all sorts of tuning available for the balancer, although I
> >> can't find it in the documentation; the Ceph documentation improvement
> >> project is working on that. See [1] for information. You can look at the
> >> Python code to see which variables you can tune:
> >> /usr/share/ceph/mgr/balancer/module.py
> >>
> >> ceph config set mgr mgr/balancer/begin_weekday 1
> >> ceph config set mgr mgr/balancer/end_weekday 5
> >> ceph config set mgr mgr/balancer/begin_time 1000
> >> ceph config set mgr mgr/balancer/end_time 1700
> >> ^^ to restrict the balancer to run only on weekdays (Monday to Friday),
> >> from 10:00 to 17:00.
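> >>
> >> A hedged sketch of how such a window can be checked (assumed semantics:
> >> HHMM integers and weekdays numbered 1=Monday .. 5=Friday, matching the
> >> values above; /usr/share/ceph/mgr/balancer/module.py is the authoritative
> >> source for how the module really parses these options):
> >>
> >> import datetime
> >>
> >> begin_time, end_time = 1000, 1700        # HHMM, as set above
> >> begin_weekday, end_weekday = 1, 5        # assuming 1=Monday .. 5=Friday
> >>
> >> now = datetime.datetime.now()
> >> hhmm = now.hour * 100 + now.minute
> >> in_window = (begin_weekday <= now.isoweekday() <= end_weekday
> >>              and begin_time <= hhmm < end_time)
> >> print("balancer window open right now:", in_window)
> >>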
> >> Gr. Stefan
> >> [1]: https://docs.ceph.com/en/latest/rados/operations/balancer/#balancer
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



