Re: ceph df space usage confusion - balancing needed?

Yes, if you have uneven OSD sizes I guess you could end up in a situation
where you have lots of 1TB OSDs and a number of 2TB OSDs, but pool
replication forces the pool to keep one PG replica on a 1TB OSD. Then it
would be possible to state "this pool can't write more than X G", but when
it is full, there would still be free space left on some of the 2TB OSDs
which the pool can't utilize. Probably the same for uneven OSD hosts, if
you have those.
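To make that concrete, here is a minimal toy sketch in Python (made-up OSD
numbers, and deliberately much simpler than Ceph's real MAX AVAIL
calculation) showing why a pool's writable space is bounded by its most
constrained OSD rather than by the raw free space:

----------------------------------------------------------------------------
# toy_pool_space.py -- illustrative sketch only, NOT Ceph's actual algorithm.
# Assume a 3x replicated pool whose PGs must place one copy on each OSD below.

osds = [
    {"name": "osd.0", "size_tb": 1.0, "used_tb": 0.9},  # small drive, nearly full
    {"name": "osd.1", "size_tb": 2.0, "used_tb": 0.9},
    {"name": "osd.2", "size_tb": 2.0, "used_tb": 0.9},
]

# Raw free space across the cluster, roughly what GLOBAL AVAIL reports.
raw_free = sum(o["size_tb"] - o["used_tb"] for o in osds)

# Every write lands on all three OSDs here, so the pool can only grow
# until the fullest (smallest) OSD runs out of space.
pool_writable = min(o["size_tb"] - o["used_tb"] for o in osds)

print(f"raw free: {raw_free:.1f} TB, "
      f"but the pool can only take ~{pool_writable:.1f} TB more")
# -> raw free: 2.3 TB, but the pool can only take ~0.1 TB more
----------------------------------------------------------------------------

In this toy model, spreading the used data in proportion to OSD size would
close most of that gap, which is roughly what the balancer module tries to
do for you.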

On Sat, 20 Oct 2018 at 20:28, Oliver Freyermuth
<freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
>
> Dear Janne,
>
> yes, of course. But since we only have two pools here, this cannot explain the difference.
> The metadata is replicated (3 copies) across ssd drives, and we have < 3 TB of total raw storage for that.
> So looking at the raw space usage, we can ignore that.
>
> All the rest is used for the ceph_data pool. So the ceph_data pool, in terms of raw storage, is about 50 % used.
>
> But in terms of the storage shown for that pool, it's at almost 63 % (%USED).
> So I guess this can purely be from bad balancing, correct?
>
> Cheers,
>         Oliver
>
> On 20.10.18 at 19:49, Janne Johansson wrote:
> > Do mind that drives may have more than one pool on them, so RAW space
> > is what it says: how much free space there is overall. The MAX AVAIL and
> > %USED in the per-pool stats take replication into account, so they tell
> > you how much data you may write into that particular pool, given that
> > pool's replication or EC settings.
> >
> > On Sat, 20 Oct 2018 at 19:09, Oliver Freyermuth
> > <freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
> >>
> >> Dear Cephalopodians,
> >>
> >> like many others, I'm also a bit confused by the "ceph df" output
> >> in a pretty straightforward configuration.
> >>
> >> We have a CephFS (12.2.7) running with a 4+2 EC profile.
> >>
> >> I get:
> >> ----------------------------------------------------------------------------
> >> # ceph df
> >> GLOBAL:
> >>     SIZE     AVAIL     RAW USED     %RAW USED
> >>     824T      410T         414T         50.26
> >> POOLS:
> >>     NAME                ID     USED     %USED     MAX AVAIL     OBJECTS
> >>     cephfs_metadata     1      452M      0.05          860G       365774
> >>     cephfs_data         2      275T     62.68          164T     75056403
> >> ----------------------------------------------------------------------------
> >>
> >> So about 50 % of the raw space is used, but already ~63 % of the filesystem space is used.
> >> Is this purely from imperfect balancing?
> >> In "ceph osd df", I do indeed see OSD usages spreading from 65.02 % usage down to 37.12 %.
> >>
> >> We did not yet use the balancer plugin.
> >> We don't have any pre-luminous clients.
> >> In that setup, I take it that "upmap" mode would be recommended - correct?
> >> Any "gotchas" using that on luminous?
> >>
> >> Cheers,
> >>         Oliver
> >>
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@xxxxxxxxxxxxxx
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
>
>


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



