Re: Ceph Stretch Cluster - df pool size (Max Avail)

https://tracker.ceph.com/issues/56650

There's a PR in progress to resolve this issue now. (Thanks, Prashant!)
-Greg

On Thu, Jul 28, 2022 at 7:52 AM Nicolas FONTAINE <n.fontaine@xxxxxxx> wrote:
>
> Hello,
>
> We have exactly the same problem. Did you find an answer or should we
> open a bug report?
>
> Sincerely,
>
> Nicolas.
>
> > On 23/06/2022 at 11:42, Kilian Ries wrote:
> > Hi Joachim,
> >
> >
> > Yes, I assigned the stretch rule to the pool (4x replica / 2x min). The rule says that two replicas should be placed in each datacenter.
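> >
> > For reference, a minimal sketch of how this can be checked (the pool name
> > vm_stretch is taken from the ceph df output further down; nothing here is
> > specific to stretch mode):
> >
> > $ ceph osd pool get vm_stretch crush_rule   # should report stretch_rule
> > $ ceph osd pool get vm_stretch size         # 4 replicas in total
> > $ ceph osd pool get vm_stretch min_size     # 2, so one site may go down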
> >
> >
> > $ ceph osd tree
> > ID   CLASS  WEIGHT    TYPE NAME                   STATUS  REWEIGHT  PRI-AFF
> >   -1         62.87799  root default
> > -17         31.43900      datacenter site1
> > -15         31.43900          rack b7
> >   -3         10.48000              host host01
> >    0    ssd   1.74699                  osd.0           up   1.00000  1.00000
> >    1    ssd   1.74699                  osd.1           up   1.00000  1.00000
> >    2    ssd   1.74699                  osd.2           up   1.00000  1.00000
> >    3    ssd   1.74699                  osd.3           up   1.00000  1.00000
> >    4    ssd   1.74699                  osd.4           up   1.00000  1.00000
> >    5    ssd   1.74699                  osd.5           up   1.00000  1.00000
> >   -5         10.48000              host host02
> >    6    ssd   1.74699                  osd.6           up   1.00000  1.00000
> >    7    ssd   1.74699                  osd.7           up   1.00000  1.00000
> >    8    ssd   1.74699                  osd.8           up   1.00000  1.00000
> >    9    ssd   1.74699                  osd.9           up   1.00000  1.00000
> >   10    ssd   1.74699                  osd.10          up   1.00000  1.00000
> >   11    ssd   1.74699                  osd.11          up   1.00000  1.00000
> >   -7         10.48000              host host03
> >   12    ssd   1.74699                  osd.12          up   1.00000  1.00000
> >   13    ssd   1.74699                  osd.13          up   1.00000  1.00000
> >   14    ssd   1.74699                  osd.14          up   1.00000  1.00000
> >   15    ssd   1.74699                  osd.15          up   1.00000  1.00000
> >   16    ssd   1.74699                  osd.16          up   1.00000  1.00000
> >   17    ssd   1.74699                  osd.17          up   1.00000  1.00000
> > -18         31.43900      datacenter site2
> > -16         31.43900          rack h2
> >   -9         10.48000              host host04
> >   18    ssd   1.74699                  osd.18          up   1.00000  1.00000
> >   19    ssd   1.74699                  osd.19          up   1.00000  1.00000
> >   20    ssd   1.74699                  osd.20          up   1.00000  1.00000
> >   21    ssd   1.74699                  osd.21          up   1.00000  1.00000
> >   22    ssd   1.74699                  osd.22          up   1.00000  1.00000
> >   23    ssd   1.74699                  osd.23          up   1.00000  1.00000
> > -11         10.48000              host host05
> >   24    ssd   1.74699                  osd.24          up   1.00000  1.00000
> >   25    ssd   1.74699                  osd.25          up   1.00000  1.00000
> >   26    ssd   1.74699                  osd.26          up   1.00000  1.00000
> >   27    ssd   1.74699                  osd.27          up   1.00000  1.00000
> >   28    ssd   1.74699                  osd.28          up   1.00000  1.00000
> >   29    ssd   1.74699                  osd.29          up   1.00000  1.00000
> > -13         10.48000              host host06
> >   30    ssd   1.74699                  osd.30          up   1.00000  1.00000
> >   31    ssd   1.74699                  osd.31          up   1.00000  1.00000
> >   32    ssd   1.74699                  osd.32          up   1.00000  1.00000
> >   33    ssd   1.74699                  osd.33          up   1.00000  1.00000
> >   34    ssd   1.74699                  osd.34          up   1.00000  1.00000
> >   35    ssd   1.74699                  osd.35          up   1.00000  1.00000
> >
> >
> > So according to my calculation it should be:
> >
> >
> > (6 nodes * 6 SSDs * 1.8 TB) / 4 = ~16 TB
> >
> >
> > Could this be a bug in stretch mode, so that only half of the available size is displayed?
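> >
> > For what it's worth, the reported value would roughly line up if MAX AVAIL
> > were computed from the capacity of a single site only (this is an
> > assumption, not something verified in the code):
> >
> > 31.4 TiB (one site) * 0.95 (default full ratio) / 4 replicas = ~7.5 TiB
> >
> > instead of the expected 62.9 TiB * 0.95 / 4 = ~14.9 TiB.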
> >
> >
> > Regards,
> >
> > Kilian
> >
> >
> > ________________________________
> > From: Clyso GmbH - Ceph Foundation Member <joachim.kraftmayer(a)clyso.com>
> > Sent: Wednesday, June 22, 2022 18:20:59
> > To: Kilian Ries; ceph-users(a)ceph.io
> > Subject: Re:  Ceph Stretch Cluster - df pool size (Max Avail)
> >
> > Hi Kilian,
> >
> > We do not currently use this mode of Ceph clustering, but normally you
> > need to assign the CRUSH rule to the pool as well; otherwise it will
> > use rule 0 as the default.
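> >
> > A minimal sketch of that assignment, using the pool and rule names from
> > this thread (vm_stretch / stretch_rule) purely as an example:
> >
> > $ ceph osd pool set vm_stretch crush_rule stretch_rule  # bind the stretch rule to the pool
> > $ ceph osd pool set vm_stretch size 4                   # four replicas in total, two per site
> > $ ceph osd pool set vm_stretch min_size 2               # keep serving I/O if one site is down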
> >
> > The following calculation for rule 0 would also apply, approximately:
> >
> > (3 nodes * 6 SSDs * 1.8 TB) / 4 = ~8.1 TB
> >
> > Hope it helps, Joachim
> >
> >
> > ___________________________________
> > Clyso GmbH - Ceph Foundation Member
> >
> > On 22.06.22 at 18:09, Kilian Ries wrote:
> >> Hi,
> >>
> >>
> >> I'm running a Ceph stretch cluster with two datacenters. Each datacenter has 3 OSD nodes (6 in total) and 2 monitors. An additional monitor is deployed as the arbiter node in a third datacenter.
> >>
> >>
> >> Each OSD node has 6 SSDs with 1.8 TB of storage, which gives me a total of about 63 TB of raw storage (6 nodes * 6 SSDs * 1.8 TB = ~63 TB).
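> >>
> >> (For reference: 36 OSDs with a CRUSH weight of 1.74699 each gives
> >> ~62.9 TiB, which is the 63 TiB of raw storage that ceph df reports below.)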
> >>
> >>
> >> In stretch mode my pool is configured with 4x replication, and as far as I understand this should give me a maximum pool storage size of ~15 TB (63 TB / 4 = 15.75 TB). But if I run "ceph df" it shows me only half of that, about ~7.5 TiB.
> >>
> >>
> >>
> >> $ ceph df
> >>
> >> --- RAW STORAGE ---
> >> CLASS    SIZE   AVAIL    USED  RAW USED  %RAW USED
> >> ssd    63 TiB  63 TiB  35 GiB    35 GiB       0.05
> >> TOTAL  63 TiB  63 TiB  35 GiB    35 GiB       0.05
> >>
> >> --- POOLS ---
> >> POOL                   ID  PGS   STORED  OBJECTS    USED  %USED  MAX AVAIL
> >> device_health_metrics   1    4  4.4 MiB       36  17 MiB      0    7.5 TiB
> >> vm_stretch              2   64  8.2 GiB    2.19k  33 GiB   0.11    7.5 TiB
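> >>
> >> (A few commands that can help cross-check what MAX AVAIL is based on;
> >> just a sketch, none of this is specific to stretch mode:)
> >>
> >> $ ceph osd pool ls detail                 # size, min_size and crush_rule per pool
> >> $ ceph osd crush rule dump stretch_rule   # the rule steps the pool maps through
> >> $ ceph df detail                          # per-pool usage with additional columns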
> >>
> >>
> >>
> >> My replication rule is from the Ceph documentation:
> >>
> >>
> >> https://docs.ceph.com/en/latest/rados/operations/stretch-mode/
> >>
> >>
> >> rule stretch_rule {
> >>         id 1
> >>         min_size 1
> >>         max_size 10
> >>         type replicated
> >>         step take site1
> >>         step chooseleaf firstn 2 type host
> >>         step emit
> >>         step take site2
> >>         step chooseleaf firstn 2 type host
> >>         step emit
> >> }
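> >>
> >> (In case it is useful: a minimal sketch of how such a rule is typically
> >> added by editing the CRUSH map offline. The file names are just examples.)
> >>
> >> $ ceph osd getcrushmap -o crush.bin     # export the current CRUSH map
> >> $ crushtool -d crush.bin -o crush.txt   # decompile to text, then add the rule above
> >> $ crushtool -c crush.txt -o crush.new   # recompile after editing
> >> $ ceph osd setcrushmap -i crush.new     # inject the updated map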
> >>
> >>
> >>
> >> Any idea why Ceph shows me only about half the size I should be able to use (with 4x replication on the pool)?
> >>
> >>
> >> Thanks,
> >>
> >> Regards
> >>
> >>
> >> Kilian
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users(a)ceph.io
> >> To unsubscribe send an email to ceph-users-leave(a)ceph.io
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



