Re: Recently deployed cluster showing 9Tb of raw usage without any load deployed

Originally you mentioned 14 TB HDDs, not 15 TB. Could that be part of the discrepancy?

If not, could you please share the "ceph osd df tree" output?
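
Something along these lines can make the per-OSD DB accounting visible (a rough sketch only; the JSON field names kb_used_meta / kb_used_data are from memory, so please double-check them against your Octopus build):
```
#!/usr/bin/env python3
# Rough sketch: sum the META column of `ceph osd df tree` to see how much
# of RAW USED is BlueFS/DB accounting rather than object data.
# The field names (kb_used_meta, kb_used_data) are from memory --
# please verify them against your cluster before relying on this.
import json
import subprocess

out = subprocess.check_output(["ceph", "osd", "df", "tree", "--format", "json"])
tree = json.loads(out)

total_meta_kib = 0
for node in tree["nodes"]:
    if node.get("type") != "osd":
        continue  # skip root/host buckets of the tree
    meta_kib = node.get("kb_used_meta", 0)
    data_kib = node.get("kb_used_data", 0)
    total_meta_kib += meta_kib
    print(f"{node['name']}: data {data_kib / 2**30:.2f} TiB, "
          f"meta {meta_kib / 2**30:.2f} TiB")

print(f"total META across all OSDs: {total_meta_kib / 2**30:.2f} TiB")
```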


On 4/4/2023 2:18 PM, Work Ceph wrote:
Thank you guys for your replies. The "used space" there is exactly that: it is the accounting for RocksDB and the WAL.
```
RAW USED: The sum of USED space and the space allocated to the db and wal BlueStore partitions.
```

There is one detail I do not understand. We are off-loading the WAL and RocksDB to an NVMe device; however, Ceph still seems to think that space comes out of our data-plane disks. We have about 375 TB (5 nodes * 5 disks * 15 TB) of HDD capacity, yet Ceph shows 364 TB of usable space, as if the space dedicated to the WAL and RocksDB had been deducted from the HDDs even though those volumes live on a different device. Is that a bug of some sort?
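
As a rough sanity check on these figures (my own back-of-the-envelope arithmetic; the assumption that the full 2 TB NVMe in each node is handed to the DB/WAL volumes is mine, not something measured):
```
# Back-of-the-envelope check (assumption: the whole 2 TB NVMe per node
# is given to the DB/WAL volumes of that node's five OSDs).
TB = 10 ** 12   # vendor terabyte
TiB = 2 ** 40   # tebibyte, the unit `ceph df` reports in

nodes = 5
db_capacity = nodes * 2 * TB   # 10 TB of NVMe in total

print(f"DB/WAL capacity: {db_capacity / TiB:.1f} TiB")        # ~9.1 TiB
print("RAW USED from `ceph df` below: 9.3 TiB")
print(f"SIZE - AVAIL from `ceph df` below: {373 - 364} TiB")  # also ~9 TiB
```
So the ~9 TiB of "raw usage" seems to line up with the NVMe DB volumes rather than with anything actually written to the HDDs.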


On Tue, Apr 4, 2023 at 6:31 AM Igor Fedotov <igor.fedotov@xxxxxxxx> wrote:

    Please also note that the total cluster size reported below as SIZE
    apparently includes the DB volumes:

    # ceph df
    --- RAW STORAGE ---
    CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
    hdd    373 TiB  364 TiB  9.3 TiB   9.3 TiB       2.50

    On 4/4/2023 12:22 PM, Igor Fedotov wrote:
    > Do you have standalone DB volumes for your OSD?
    >
    > If so, then RAW usage is most likely that high because the DB
    > volumes' space is already counted as in-use.
    >
    > Could you please share "ceph osd df tree" output to prove that?
    >
    >
    > Thanks,
    >
    > Igor
    >
    > On 4/4/2023 4:25 AM, Work Ceph wrote:
    >> Hello guys!
    >>
    >>
    >> We noticed an unexpected situation: in a recently deployed Ceph
    >> cluster we are seeing raw usage that is a bit odd.
    >>
    >>
    >> We have a new cluster with 5 nodes, each with the following setup:
    >>
    >>     - 128 GB of RAM
    >>     - 2x Intel Xeon Silver 4210R CPUs
    >>     - 1 NVMe of 2 TB for RocksDB/WAL off-loading
    >>     - 5 HDDs of 14 TB
    >>     - 1 dual-port 25 Gbps NIC in bond mode
    >>
    >>
    >> Right after deploying the Ceph cluster, we see a raw usage of
    >> about 9 TiB, even though no load has been applied to the cluster.
    >> Have you guys seen such a situation, or can you help us understand it?
    >>
    >>
    >> We are using Ceph Octopus, and we have set the following
    >> configurations:
    >>
    >> ```
    >> ceph_conf_overrides:
    >>   global:
    >>     osd pool default size: 3
    >>     osd pool default min size: 1
    >>     osd pool default pg autoscale mode: "warn"
    >>     perf: true
    >>     rocksdb perf: true
    >>   mon:
    >>     mon osd down out interval: 120
    >>   osd:
    >>     bluestore min alloc size hdd: 65536
    >> ```
    >>
    >>
    >> Any tip or help on how to explain this situation is welcome!
    >> _______________________________________________
    >> ceph-users mailing list -- ceph-users@xxxxxxx
    >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
    >
    --
    Igor Fedotov
    Ceph Lead Developer

    Looking for help with your Ceph cluster? Contact us at
    https://croit.io

    croit GmbH, Freseniusstr. 31h, 81247 Munich
    CEO: Martin Verges - VAT-ID: DE310638492
    Com. register: Amtsgericht Munich HRB 231263
    Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



