Re: Recently deployed cluster showing 9TiB of raw usage without any load deployed

Do you have standalone DB volumes for your OSDs?

If so, then RAW usage is most likely that high because the space on the DB volumes is already counted as in use.
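As a rough sanity check (my arithmetic only, assuming the 5 x 2 TB NVMe devices described below are fully dedicated as DB volumes), that space alone lands right around the ~9 TiB you are seeing:

```
# 5 nodes x 1 NVMe DB device of 2 TB each = 10 TB raw
# convert vendor TB (10^12 bytes) to TiB (2^40 bytes):
echo "5 * 2 * 10^12 / 2^40" | bc -l    # ~9.09 TiB
```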

Could you please share the "ceph osd df tree" output to confirm that?
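Something along these lines should be enough (standard Ceph CLI; the exact column layout varies a bit between releases). The interesting part is how RAW USE compares to DATA for each OSD when no user data has been written yet:

```
# per-OSD breakdown, including SIZE, RAW USE, DATA, OMAP, META and AVAIL
ceph osd df tree

# cluster-wide totals for comparison
ceph df detail
```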


Thanks,

Igor

On 4/4/2023 4:25 AM, Work Ceph wrote:
Hello guys!


We noticed an unexpected situation. In a recently deployed Ceph cluster we
are seeing raw usage that is a bit odd. It is a new cluster with 5 nodes,
each with the following setup:

    - 128 GB of RAM
    - 2 x Intel Xeon Silver 4210R CPUs
    - 1 NVMe of 2 TB for RocksDB caching
    - 5 HDDs of 14 TB
    - 1 dual-port 25 Gbps NIC in bond mode.


Right after deploying the Ceph cluster, we see a raw usage of about 9TiB,
even though no load has been applied to the cluster. Have you seen such a
situation, or can you help us understand it?


We are using Ceph Octopus, and we have set the following configurations:

```
ceph_conf_overrides:
  global:
    osd pool default size: 3
    osd pool default min size: 1
    osd pool default pg autoscale mode: "warn"
    perf: true
    rocksdb perf: true
  mon:
    mon osd down out interval: 120
  osd:
    bluestore min alloc size hdd: 65536
```
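For reference, one way to confirm the OSDs actually picked these values up (osd.0 below is just an example ID; also worth noting, if I recall correctly bluestore_min_alloc_size_hdd is baked in when an OSD is created, so it only affects OSDs deployed after the override is in place):

```
# on the host running osd.0, via the admin socket:
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

# or dump the running config and filter the relevant options:
ceph daemon osd.0 config show | grep -E 'min_alloc|pool_default'
```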


Any tip or help on how to explain this situation is welcome!

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


