Recently deployed cluster showing 9 TiB of raw usage without any load

Hello guys!


We have noticed an unexpected situation: a recently deployed Ceph cluster is reporting raw usage that seems a bit odd. The new cluster consists of 5 nodes, each with the following setup:

   - 128 GB of RAM
   - 2x Intel Xeon Silver 4210R CPUs
   - 1x 2 TB NVMe for RocksDB caching (DB layout check sketched after this list)
   - 5x 14 TB HDDs
   - 1x dual-port 25 GbE NIC in bond mode
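
In case it is useful, this is roughly how we inspect the OSD/DB layout (a minimal sketch, assuming the OSDs were created with ceph-volume; the OSD id 0 is just an example):

```
# On an OSD host: list OSDs with their data ([block]) and DB ([db]) devices,
# which should show each NVMe carved into DB volumes for the local HDD OSDs
ceph-volume lvm list

# From any node with admin access: per-OSD metadata, including its devices
# and whether it has a dedicated BlueFS DB device
ceph osd metadata 0 | grep -E 'devices|dedicated'
```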


Right after deploying the Ceph cluster, we see a raw usage of about 9 TiB, even though no load has been applied to the cluster yet. Has anyone seen such a situation, or can you help us understand it?
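
For what it is worth, one back-of-the-envelope observation (we are not sure this is the explanation): 5 nodes x 2 TB of NVMe is 10 TB, i.e. roughly 9.1 TiB, which is close to the raw usage we see, so it may simply be the DB devices being counted as used. These are the commands we use to look at the numbers:

```
# Cluster-wide breakdown of raw vs. per-pool usage
ceph df detail

# Per-OSD view: on an idle cluster this should show whether the "used"
# bytes sit under DATA or under OMAP/META (BlueFS/RocksDB) on each OSD
ceph osd df tree
```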


We are using Ceph Octopus, and we have set the following configuration overrides:

```
ceph_conf_overrides:
  global:
    osd pool default size: 3
    osd pool default min size: 1
    osd pool default pg autoscale mode: "warn"
    perf: true
    rocksdb perf: true
  mon:
    mon osd down out interval: 120
  osd:
    bluestore min alloc size hdd: 65536
```
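
A minimal sketch of how we verify that the overrides actually reached the running daemons (osd.0 is just an example id; as far as we understand, bluestore_min_alloc_size_hdd is only applied when an OSD is first created, so it should not change usage afterwards):

```
# Effective value as reported for a running OSD
ceph config show osd.0 bluestore_min_alloc_size_hdd

# Same check via the admin socket, run on the host where osd.0 lives
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
```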


Any tips or help explaining this situation are welcome!