Hi João,

You can see how much RocksDB space has been used with this command:

    ceph daemon osd.X perf dump

where X is the id of an OSD on the node you are running the command on. You are looking for this section in the output:

    "bluefs": {
        "gift_bytes": 0,
        "reclaim_bytes": 0,
        "db_total_bytes": 23966253056,
        "db_used_bytes": 1714421760,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 0,
        "slow_used_bytes": 0,
        "num_files": 24,
        "log_bytes": 552120320,
        "log_compactions": 0,
        "logged_bytes": 537051136,
        "files_written_wal": 1,
        "files_written_sst": 11,
        "bytes_written_wal": 429315193,
        "bytes_written_sst": 601384180,
        "bytes_written_slow": 0,
        "max_bytes_wal": 0,
        "max_bytes_db": 1714421760,
        "max_bytes_slow": 0
    },

If you have non-zero numbers in the slow_ entries then your RocksDB is spilling over onto the HDD.

As to whether moving the RocksDB and WAL onto HDD can cause performance degradation, that depends on how busy your disks are. If your HDDs are already working hard and you are now going to throw a lot more workload
onto them, then performance will degrade, possibly substantially. I have seen performance impacts of up to 75% when things have started spilling over from NVMe to HDD. By that I mean I had a lovely flat line ingesting objects, and that line suddenly dropped by 75% once the RocksDB had filled up and spilled over onto the HDD.
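If you want to check every OSD on a node for spillover in one go, something like the loop below should work. This is just a rough sketch, not tested on your cluster: it assumes jq is installed, that you run it as root on the OSD node, and that the admin sockets are in the default /var/run/ceph location.

    # Report DB usage and spillover for every OSD with an admin socket on this node.
    for sock in /var/run/ceph/ceph-osd.*.asok; do
        id=${sock##*/ceph-osd.}    # strip the path prefix...
        id=${id%.asok}             # ...and the suffix, leaving just the OSD id
        ceph daemon osd."$id" perf dump |
            jq --arg id "$id" '{osd: $id,
                                db_used_bytes: .bluefs.db_used_bytes,
                                slow_used_bytes: .bluefs.slow_used_bytes}'
    done

Any OSD that reports a non-zero slow_used_bytes has already spilled onto the slow device.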