Thanks Igor, indeed it does match!
# cat ceph-osd.0.log | grep wal
2018-08-16 11:55:27.181950 7fa47c106e00 4 rocksdb: Options.wal_dir: db
2018-08-16 11:55:27.181983 7fa47c106e00 4 rocksdb: Options.wal_bytes_per_sync: 0
2018-08-16 11:55:27.181984 7fa47c106e00 4 rocksdb: Options.wal_recovery_mode: 0
2018-08-16 11:55:27.182000 7fa47c106e00 4 rocksdb: Options.wal_filter: None
2018-08-16 11:55:27.182011 7fa47c106e00 4 rocksdb: Options.max_total_wal_size: 0

# ceph daemon osd.0 perf dump
...
    "bluefs": {
        "gift_bytes": 0,
        "reclaim_bytes": 0,
        "db_total_bytes": 80015777792,
        "db_used_bytes": 15728640,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 71988936704,
        "slow_used_bytes": 0,
        "num_files": 12,
        "log_bytes": 491520,
        "log_compactions": 0,
        "logged_bytes": 114688,
        "files_written_wal": 1,
        "files_written_sst": 1,
        "bytes_written_wal": 377013,
        "bytes_written_sst": 4842
    },
...

Just one additional question: is it normal that in the OSD log max_total_wal_size is set to 0? I was using the Ceph defaults at the time:

# ceph-conf --show-config | grep wal
bluefs_preextend_wal_files = false
bluestore_block_wal_create = false
bluestore_block_wal_path =
bluestore_block_wal_size = 100663296
rocksdb_separate_wal_dir = false

Regards,
Hervé

On 16/08/2018 at 16:05, Igor Fedotov wrote:
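For anyone reading this in the archives, a small sketch of how I read those bluefs counters (plain Python over the JSON shown above; my understanding, not an official check, is that wal_total_bytes == 0 means BlueFS has no dedicated WAL device, so the RocksDB WAL shares the DB device):

```python
import json

# bluefs counters as reported by `ceph daemon osd.0 perf dump`
# (values copied verbatim from the dump in this mail)
perf = json.loads("""
{
    "bluefs": {
        "db_total_bytes": 80015777792,
        "db_used_bytes": 15728640,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 71988936704
    }
}
""")

bluefs = perf["bluefs"]

# wal_total_bytes == 0: no dedicated WAL device was given to BlueFS,
# so the write-ahead log lives on the DB device (matching Options.wal_dir: db)
if bluefs["wal_total_bytes"] == 0:
    print("no dedicated WAL device; WAL shares the DB device")
else:
    print("dedicated WAL device: %d bytes" % bluefs["wal_total_bytes"])
```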
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com