Hello. I'm trying to fix a misconfigured cluster deployment (Nautilus 14.2.16).

Cluster usage is 40%, EC pool with RGW.

Every node has:
  20 x OSD = TOSHIBA MG08SCA16TEY 16.0TB
   2 x DB  = NVMe PM1725b 1.6TB (Linux mdadm RAID1)

NVMe utilization is always around 90-99%. With "iostat -xdh 1":

     r/s     w/s  rkB/s   wkB/s  rrqm/s   wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util  Device
  168.00 3619.00   7.2M  367.7M    0.00 90510.00   0.0%  96.2%     1.10     9.21   22.86     43.8k    104.0k   0.25  96.0%  nvme0c0n1
   19.00 3670.00   1.7M  373.5M    0.00 90510.00   0.0%  96.1%     0.26    29.61   95.99     89.7k    104.2k   0.27  98.0%  nvme1c1n1

The problem is:

  BLUEFS_SPILLOVER BlueFS spillover detected on 120 OSD(s)
      osd.194 spilled over 42 GiB metadata from 'db' device (39 GiB used of 50 GiB) to slow device
      osd.195 spilled over 34 GiB metadata from 'db' device (40 GiB used of 50 GiB) to slow device
      osd.196 spilled over 28 GiB metadata from 'db' device (40 GiB used of 50 GiB) to slow device
      osd.197 spilled over 25 GiB metadata from 'db' device (41 GiB used of 50 GiB) to slow device
      osd.198 spilled over 30 GiB metadata from 'db' device (41 GiB used of 50 GiB) to slow device

DB and WAL size:

  bluestore_block_db_size  = 53687091200
  bluestore_block_wal_size = 0

  nvme0n1       259:2    0  1.5T  0 disk
  └─md0           9:0    0  1.5T  0 raid1
    ├─md0p1     259:4    0   50G  0 md
    ├─md0p2     259:5    0   50G  0 md
    ├─ ...                           (+n more 50G partitions)
    └─md0p20    259:22   0   50G  0 md

How can I change the RocksDB level sizes up to 500 MB --> 5 GB --> 50 GB?
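
As far as I understand, those level sizes come from RocksDB's max_bytes_for_level_base (default 256 MB) and max_bytes_for_level_multiplier (default 10), which BlueStore passes through the bluestore_rocksdb_options string. Below is a rough, untested sketch of what I think the change would look like; the exact default option string should be taken from a running OSD, and 536870912 (512 MiB) is only my approximation of the 500 MB target:

  # Dump the current option string first, since bluestore_rocksdb_options
  # replaces the whole default string rather than merging into it:
  ceph daemon osd.194 config get bluestore_rocksdb_options

  # ceph.conf, [osd] section: paste the full string printed above and
  # append the level settings. With multiplier 10, a 512 MiB base gives
  # level targets of roughly 0.5 GiB -> 5 GiB -> 50 GiB.
  bluestore_rocksdb_options = <existing defaults>,max_bytes_for_level_base=536870912,max_bytes_for_level_multiplier=10

  # Restart each OSD afterwards; existing data only moves to the new
  # level layout as compaction runs.
  systemctl restart ceph-osd@194

If I read the docs right, I'm still not sure this alone removes the spillover, since the lower levels, the WAL and compaction scratch space also have to fit inside the 50 GiB partition alongside the 50 GiB top level.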