Upload of billions of objects with BlueFS spillover causing OSDs to go down?

Hi,

One of our users is migrating 1.2 billion objects into a single bucket from another system (Cassandra), and we are seeing BlueFS spillover on about 50% of the OSDs in our clusters.
We have 600-900 GB DB devices, but it seems they can't fit everything.
The cluster is also very unstable: I can't raise recovery or backfill limits above 1, because OSDs start rebooting, which makes recovery very slow.
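In case it is useful, this is roughly how we are checking the spillover (osd.12 is just an example ID):

    ceph health detail                    # lists the BLUEFS_SPILLOVER warning per OSD
    ceph daemon osd.12 perf dump bluefs   # db_used_bytes vs. slow_used_bytes show how much spilled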

1.
What did I miss when planning for this? The OSDs are 15.3 TB SSDs.
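My understanding of the RocksDB level math, in case I have that wrong (assuming the default 256 MB max_bytes_for_level_base with a 10x multiplier, and the pre-Pacific behaviour where a level only stays on the fast device if the whole level fits):

    L1 + L2 + L3 + L4 = 0.256 + 2.56 + 25.6 + 256 GB  =~ 284 GB on the DB device
    L5 would need ~2.56 TB, which a 600-900 GB partition cannot hold

So a 600-900 GB DB partition would effectively behave like a ~300 GB one, with everything beyond that spilling to the slow device.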

2.
If I migrate the DB off the NVMe with ceph-bluestore-tool and keep it together with the block device, would spillover still be an issue? I assume that if they live together there is nowhere left to spill over to.
I assume I also need to compact the spilled-over OSDs before removing the DB; see the sketch below.
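Something like this is what I had in mind (OSD ID and paths are placeholders; assuming ceph-bluestore-tool's bluefs-bdev-migrate sub-command, run while the OSD is stopped):

    systemctl stop ceph-osd@12
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact   # offline compaction first
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-12 \
        --devs-source /var/lib/ceph/osd/ceph-12/block.db \
        --dev-target /var/lib/ceph/osd/ceph-12/block
    systemctl start ceph-osd@12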

3.
Correct me if I'm wrong, but the separate DB device just holds the RocksDB metadata that tells BlueStore where each object's data lives on the block device. So if I remove it, the OSD will still know where the data is; it will simply read that metadata from the block device itself instead of from the separate NVMe.
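As a sanity check afterwards, I was planning to look at where BlueFS keeps its files (again an example OSD ID, with the OSD stopped):

    ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-12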

Thank you

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



