Yes, I understand, but in that case you wouldn't have the issue, because the DB could fit on the OSD; it would simply use space on the OSD itself, which is an SSD. Otherwise I don't know what the secret of storing billions of objects is; the OSDs aren't even 20% used. If I extrapolate from the current situation, even with 1x 2 TB NVMe for WAL+DB, the NVMe would be full again by the time the 15.3 TB OSD reaches 50%, so I would need a 4 TB NVMe in front of each 15.3 TB SSD drive, which I guess doesn't make sense.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Janne Johansson <icepic.dz@xxxxxxxxx>
Sent: Tuesday, September 28, 2021 1:36 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: Ceph Users <ceph-users@xxxxxxx>
Subject: Re: Billions of objects upload with bluefs spillover cause osds down?

On Tue, 28 Sep 2021 at 08:15, Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:
> Regarding point 2, how can it spill over if I don't use a DB device, just block?

It can't, but it ACTS as if you had 100% spillover. Spilling over is a symptom of the DB sharing a device with the data. If you have no dedicated device, then ALL of the DB shares a device with the data. If you have a DB device and it is too small for the whole DB, then PARTS will spill over and you get a notice so you can decide how to act.

The spillover is not the problem; it is a symptom of "I have more DB than my DB device can hold". No DB device means "I immediately get the same effect as 100% spillover".

Think of it as a small glass of water spilling over when you overfill it. If the glass is small, lots of spillover. If the glass is missing, ALL the water "spills over".

--
May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
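
P.S. To make the sizing arithmetic concrete, a minimal Python sketch of why a DB partition fills in steps rather than linearly. It assumes the classic pre-Pacific RocksDB level layout BlueStore used (roughly a 3 GB base level growing 10x per level); those numbers are assumptions for illustration, not figures from this thread, and newer releases with sharded RocksDB behave differently.

GIB = 1024 ** 3

def usable_db_bytes(db_device_bytes, base_level_bytes=3 * GIB, multiplier=10):
    """How much of a DB device RocksDB can actually fill before the next
    level no longer fits in full and spills over to the data device."""
    usable = 0
    level = base_level_bytes
    while usable + level <= db_device_bytes:
        usable += level
        level *= multiplier
    return usable

# A 2 TiB NVMe partition per OSD, as in the sizing discussed above:
print(usable_db_bytes(2 * 1024 ** 4) / GIB)  # -> 333.0 GiB usable; beyond that, spillover

Under these assumptions only 3 + 30 + 300 = 333 GiB of the 2 TiB partition is usable, because the next 3 TiB level cannot fit in full, so a bigger NVMe does not help until it can hold the entire next level.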