Re: Fixing BlueFS spillover (pacific 16.2.14)

Hello Chris, Igor,

I came here to say two things.

Firstly, thank you for this thread. I hadn't run "perf dump" or "bluefs stats" before and found them helpful in diagnosing the same problem you had.
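
In case it saves someone else a search, the commands in question look roughly like this (osd.0 is a placeholder id):

    # on the OSD host, via the admin socket
    ceph daemon osd.0 perf dump bluefs

    # from any node with an admin keyring
    ceph tell osd.0 bluefs stats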

Secondly, yes, 'ceph-volume lvm migrate' was effective (in Quincy 17.2.7) at finalising the migration of RocksDB data in this way. We discovered an OSD without a flash RocksDB device and used 'ceph-volume lvm new-db' to set one up. That resulted in the spillover warning and led me here. Fixed now.
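
For the record, the rough shape of what that involves is below (OSD id, fsid and VG/LV names are placeholders; the OSD must be stopped while these run):

    systemctl stop ceph-osd@12
    ceph-volume lvm new-db  --osd-id 12 --osd-fsid <osd-fsid> --target <db-vg>/<db-lv>
    ceph-volume lvm migrate --osd-id 12 --osd-fsid <osd-fsid> --from data --target <db-vg>/<db-lv>
    systemctl start ceph-osd@12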

Regards,
Greg.



On 18/10/23 08:20, Chris Dunlop wrote:
Hi Igor,

Thanks for the suggestions. You may have already seen my followup message where the solution was to use "ceph-bluestore-tool bluefs-bdev-migrate" to get the lingering 128KiB of data moved from the slow to the fast device. I wonder if your suggested "ceph-volume lvm migrate" would do the same.
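
For completeness, the general shape of that command is something like the following, run with the OSD stopped (osd id and paths here are placeholders/defaults):

    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-76 \
        --devs-source /var/lib/ceph/osd/ceph-76/block \
        --dev-target /var/lib/ceph/osd/ceph-76/block.db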

Notably, DB compaction didn't help in my case.

Cheers,

Chris

On Mon, Oct 16, 2023 at 03:46:17PM +0300, Igor Fedotov wrote:
Hi Chris,

For the first question (osd.76) you might want to try ceph-volume's "lvm migrate --from data --target <db lvm>" command. It looks like some persistent DB remnants are still kept on the main device, causing the alert.
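
Spelled out, that would look roughly like this (fsid and LV names are placeholders; run it with the OSD stopped):

    ceph-volume lvm migrate --osd-id 76 --osd-fsid <osd-fsid> --from data --target <db-vg>/<db-lv>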

W.r.t. osd.86's question - the line "SLOW        0 B         3.0 GiB     59 GiB" means that RocksDB's higher-level data (usually L3+) is spread over the DB and main (aka slow) devices, as 3 GB and 59 GB respectively.

In other words, the SLOW row refers to DB data which would normally reside on the slow device (due to RocksDB's data-mapping mechanics), but the improved BlueFS logic (introduced by https://github.com/ceph/ceph/pull/29687) permits extra DB disk usage for a part of this data.

Resizing the DB volume followed by a DB compaction should do the trick and move all that data to the DB device. Alternatively, ceph-volume's lvm migrate command should do the same, but without a DB volume resize the result will only be temporary.
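
Roughly, with placeholder names and an illustrative size (bluefs-bdev-expand makes BlueFS pick up the enlarged LV and needs the OSD stopped; the compaction can then be triggered online):

    lvextend -L +64G <db-vg>/<db-lv>        # illustrative size, must cover the spilled data
    systemctl stop ceph-osd@86
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-86
    systemctl start ceph-osd@86
    ceph tell osd.86 compact                # online compaction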

Hope this helps.


Thanks,

Igor
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



