Re: Fixing BlueFS spillover (pacific 16.2.14)

Hi Igor,

Thanks for the suggestions. You may have already seen my follow-up message, where the solution was to use "ceph-bluestore-tool bluefs-bdev-migrate" to move the lingering 128 KiB of data from the slow to the fast device. I wonder whether your suggested "ceph-volume lvm migrate" would do the same.
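
For reference, the invocation was roughly as follows (the OSD data path shown assumes a traditional non-containerized layout with block/block.db symlinks; adjust to your deployment):

    systemctl stop ceph-osd@76                    # the OSD must be offline
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-76 \
        --devs-source /var/lib/ceph/osd/ceph-76/block \
        --dev-target /var/lib/ceph/osd/ceph-76/block.db
    systemctl start ceph-osd@76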

Notably, DB compaction didn't help in my case.

Cheers,

Chris

On Mon, Oct 16, 2023 at 03:46:17PM +0300, Igor Fedotov wrote:
Hi Chris,

For the first question (osd.76), you might want to try ceph-volume's "lvm migrate --from data --target <db lvm>" command. It looks like some persistent DB remnants are still kept on the main device, which is what triggers the alert.
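
For illustration, a complete invocation would look roughly like this (the osd fsid and the DB VG/LV names are placeholders; the OSD has to be stopped while migrating):

    systemctl stop ceph-osd@76
    ceph-volume lvm migrate --osd-id 76 --osd-fsid <osd fsid> \
        --from data --target <db vg>/<db lv>
    systemctl start ceph-osd@76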

W.r.t. osd.86's question: the line "SLOW        0 B         3.0 GiB     59 GiB" means that RocksDB's higher-level data (usually L3+) is spread over the DB and main (aka slow) devices as 3 GiB and 59 GiB respectively.
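
If I recall correctly, that row comes from the BlueFS usage matrix, which you can dump from a running OSD with something like:

    ceph daemon osd.86 bluefs stats     # on the OSD host, via the admin socket
    # or remotely:
    ceph tell osd.86 bluefs stats

where the value columns show on which physical device (WAL / DB / slow) the data actually resides.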

In other words, the SLOW row refers to DB data that is ordinarily supposed to live on the slow device (due to RocksDB's level-to-device mapping), but the improved BlueFS logic introduced by https://github.com/ceph/ceph/pull/29687 permits part of this data to occupy spare space on the DB device.

Resizing the DB volume followed by a DB compaction should do the trick and move all of that data to the DB device. Alternatively, ceph-volume's lvm migrate command should achieve the same, but without resizing the DB volume the result will only be temporary.
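
A rough sketch of that sequence, assuming the DB sits on an LVM volume with free space left in its VG (the LV names and the extra size are purely illustrative):

    systemctl stop ceph-osd@86
    lvextend -L +30G <db vg>/<db lv>              # grow the DB LV
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-86
    systemctl start ceph-osd@86
    ceph tell osd.86 compact                      # trigger an online RocksDB compaction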

Hope this helps.


Thanks,

Igor
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



