Copying out bluestore's rocksdb, compacting it, then putting it back in - Mimic 13.2.6/13.2.8

Hi,

I have a weird situation where an OSD's rocksdb fails to compact because the OSD became completely full and the osd-full-ratio was set to 1.0 (not a good idea, I know).
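For anyone else reading: the safer setup would have been to keep the cluster-wide thresholds below 1.0, so the OSD stops taking writes before BlueFS itself fills up. Roughly (example values, not what I had configured):

   # keep headroom so OSDs refuse writes before the disk is 100% used
   ceph osd set-nearfull-ratio 0.85
   ceph osd set-full-ratio 0.95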

Hitting "bluefs enospc" while compacting:
   -376> 2019-12-18 15:48:16.492 7f2e0a5ac700  1 bluefs _allocate failed to allocate 0x40da486 on bdev 1, free 0x38b0000; fallback to bdev 2
   -376> 2019-12-18 15:48:16.492 7f2e0a5ac700 -1 bluefs _allocate failed to allocate 0x on bdev 2, dne
   -376> 2019-12-18 15:48:16.492 7f2e0a5ac700 -1 bluefs _flush_range allocated: 0x0 offset: 0x0 length: 0x40da486
   -376> 2019-12-18 15:48:16.500 7f2e0a5ac700 -1 /build/ceph-13.2.8/src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7f2e0a5ac700 time 2019-12-18 15:48:16.499599
   /build/ceph-13.2.8/src/os/bluestore/BlueFS.cc: 1704: FAILED assert(0 == "bluefs enospc")

So my idea is to copy rocksdb out somewhere else (bluefs-export), compact it, then copy it back in. Is there a way to do this? Mounting bluefs seems to be part of the OSD code, so there doesn't seem to be an easy way to do it.
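For the first two steps, something like this is what I have in mind (untested sketch; the OSD id and paths are just examples):

   # export all of BlueFS (db/, db.wal/, db.slow/) to a scratch location
   ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-13 \
       --out-dir /mnt/scratch/osd-13

   # compact the exported rocksdb in place; the WAL files from db.wal/
   # may need to be moved into db/ first so rocksdb can open the store
   ceph-kvstore-tool rocksdb /mnt/scratch/osd-13/db compact

It's the third step, getting the compacted db back into BlueFS, that I can't find a tool for.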

Because the OSD died at 100% full, I can't do bluefs-bdev-expand, and repair/fsck fail too.
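If the OSD sat on LVM with spare extents in the VG, I'd expect growing the device first to be the normal way out, roughly (made-up device names):

   # grow the underlying block device, then let BlueFS claim the new space
   lvextend -L +10G /dev/ceph-vg/osd-13-block
   ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-13

But this device can't grow, so that's out.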

Thanks in advance.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
