Fwd: BlueFS assertion ceph_assert(h->file->fnode.ino != 1)

Hi all,
I wonder if anyone else has faced the issue described in the tracker: https://tracker.ceph.com/issues/45519
 
Until today we thought this problem was caused by high OSD fragmentation, but now even OSDs with a fragmentation rating below 0.3 are affected. We don't use a separate DB/WAL partition in this setup, and log lines like these just before the failure:
2020-07-25 11:08:22.961 7f6f489d5700  1 bluefs _allocate failed to allocate 0x33dd4c5 on bdev 1, free 0x2bc0000; fallback to bdev 2
2020-07-25 11:08:22.961 7f6f489d5700  1 bluefs _allocate unable to allocate 0x33dd4c5 on bdev 2, free 0xffffffffffffffff; fallback to slow device expander
look suspicious to us.
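(For anyone who wants to compare numbers: the fragmentation rating above is read from the allocator score command on the OSD admin socket, roughly like this, with the OSD id as a placeholder:
ceph daemon osd.<id> bluestore allocator score block
which reports a fragmentation_rating for the main device.)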
 
We use 4KiB bluefs and bluestore block sizes and store objects of roughly 1KiB, and it looks like this makes the issue reproduce much more frequently. But, judging by the tracker and Telegram channels, other people hit it from time to time as well, for example: https://paste.ubuntu.com/p/GDCXDrnrtX/ (Telegram link: https://t.me/ceph_users/376).
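(For clarity, by 4KiB block sizes I mean roughly the following OSD options; the values here are illustrative rather than a copy of our ceph.conf:
bluestore_min_alloc_size = 4096
bluefs_alloc_size = 4096
)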
 
Has anyone been able to identify the root cause and/or find a workaround for it?
 
BTW, Ceph would be nice small-object storage, showing 300-500usec latency, if not for this issue and https://tracker.ceph.com/issues/45765.
-- 
Regards,
Aleksei Zakharov
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
