Hello.
In my lab, a Nautilus cluster with BlueStore OSDs suddenly went dark. As I
found out, it had used 98% of its space and most of the OSDs (small, 10 GB
each) went offline. Any attempt to restart them failed with this message:
# /usr/bin/ceph-osd -f --cluster ceph --id 18 --setuser ceph --setgroup
ceph
2019-10-31 09:44:37.591 7f73d54b3f80 -1 osd.18 271 log_to_monitors
{default=true}
2019-10-31 09:44:37.615 7f73bff99700 -1
bluestore(/var/lib/ceph/osd/ceph-18) _do_alloc_write failed to allocate
0x10000 allocated 0x ffffffffffffffe4 min_alloc_size 0x10000 available 0x 0
2019-10-31 09:44:37.615 7f73bff99700 -1
bluestore(/var/lib/ceph/osd/ceph-18) _do_write _do_alloc_write failed
with (28) No space left on device
2019-10-31 09:44:37.615 7f73bff99700 -1
bluestore(/var/lib/ceph/osd/ceph-18) _txc_add_transaction error (28) No
space left on device not handled on operation 10 (op 30, counting from 0)
2019-10-31 09:44:37.615 7f73bff99700 -1
bluestore(/var/lib/ceph/osd/ceph-18) ENOSPC from bluestore,
misconfigured cluster
/build/ceph-14.2.4/src/os/bluestore/BlueStore.cc: In function 'void
BlueStore::_txc_add_transaction(BlueStore::TransContext*,
ObjectStore::Transaction*)' thread 7f73bff99700 time 2019-10-31
09:44:37.620694
/build/ceph-14.2.4/src/os/bluestore/BlueStore.cc: 11455:
ceph_abort_msg("unexpected error")
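(For what it's worth, the impending ENOSPC is visible beforehand in the
utilization columns of the standard OSD report; this is a generic check,
not anything specific to this crash:)

```shell
# Show per-OSD utilization; %USE approaching the full ratio
# (95% by default) means writes are about to start failing.
ceph osd df tree
```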
I was able to recover the cluster by adding some more space to the VGs
backing some of the OSDs and running this command:
ceph-bluestore-tool --log-level 30 --path /var/lib/ceph/osd/ceph-xx
--command bluefs-bdev-expand
It worked, but only because I added space to the OSDs.
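For completeness, the full recovery sequence on an LVM-backed OSD looked
roughly like this (a sketch; the VG/LV names and the amount of extra space
are placeholders for my setup, not values taken from the error output):

```shell
# Make sure systemd is not restarting the crashed OSD in a loop.
systemctl stop ceph-osd@18

# Grow the logical volume backing the OSD's block device
# (VG/LV names are examples; adjust to your layout).
lvextend -L +2G /dev/ceph-vg/osd-block-18

# Tell BlueFS/BlueStore about the enlarged device.
ceph-bluestore-tool --log-level 30 \
    --path /var/lib/ceph/osd/ceph-18 \
    --command bluefs-bdev-expand

# Start the OSD again.
systemctl start ceph-osd@18
```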
I'm curious: is there a way to recover such an OSD without growing it? On
the old FileStore I could simply remove some objects to free space; is this
possible with BlueStore? My main concern is that the OSD daemon simply
crashes at startup, so I can't just add more OSDs to the cluster: all data
becomes unavailable because the existing OSDs are completely dead.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com