Bareos needs to be rewritten to use libradosstriper, or it should internally shard the data across multiple objects. Objects shouldn't be stored at that size, and performance will suffer as well.
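For what it's worth, below is a rough sketch of what a write path through the striper API could look like instead of plain rados_write(). This is only an illustration and not bareos code; the pool name, object name and layout numbers are invented, and it assumes the usual C API from radosstriper/libradosstriper.h. The striper transparently splits one logical object over many fixed-size RADOS objects, so no single object ever gets anywhere near the 4GB mark.

/* Sketch only, not bareos code.  Pool "backup", object "backup-volume-0001"
 * and the layout values are made up.  Build with: cc sketch.c -lrados -lradosstriper */
#include <rados/librados.h>
#include <radosstriper/libradosstriper.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rados_striper_t striper;
    static char buf[4 * 1024 * 1024];   /* 4 MB chunk */
    uint64_t off;

    memset(buf, 'x', sizeof(buf));

    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, NULL) < 0 ||
        rados_connect(cluster) < 0)
        return 1;
    if (rados_ioctx_create(cluster, "backup", &io) < 0)
        return 1;
    if (rados_striper_create(io, &striper) < 0)
        return 1;

    /* The logical "volume" is striped over many 64 MB RADOS objects, so no
     * single object grows large enough to hit the BlueStore limit. */
    rados_striper_set_object_layout_object_size(striper, 64 * 1024 * 1024);
    rados_striper_set_object_layout_stripe_unit(striper, 4 * 1024 * 1024);
    rados_striper_set_object_layout_stripe_count(striper, 1);

    /* Write 1 GB of data at increasing offsets into one logical object. */
    for (off = 0; off < 1ULL * 1024 * 1024 * 1024; off += sizeof(buf)) {
        if (rados_striper_write(striper, "backup-volume-0001", buf,
                                sizeof(buf), off) < 0) {
            fprintf(stderr, "write failed at offset %llu\n",
                    (unsigned long long)off);
            break;
        }
    }

    rados_striper_destroy(striper);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

The application-level sharding alternative amounts to the same idea done by hand: cap each object at some modest size and spread the volume across many of them.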
From: ceph-users [mailto:ceph-users-bounces@lists.ceph.com] On Behalf Of Alexander Kushnirenko
Sent: 26 September 2017 13:50
To: ceph-users@xxxxxxxxxxxxxx
Subject: osd crashes with large object size (>10GB) in Luminous Rados
Hello,
We successfully use rados to store backup volumes on the Jewel version of Ceph. Typical volume size is 25-50GB. The backup software (Bareos) uses Rados objects as backup volumes and it works fine. Recently we tried Luminous for the same purpose.
In Luminous the developers reduced osd_max_object_size from 100G to 128M, as I understand for performance reasons. This broke the interaction with the Bareos backup software. You can revert osd_max_object_size to 100G, but then the OSDs start to crash once you put objects of about 4GB in size (4,294,951,051 bytes).
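To illustrate, a minimal sketch of the kind of write pattern involved (this is not bareos code; the pool and object names are made up): one object written in 4 MB chunks at increasing offsets, which per the behaviour described above fails once the offset approaches roughly 4GB.

/* Minimal sketch, not bareos code.  Pool "backup" and object "test-volume"
 * are made up.  Build with: cc repro.c -lrados */
#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    static char buf[4 * 1024 * 1024];   /* 4 MB chunk */
    uint64_t off;
    int r = 0;

    memset(buf, 'x', sizeof(buf));

    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, NULL) < 0 ||
        rados_connect(cluster) < 0) {
        fprintf(stderr, "cannot connect to cluster\n");
        return 1;
    }
    if (rados_ioctx_create(cluster, "backup", &io) < 0) {
        fprintf(stderr, "cannot open pool\n");
        rados_shutdown(cluster);
        return 1;
    }

    /* Grow a single object up to 5 GB, i.e. past the ~4 GB point where the
     * write is rejected and the OSD asserts as shown in the log below. */
    for (off = 0; off < 5ULL * 1024 * 1024 * 1024; off += sizeof(buf)) {
        r = rados_write(io, "test-volume", buf, sizeof(buf), off);
        if (r < 0) {
            fprintf(stderr, "write failed at offset %llu: %d\n",
                    (unsigned long long)off, r);
            break;
        }
    }

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return r < 0 ? 1 : 0;
}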
Any suggestions on how to approach this problem?
Alexander.
Sep 26 15:12:58 ceph02 ceph-osd[1417]: /build/ceph-12.2.0/src/os/bluestore/BlueStore.cc: In function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)' thread 7f04ac2f9700 time 2017-09-26 15:12:58.230268
Sep 26 15:12:58 ceph02 ceph-osd[1417]: /build/ceph-12.2.0/src/os/bluestore/BlueStore.cc: 9282: FAILED assert(0 == "unexpected error")
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 2017-09-26 15:12:58.229837 7f04ac2f9700 -1 bluestore(/var/lib/ceph/osd/ceph-0) _txc_add_transaction error (7) Argument list too long not handled on operation 10 (op 1, counting from 0)
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 2017-09-26 15:12:58.229869 7f04ac2f9700 -1 bluestore(/var/lib/ceph/osd/ceph-0) unexpected error code
Sep 26 15:12:58 ceph02 ceph-osd[1417]: ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x563c7b5f83a2]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 2: (BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x15fa) [0x563c7b4ac2ba]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 3: (BlueStore::queue_transactions(ObjectStore::Sequencer*, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x536) [0x563c7b4ad916]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 4: (PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x66) [0x563c7b1d17f6]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 5: (ReplicatedBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&, std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> > const&, boost::optional<pg_hit_set_history_t>&, Context*, Context*, Context*, unsigned long, osd_reqid_t, boost::intrusive_ptr<OpRequest>)+0xcbf) [0x563c7b30436f]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 6: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0x9fa) [0x563c7b16d68a]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 7: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x131d) [0x563c7b1b7a5d]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 8: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x2ece) [0x563c7b1bb26e]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 9: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xea6) [0x563c7b175446]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 10: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x3ab) [0x563c7aff919b]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 11: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest> const&)+0x5a) [0x563c7b29154a]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x103d) [0x563c7b01fd9d]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x8ef) [0x563c7b5fd20f]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x563c7b600510]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 15: (()+0x7494) [0x7f04c56e2494]
Sep 26 15:12:58 ceph02 ceph-osd[1417]: 16: (clone()+0x3f) [0x7f04c4769aff]
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com