Awesome, that did it. I'm considering creating a separate Bareos device with striping, testing there, and then phasing out the old non-striped pool... Maybe that would also fix the suboptimal throughput... But from the Ceph side of things, it looks like I'm good now. Thanks again :)

Cheers,

Martin

-----Original Message-----
From: Jens Rosenboom [mailto:j.rosenboom@xxxxxxxx]
Sent: Tuesday, July 4, 2017 14:42
To: Martin Emrich <martin.emrich@xxxxxxxxxxx>
Cc: Gregory Farnum <gfarnum@xxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Rados maximum object size issue since Luminous?

2017-07-04 12:10 GMT+00:00 Martin Emrich <martin.emrich@xxxxxxxxxxx>:
...
> So as striping is not backwards-compatible (and this pool is indeed for backup/archival purposes where large objects are no problem):
>
> How can I restore the behaviour of Jewel (allowing 50 GB objects)?
>
> The only option I found was "osd max write size", but that does not seem to be the right one, as its default of 90 MB is lower than my observed limit of 128 MB.

That should be osd_max_object_size, see https://github.com/ceph/ceph/pull/15520
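
For reference, a minimal sketch of how the limit could be raised back to the Jewel-era behaviour, assuming osd_max_object_size is the only knob involved; the 50 GiB value just mirrors the object size discussed above and is an example, not a recommendation, and whether the option can be changed at runtime may differ per release:

    # ceph.conf on the OSD nodes (value is in bytes; 53687091200 = 50 GiB)
    [osd]
    osd max object size = 53687091200

    # Or, if the option is runtime-changeable, inject it without a restart:
    ceph tell osd.* injectargs '--osd_max_object_size=53687091200'

Clients writing a single object larger than this will still get EFBIG rejected by the OSDs until the new value is in effect everywhere, so setting it in ceph.conf and restarting the OSDs is the safer route for a permanent change.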