You've got a ~300 MB object in there. BlueStore's default limit is 128 MiB; the option that controls it is osd_max_object_size.
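The byte counts in the scrub error line up exactly with that default. A quick sanity check on the numbers (values copied verbatim from the log below):

```python
# Values taken verbatim from the scrub error in the quoted mail.
object_size = 333447168    # reported size of the gparted ISO object, bytes
default_limit = 134217728  # BlueStore default osd_max_object_size, bytes

MiB = 1024 * 1024
print(object_size // MiB)           # 318 -> the "~300MB" object
print(default_limit // MiB)         # 128 -> the 128 MiB default
print(object_size > default_limit)  # True, hence the scrub error
```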
I think the scrub warning is new (or was backported), so the object probably predates BlueStore on this cluster and is only now showing up as a warning.
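If you want to confirm this on your cluster, something along these lines should work (a sketch assuming Mimic or later with the centralized config; the pool name is a placeholder):

```shell
# Show the currently effective limit for OSDs (in bytes).
ceph config get osd osd_max_object_size

# Check the size of the offending object directly; replace <pool>
# with the pool that PG 8.2d belongs to.
rados -p <pool> stat gparted-live-0.31.0-1-amd64.iso

# If you decide to keep objects this large, the limit can be raised,
# e.g. to 512 MiB (value given in bytes):
ceph config set osd osd_max_object_size 536870912
```

The safer long-term fix is usually to store data that large through a striping client (RBD, CephFS, or RGW) rather than as a single RADOS object.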
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Thu, Jan 2, 2020 at 9:39 PM Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx> wrote:
Hi,
when Ceph is running a deep scrub it sometimes complains about an object being too large:
Dez 17 07:11:44 al23 ceph-osd[3339]: 2019-12-17 07:11:44.066 7f8d38064700 0 log_channel(cluster) log [DBG] : 8.2d scrub starts
Dez 17 07:11:44 al23 ceph-osd[3339]: 2019-12-17 07:11:44.090 7f8d38064700 -1 log_channel(cluster) log [ERR] : 8.2d soid 8:b431ae8d:::gparted-live-0.31.0-1-amd64.iso:head : size 333447168 > 134217728 is too large
Dez 17 07:11:44 al23 ceph-osd[3339]: 2019-12-17 07:11:44.102 7f8d38064700 -1 log_channel(cluster) log [ERR] : 8.2d scrub 0 missing, 1 inconsistent objects
All OSDs are BlueStore.
What is happening here?
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx