We have also encountered this exact backtrace on 17.2.6, likewise in combination with Veeam backups. I suspect a regression: we had no issues before the update, and all of our other clusters, which are still running 17.2.5 with Veeam backups, do not appear to be affected.

--
Matthias Grandl
matthias.grandl@xxxxxxxx

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

> On 20. Jul 2023, at 19:18, xadhoom76@xxxxxxxxx wrote:
>
> Hi, we have a service that is still crashing when the S3 client (Veeam Backup) starts to write data.
>
> Main log from the rgw service:
>
> req 13170422438428971730 0.008000086s s3:get_obj WARNING: couldn't find acl header for object, generating default
> 2023-07-20T14:36:45.331+0000 7fa5adb4c700 -1 *** Caught signal (Aborted) **
>
> And:
>
> 2023-07-19T22:04:15.968+0000 7ff07305b700  1 beast: 0x7fefc7178710: 172.16.199.11 - veeam90 [19/Jul/2023:22:04:15.948 +0000] "PUT /veeam90/Veeam/Backup/veeam90/Clients/%7Bd14cd688-57b4-4809-a1d9-14cafd191b11%7D/34387bbd-bec9-4a40-a04d-6a890d5d6407/CloudStg/Data/%7Bf687ee0f-fb50-4ded-b3a8-3f67ca7f244b%7D/%7B6f31c277-734c-46fd-98d5-c560aa6dc776%7D/144113_f3fd31c9ee2a45aeeadda0de3cbc9064_00000000000000000000000000000000 HTTP/1.1" 200 63422 - "APN/1.0 Veeam/1.0 Backup/12.0" - latency=0.020000216s
> 2023-07-19T22:04:15.972+0000 7ff08307b700  1 ====== starting new request req=0x7fefc7682710 =====
> 2023-07-19T22:04:15.972+0000 7ff087083700  1 ====== starting new request req=0x7fefc737c710 =====
> 2023-07-19T22:04:15.972+0000 7ff071057700  1 ====== starting new request req=0x7fefc72fb710 =====
> 2023-07-19T22:04:15.972+0000 7ff0998a8700  1 ====== starting new request req=0x7fefc71f9710 =====
> 2023-07-19T22:04:15.972+0000 7fefe473e700 -1 *** Caught signal (Aborted) **
> in thread 7fefe473e700 thread_name:radosgw
>
> ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
> 1: /lib64/libpthread.so.0(+0x12cf0) [0x7ff102d62cf0]
> 2: gsignal()
> 3: abort()
> 4: /lib64/libstdc++.so.6(+0x9009b) [0x7ff101d5209b]
> 5: /lib64/libstdc++.so.6(+0x9653c) [0x7ff101d5853c]
> 6: /lib64/libstdc++.so.6(+0x95559) [0x7ff101d57559]
> 7: __gxx_personality_v0()
> 8: /lib64/libgcc_s.so.1(+0x10b03) [0x7ff101736b03]
> 9: _Unwind_Resume()
> 10: /lib64/libradosgw.so.2(+0x538c5b) [0x7ff105246c5b]
>
> ----------------------
>
> -10> 2023-07-19T22:04:15.972+0000 7ff071057700  2 req 8167590275148061076 0.000000000s s3:put_obj pre-executing
>  -9> 2023-07-19T22:04:15.972+0000 7ff071057700  2 req 8167590275148061076 0.000000000s s3:put_obj check rate limiting
>  -8> 2023-07-19T22:04:15.972+0000 7ff071057700  2 req 8167590275148061076 0.000000000s s3:put_obj executing
>  -7> 2023-07-19T22:04:15.972+0000 7ff0998a8700  1 ====== starting new request req=0x7fefc71f9710 =====
>  -6> 2023-07-19T22:04:15.972+0000 7ff0998a8700  2 req 15658207768827051601 0.000000000s initializing for trans_id = tx00000d94d21014832be51-0064b85ddf-3dfe-backup
>  -5> 2023-07-19T22:04:15.972+0000 7ff0998a8700  2 req 15658207768827051601 0.000000000s getting op 1
>  -4> 2023-07-19T22:04:15.972+0000 7ff0998a8700  2 req 15658207768827051601 0.000000000s s3:put_obj verifying requester
>  -3> 2023-07-19T22:04:15.972+0000 7ff0998a8700  2 req 15658207768827051601 0.000000000s s3:put_obj normalizing buckets and tenants
>  -2> 2023-07-19T22:04:15.972+0000 7ff0998a8700  2 req 15658207768827051601 0.000000000s s3:put_obj init permissions
>  -1> 2023-07-19T22:04:15.972+0000 7ff011798700  2 req 15261257039771290446 0.024000257s s3:put_obj completing
>   0> 2023-07-19T22:04:15.972+0000 7fefe473e700 -1 *** Caught signal (Aborted) **
> in thread 7fefe473e700 thread_name:radosgw
>
> Has anyone else seen this issue?
> Thanks
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
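In case it helps with triage, a generic sketch of how one might pull the full crash report and capture a more verbose log around the abort. These are standard Ceph CLI commands run against a live cluster; the crash ID is a placeholder, and the debug levels shown are just suggestions, not the values we used:

```shell
# List daemon crashes recorded by the mgr crash module
ceph crash ls

# Dump the full metadata and backtrace for one crash (ID is a placeholder)
ceph crash info <crash-id>

# Temporarily raise RGW logging while reproducing the Veeam PUT workload
ceph config set client.rgw debug_rgw 20
ceph config set client.rgw debug_ms 1

# Revert to the defaults afterwards
ceph config rm client.rgw debug_rgw
ceph config rm client.rgw debug_ms
```

A `debug_rgw 20` log around the "Caught signal (Aborted)" line, plus the `ceph crash info` output, would give whoever picks this up on the tracker much more to work with than the stripped backtrace above.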