Re: Pacific: RadosGW crashing on multipart uploads.

Hi Vincent

It may be Bug 50556 <https://tracker.ceph.com/issues/50556>. I am having this problem as well, although I don't think the characters in the bucket name are relevant.

Backport 51001 <https://tracker.ceph.com/issues/51001> has just been updated, so it looks as though the fix will be in 16.2.5.

At a glance your symptoms sound similar, but I'm not sure whether the crash info is the same.

Regards, Chris

On 29/06/2021 22:35, Chu, Vincent wrote:
Hi, I'm running into an issue with RadosGW where multipart uploads crash, but only on buckets with a hyphen, period or underscore in the bucket name and with a bucket policy applied. We've tested this in pacific 16.2.3 and pacific 16.2.4.


Has anyone run into this before?


ubuntu@ubuntu:~/ubuntu$ aws --endpoint-url http://placeholder.com:7480 s3 cp ubuntu.iso s3://bucket.test

upload failed: ./ubuntu.iso to s3://bucket.test/ubuntu.iso Connection was closed before we received a valid response from endpoint URL: "http://placeholder.com:7480/bucket.test/ubuntu.iso?uploads".


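In case it helps anyone reproduce this, the setup is roughly the following. The endpoint, bucket name, and policy below are placeholders (the principal ARN is the one that shows up in the crash log), and the policy is only an illustrative sketch, not the exact one from our cluster:

    # create a bucket whose name contains a period
    aws --endpoint-url http://placeholder.com:7480 s3 mb s3://bucket.test

    # attach a simple bucket policy (illustrative example only)
    cat > policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::default:user"]},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::bucket.test/*"]
      }]
    }
    EOF
    aws --endpoint-url http://placeholder.com:7480 s3api put-bucket-policy \
        --bucket bucket.test --policy file://policy.json

    # any upload large enough to go multipart (the CLI default threshold is
    # around 8 MB) then fails with the connection-closed error shown above
    aws --endpoint-url http://placeholder.com:7480 s3 cp ubuntu.iso s3://bucket.test
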

Here is the crash log.

    -12> 2021-06-29T20:44:10.940+0000 7fae1f4ec700  1 ====== starting new request req=0x7fadf8998620 =====
    -11> 2021-06-29T20:44:10.940+0000 7fae1f4ec700  2 req 2403 0.000000000s initializing for trans_id = tx000000000000000000963-0060db861a-17e77ee-default
    -10> 2021-06-29T20:44:10.940+0000 7fae1f4ec700  2 req 2403 0.000000000s getting op 4
     -9> 2021-06-29T20:44:10.940+0000 7fae1f4ec700  2 req 2403 0.000000000s s3:init_multipart verifying requester
     -8> 2021-06-29T20:44:10.948+0000 7fae1f4ec700  2 req 2403 0.008000608s s3:init_multipart normalizing buckets and tenants
     -7> 2021-06-29T20:44:10.948+0000 7fae1f4ec700  2 req 2403 0.008000608s s3:init_multipart init permissions
     -6> 2021-06-29T20:44:10.954+0000 7faedf66c700  0 Supplied principal is discarded: arn:aws:iam::default:user
     -5> 2021-06-29T20:44:10.954+0000 7faedf66c700  2 req 2403 0.014001064s s3:init_multipart recalculating target
     -4> 2021-06-29T20:44:10.954+0000 7faedf66c700  2 req 2403 0.014001064s s3:init_multipart reading permissions
     -3> 2021-06-29T20:44:10.954+0000 7faedf66c700  2 req 2403 0.014001064s s3:init_multipart init op
     -2> 2021-06-29T20:44:10.954+0000 7faedf66c700  2 req 2403 0.014001064s s3:init_multipart verifying op mask
     -1> 2021-06-29T20:44:10.955+0000 7faedf66c700  2 req 2403 0.015001140s s3:init_multipart verifying op permissions
      0> 2021-06-29T20:44:10.964+0000 7faedf66c700 -1 *** Caught signal (Segmentation fault) **
  in thread 7faedf66c700 thread_name:radosgw

  ceph version 16.2.3 (381b476cb3900f9a92eb95d03b4850b953cfd79a) pacific (stable)
  1: /lib64/libpthread.so.0(+0x12b20) [0x7faf2dd05b20]
  2: (rgw_bucket::rgw_bucket(rgw_bucket const&)+0x23) [0x7faf38b4d083]
  3: (rgw::sal::RGWObject::get_obj() const+0x20) [0x7faf38b7bcf0]
  4: (RGWInitMultipart::verify_permission(optional_yield)+0x6c) [0x7faf38e6608c]
  5: (rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, optional_yield, bool)+0x86a) [0x7faf38b2db1a]
  6: (process_request(rgw::sal::RGWRadosStore*, RGWREST*, RGWRequest*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSocket*, optional_yield, rgw::dmclock::Scheduler*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >*, int*)+0x26dd) [0x7faf38b3232d]
  7: /lib64/libradosgw.so.2(+0x4a1c0b) [0x7faf38a83c0b]
  8: /lib64/libradosgw.so.2(+0x4a36a4) [0x7faf38a856a4]
  9: /lib64/libradosgw.so.2(+0x4a390e) [0x7faf38a8590e]
  10: make_fcontext()
  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
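If it helps, the crash should also be recorded by the cluster's crash module, so the same backtrace and metadata can be pulled again with something like (<crash-id> being whatever ID the first command reports for this event):

    ceph crash ls
    ceph crash info <crash-id>
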




--

Vincent Chu

A-4: Advanced Research in Cyber Systems

Los Alamos National Laboratory
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


