Re: Kraken rgw lifecycle processing nightly crash

Looks like Wei found and fixed this in https://github.com/ceph/ceph/pull/16495

Thanks Wei! 

This has been causing crashes for us since May. I guess it shows that not many folks use Kraken with lifecycles yet, but more certainly will with Luminous.

-Ben

On Fri, Jul 21, 2017 at 7:19 AM, Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:
On 07/20/2017 04:48 PM, Ben Hines wrote:
Still having this RGWLC crash once a day or so. I do plan to update to Luminous as soon as that is final, but it's possible this issue will still occur, so I was hoping one of the devs could take a look at it.

My original suspicion was that it happens when lifecycle processing runs at the same time as the morning log rotation, but I am not certain about that, so perhaps the bug title should be updated to remove that conclusion. (I can't edit it.)

http://tracker.ceph.com/issues/19956 - no activity for 2 months.

Stack with symbols:

#0  0x00007f6a6cb1723b in raise () from /lib64/libpthread.so.0
#1  0x00007f6a778b9e95 in reraise_fatal (signum=11) at /usr/src/debug/ceph-11.2.0/src/global/signal_handler.cc:72
#2  handle_fatal_signal (signum=11) at /usr/src/debug/ceph-11.2.0/src/global/signal_handler.cc:134
#3  <signal handler called>
#4  RGWGC::add_chain (this=this@entry=0x0, op=..., chain=..., tag="default.68996150.61684839") at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_gc.cc:58
#5  0x00007f6a77801e3f in RGWGC::send_chain (this=0x0, chain=..., tag="default.68996150.61684839", sync=sync@entry=false)
    at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_gc.cc:64

Here, this (the RGWGC, or store->gc) is NULL, so that's the problem.  I have no idea how the store isn't initialized, though.

#6  0x00007f6a776c0a29 in RGWRados::Object::complete_atomic_modification (this=0x7f69cc8578d0) at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_rados.cc:7870
#7  0x00007f6a777102a0 in RGWRados::Object::Delete::delete_obj (this=this@entry=0x7f69cc857840) at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_rados.cc:8295
#8  0x00007f6a77710ce8 in RGWRados::delete_obj (this=<optimized out>, obj_ctx=..., bucket_info=..., obj=..., versioning_status=0, bilog_flags=<optimized out>,
    expiration_time=...) at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_rados.cc:8330
#9  0x00007f6a77607ced in rgw_remove_object (store=0x7f6a810fe000, bucket_info=..., bucket=..., key=...) at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_bucket.cc:519
#10 0x00007f6a7780c971 in RGWLC::bucket_lc_process (this=this@entry=0x7f6a81959c00, shard_id=":globalcache307:default.42048218.11")
    at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_lc.cc:283
#11 0x00007f6a7780d928 in RGWLC::process (this=this@entry=0x7f6a81959c00, index=<optimized out>, max_lock_secs=max_lock_secs@entry=60)
    at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_lc.cc:482
#12 0x00007f6a7780ddc1 in RGWLC::process (this=0x7f6a81959c00) at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_lc.cc:412
#13 0x00007f6a7780e033 in RGWLC::LCWorker::entry (this=0x7f6a81a820d0) at /usr/src/debug/ceph-11.2.0/src/rgw/rgw_lc.cc:51
#14 0x00007f6a6cb0fdc5 in start_thread () from /lib64/libpthread.so.0
#15 0x00007f6a6b37073d in clone () from /lib64/libc.so.6
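
To illustrate the point above about store->gc being NULL: the sketch below is purely illustrative, using made-up FakeGC/FakeStore classes rather than the real RGWGC/RGWRados, and it does not claim to reproduce what PR 16495 actually does. It just shows how an uninitialized gc member leads straight to the null dereference seen in frames #4/#5, and the kind of defensive guard that avoids it.

// Hypothetical, simplified sketch -- not the actual Ceph classes.
#include <iostream>
#include <memory>
#include <string>

struct FakeGC {                       // stand-in for RGWGC
  void send_chain(const std::string& tag) {
    std::cout << "queued gc chain for tag " << tag << "\n";
  }
};

struct FakeStore {                    // stand-in for RGWRados
  std::unique_ptr<FakeGC> gc;         // stays null unless init_gc() runs

  void init_gc() { gc = std::make_unique<FakeGC>(); }

  // Loosely mirrors complete_atomic_modification(): without the guard,
  // calling gc->send_chain() on a null gc is the crash in the trace.
  void complete_modification(const std::string& tag) {
    if (!gc) {                        // defensive guard (illustrative only;
      std::cerr << "gc not initialized, skipping chain for " << tag << "\n";
      return;                         //  the real fix may differ)
    }
    gc->send_chain(tag);
  }
};

int main() {
  FakeStore store;
  store.complete_modification("default.68996150.61684839"); // guarded, no crash
  store.init_gc();
  store.complete_modification("default.68996150.61684839"); // normal path
}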


Daniel

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
