On Wed, Aug 30, 2017 at 5:44 PM, Bryan Banister <bbanister@xxxxxxxxxxxxxxx> wrote:
> Not sure what's happening, but we started to put a decent load on the RGWs
> we have set up, and we were seeing failures with the following kind of
> fingerprint:
>
> 2017-08-29 17:06:22.072361 7ffdc501a700  1 rgw realm reloader: Frontends
> paused

Are you modifying configuration? It could be that something is sending a HUP
signal to the radosgw process. We disabled this behavior (dynamic process
reconfiguration after HUP) in 12.2.0.

Yehuda

> 2017-08-29 17:06:22.072359 7fffacbe9700  1 civetweb: 0x555556add000:
> 7.128.12.19 - - [29/Aug/2017:16:47:36 -0500] "PUT
> /blah?partNumber=8&uploadId=2~L9MEmUUmZKb2y8JCotxo62yzdMbHmye HTTP/1.1" 1 0
> - Minio (linux; amd64) minio-go/3.0.0
>
> 2017-08-29 17:06:22.072438 7fffcb426700  0 ERROR: failed to clone shard,
> completion_mgr.get_next() returned ret=-125
>
> 2017-08-29 17:06:23.689610 7ffdc501a700  1 rgw realm reloader: Store closed
>
> 2017-08-29 17:06:24.117630 7ffdc501a700  1 failed to decode the mdlog
> history: buffer::end_of_buffer
>
> 2017-08-29 17:06:24.117635 7ffdc501a700  1 failed to read mdlog history:
> (5) Input/output error
>
> 2017-08-29 17:06:24.118711 7ffdc501a700  1 rgw realm reloader: Creating new
> store
>
> 2017-08-29 17:06:24.118901 7ffdc501a700  1 mgrc service_daemon_register
> rgw.carf-ceph-osd01 metadata {arch=x86_64,ceph_version=ceph version 12.1.4
> (a5f84b37668fc8e03165aaf5cbb380c78e4deba4) luminous (rc),cpu=Intel(R)
> Xeon(R) CPU E5-2680 v4 @ 2.40GHz,distro=rhel,distro_description=Red Hat
> Enterprise Linux Server 7.3 (Maipo),distro_version=7.3,
> frontend_config#0=civetweb port=80 num_threads=1024,
> frontend_type#0=civetweb,hostname=carf-ceph-osd01,
> kernel_description=#1 SMP Tue Apr 4 04:49:42 CDT 2017,
> kernel_version=3.10.0-514.6.1.el7.jump3.x86_64,mem_swap_kb=0,
> mem_total_kb=263842036,num_handles=1,os=Linux,pid=14723,
> zone_id=b0634f34-67e2-4b44-ab00-5282f1e2cd83,zone_name=carf01,
> zonegroup_id=8207fcf5-7bd3-43df-ab5a-ea17e5949eec,zonegroup_name=us}
>
> 2017-08-29 17:06:24.118925 7ffdc501a700  1 rgw realm reloader: Finishing
> initialization of new store
>
> 2017-08-29 17:06:24.118927 7ffdc501a700  1 rgw realm reloader: - REST
> subsystem init
>
> 2017-08-29 17:06:24.118943 7ffdc501a700  1 rgw realm reloader: - user
> subsystem init
>
> 2017-08-29 17:06:24.118947 7ffdc501a700  1 rgw realm reloader: - user
> subsystem init
>
> 2017-08-29 17:06:24.118950 7ffdc501a700  1 rgw realm reloader: - usage
> subsystem init
>
> 2017-08-29 17:06:24.118985 7ffdc501a700  1 rgw realm reloader: Resuming
> frontends with new realm configuration.
>
> 2017-08-29 17:06:24.119018 7fffad3ea700  1 ====== starting new request
> req=0x7fffad3e4190 =====
>
> 2017-08-29 17:06:24.119039 7fffacbe9700  1 ====== starting new request
> req=0x7fffacbe3190 =====
>
> 2017-08-29 17:06:24.120163 7fffacbe9700  1 ====== req done
> req=0x7fffacbe3190 op status=0 http_status=403 ======
>
> 2017-08-29 17:06:24.120200 7fffad3ea700  1 ====== req done
> req=0x7fffad3e4190 op status=0 http_status=403 ======
>
> Any help understanding how to fix this would be greatly appreciated!
>
> -Bryan
>
> ________________________________
>
> Note: This email is for the confidential use of the named addressee(s)
> only and may contain proprietary, confidential or privileged information.
> If you are not the intended recipient, you are hereby notified that any
> review, dissemination or copying of this email is strictly prohibited, and
> to please notify the sender immediately and destroy this email and any
> attachments. Email transmission cannot be guaranteed to be secure or
> error-free. The Company, therefore, does not make any guarantees as to the
> completeness or accuracy of this email or any attachments.
> This email is for informational purposes only and does not constitute a
> recommendation, offer, request or solicitation of any kind to buy, sell,
> subscribe, redeem or perform any type of transaction of a financial
> product.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
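For readers hitting the same trace: the sequence in the log above ("Frontends paused" ... "Store closed" ... "Creating new store" ... "Resuming frontends") is the realm reloader cycle that pre-12.2.0 radosgw ran in response to SIGHUP. A minimal Python sketch of that signal-driven reload pattern (illustrative only, not radosgw code; the handler body and names are hypothetical):

```python
import os
import signal

reload_count = 0  # how many times our stand-in "realm reloader" ran

def on_hup(signum, frame):
    # Pre-12.2.0 radosgw reacted to SIGHUP roughly along these lines:
    # pause frontends, close the store, create a new store, resume.
    global reload_count
    reload_count += 1

# Install the handler, then simulate an external sender delivering
# SIGHUP to this process (e.g. a log-rotation postrotate script).
signal.signal(signal.SIGHUP, on_hup)
os.kill(os.getpid(), signal.SIGHUP)

print("reloads triggered:", reload_count)
```

If anything on the host sends HUP to radosgw (a logrotate postrotate rule is a plausible candidate worth checking, e.g. under /etc/logrotate.d/), each delivery would kick off one such reload cycle while requests are in flight.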